diff --git a/doc/locales/fr/LC_MESSAGES/framework-docs.po b/doc/locales/fr/LC_MESSAGES/framework-docs.po index 80222e5409d..67edf687bbb 100644 --- a/doc/locales/fr/LC_MESSAGES/framework-docs.po +++ b/doc/locales/fr/LC_MESSAGES/framework-docs.po @@ -3,7 +3,7 @@ msgid "" msgstr "" "Project-Id-Version: Flower Docs\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2024-03-15 14:23+0000\n" +"POT-Creation-Date: 2024-05-28 11:47+0200\n" "PO-Revision-Date: 2023-09-05 17:54+0000\n" "Last-Translator: Charles Beauville \n" "Language: fr\n" @@ -13,7 +13,7 @@ msgstr "" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" -"Generated-By: Babel 2.14.0\n" +"Generated-By: Babel 2.15.0\n" #: ../../source/contributor-explanation-architecture.rst:2 msgid "Flower Architecture" @@ -60,11 +60,11 @@ msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:4 msgid "" "Flower provides pre-made docker images on `Docker Hub " -"`_ that include all necessary " -"dependencies for running the server. You can also build your own custom " -"docker images from scratch with a different version of Python or Ubuntu " -"if that is what you need. In this guide, we will explain what images " -"exist and how to build them locally." +"`_ that include all necessary dependencies" +" for running the SuperLink. You can also build your own custom docker " +"images from scratch with a different version of Python or Ubuntu if that " +"is what you need. In this guide, we will explain what images exist and " +"how to build them locally." msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:9 @@ -79,10 +79,12 @@ msgid "Clone the flower repository." msgstr "**Fourche le dépôt de Flower**" #: ../../source/contributor-how-to-build-docker-images.rst:17 +#: ../../source/how-to-run-flower-using-docker.rst:144 msgid "Verify the Docker daemon is running." msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:19 +#: ../../source/how-to-run-flower-using-docker.rst:146 msgid "" "Please follow the first section on :doc:`Run Flower using Docker ` which covers this step in more detail." @@ -90,12 +92,11 @@ msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:23 msgid "" -"Currently, Flower provides two images, a base image and a server image. " -"There will also be a client image soon. The base image, as the name " -"suggests, contains basic dependencies that both the server and the client" -" need. This includes system dependencies, Python and Python tools. The " -"server image is based on the base image, but it additionally installs the" -" Flower server using ``pip``." +"Currently, Flower provides two images, a ``base`` image and a " +"``superlink`` image. The base image, as the name suggests, contains basic" +" dependencies that the SuperLink needs. This includes system " +"dependencies, Python and Python tools. The SuperLink image is based on " +"the base image, but it additionally installs the SuperLink using ``pip``." msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:28 @@ -107,13 +108,13 @@ msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:31 msgid "" -"Both, base and server image are configured via build arguments. Through " -"build arguments, we can make our build more flexible. For example, in the" -" base image, we can specify the version of Python to install using the " -"``PYTHON_VERSION`` build argument. 
Some of the build arguments have " -"default values, others must be specified when building the image. All " -"available build arguments for each image are listed in one of the tables " -"below." +"Both, base and SuperLink image are configured via build arguments. " +"Through build arguments, we can make our build more flexible. For " +"example, in the base image, we can specify the version of Python to " +"install using the ``PYTHON_VERSION`` build argument. Some of the build " +"arguments have default values, others must be specified when building the" +" image. All available build arguments for each image are listed in one of" +" the tables below." msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:38 @@ -146,6 +147,7 @@ msgid "Example" msgstr "Exemples de PyTorch" #: ../../source/contributor-how-to-build-docker-images.rst:48 +#: ../../source/contributor-how-to-build-docker-images.rst:94 #, fuzzy msgid "``PYTHON_VERSION``" msgstr "Version Python" @@ -157,7 +159,7 @@ msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:50 #: ../../source/contributor-how-to-build-docker-images.rst:54 #: ../../source/contributor-how-to-build-docker-images.rst:58 -#: ../../source/contributor-how-to-build-docker-images.rst:100 +#: ../../source/contributor-how-to-build-docker-images.rst:108 #, fuzzy msgid "Yes" msgstr "Types" @@ -194,6 +196,7 @@ msgid "``69.0.2``" msgstr "``1.0.0b0``" #: ../../source/contributor-how-to-build-docker-images.rst:60 +#: ../../source/contributor-how-to-build-docker-images.rst:98 msgid "``UBUNTU_VERSION``" msgstr "" @@ -220,7 +223,7 @@ msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:80 #, fuzzy -msgid "Building the server image" +msgid "Building the SuperLink image" msgstr "Démarrer le serveur" #: ../../source/contributor-how-to-build-docker-images.rst:90 @@ -232,57 +235,74 @@ msgid "The repository name of the base image." msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:92 -msgid "Defaults to ``flwr/server``." +msgid "Defaults to ``flwr/base``." msgstr "" -#: ../../source/contributor-how-to-build-docker-images.rst:94 -msgid "``BASE_IMAGE_TAG``" +#: ../../source/contributor-how-to-build-docker-images.rst:95 +#, fuzzy +msgid "The Python version of the base image." +msgstr "Évaluer la réponse d'un client." + +#: ../../source/contributor-how-to-build-docker-images.rst:96 +msgid "Defaults to ``py3.11``." msgstr "" -#: ../../source/contributor-how-to-build-docker-images.rst:95 -msgid "The image tag of the base image." +#: ../../source/contributor-how-to-build-docker-images.rst:99 +#, fuzzy +msgid "The Ubuntu version of the base image." +msgstr "Chargement des données" + +#: ../../source/contributor-how-to-build-docker-images.rst:100 +msgid "Defaults to ``ubuntu22.04``." msgstr "" -#: ../../source/contributor-how-to-build-docker-images.rst:96 -msgid "Defaults to ``py3.11-ubuntu22.04``." +#: ../../source/contributor-how-to-build-docker-images.rst:102 +msgid "``FLWR_PACKAGE``" msgstr "" -#: ../../source/contributor-how-to-build-docker-images.rst:98 +#: ../../source/contributor-how-to-build-docker-images.rst:103 +msgid "The PyPI package to install." +msgstr "" + +#: ../../source/contributor-how-to-build-docker-images.rst:104 +#, fuzzy +msgid "Defaults to ``flwr``." 
+msgstr "Flux de travail" + +#: ../../source/contributor-how-to-build-docker-images.rst:106 msgid "``FLWR_VERSION``" msgstr "" -#: ../../source/contributor-how-to-build-docker-images.rst:99 +#: ../../source/contributor-how-to-build-docker-images.rst:107 msgid "Version of Flower to be installed." msgstr "" -#: ../../source/contributor-how-to-build-docker-images.rst:101 +#: ../../source/contributor-how-to-build-docker-images.rst:109 #, fuzzy -msgid "``1.7.0``" +msgid "``1.8.0``" msgstr "``1.0.0b0``" -#: ../../source/contributor-how-to-build-docker-images.rst:103 +#: ../../source/contributor-how-to-build-docker-images.rst:112 msgid "" -"The following example creates a server image with the official Flower " -"base image py3.11-ubuntu22.04 and Flower 1.7.0:" +"The following example creates a SuperLink image with the official Flower " +"base image py3.11-ubuntu22.04 and Flower 1.8.0:" msgstr "" -#: ../../source/contributor-how-to-build-docker-images.rst:114 +#: ../../source/contributor-how-to-build-docker-images.rst:122 msgid "" -"The name of image is ``flwr_server`` and the tag ``0.1.0``. Remember that" -" the build arguments as well as the name and tag can be adapted to your " -"needs. These values serve as examples only." +"The name of image is ``flwr_superlink`` and the tag ``0.1.0``. Remember " +"that the build arguments as well as the name and tag can be adapted to " +"your needs. These values serve as examples only." msgstr "" -#: ../../source/contributor-how-to-build-docker-images.rst:117 +#: ../../source/contributor-how-to-build-docker-images.rst:125 msgid "" "If you want to use your own base image instead of the official Flower " -"base image, all you need to do is set the ``BASE_REPOSITORY`` and " -"``BASE_IMAGE_TAG`` build arguments. The value of ``BASE_REPOSITORY`` must" -" match the name of your image and the value of ``BASE_IMAGE_TAG`` must " -"match the tag of your image." +"base image, all you need to do is set the ``BASE_REPOSITORY``, " +"``PYTHON_VERSION`` and ``UBUNTU_VERSION`` build arguments." 
msgstr "" -#: ../../source/contributor-how-to-build-docker-images.rst:131 +#: ../../source/contributor-how-to-build-docker-images.rst:138 msgid "After creating the image, we can test whether the image is working:" msgstr "" @@ -829,17 +849,19 @@ msgid "Open Jupyter Notebooks on Google Colab" msgstr "Ouvre les carnets Jupyter sur Google Colab" #: ../../source/contributor-how-to-install-development-versions.rst:51 +#, fuzzy msgid "" -"Open the notebook ``doc/source/tutorial-get-started-with-flower-" +"Open the notebook ``doc/source/tutorial-series-get-started-with-flower-" "pytorch.ipynb``:" msgstr "" "Ouvrir le notebook ``doc/source/tutorial/Flower-1-Intro-to-FL-" "PyTorch.ipynb`` :" #: ../../source/contributor-how-to-install-development-versions.rst:53 +#, fuzzy msgid "" "https://colab.research.google.com/github/adap/flower/blob/main/doc/source" -"/tutorial-get-started-with-flower-pytorch.ipynb" +"/tutorial-series-get-started-with-flower-pytorch.ipynb" msgstr "" "https://colab.research.google.com/github/adap/flower/blob/main/doc/source" "/tutorial-get-started-with-flower-pytorch.ipynb" @@ -854,9 +876,10 @@ msgstr "" " :" #: ../../source/contributor-how-to-install-development-versions.rst:57 +#, fuzzy msgid "" "https://colab.research.google.com/github/adap/flower/blob/branch-" -"name/doc/source/tutorial-get-started-with-flower-pytorch.ipynb" +"name/doc/source/tutorial-series-get-started-with-flower-pytorch.ipynb" msgstr "" "https://colab.research.google.com/github/adap/flower/blob/branch-" "name/doc/source/tutorial-get-started-with-flower-pytorch.ipynb" @@ -1115,10 +1138,19 @@ msgstr "" "nous recommandons `Python 3.8 `_." #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:14 -msgid "Virutualenv with Pyenv/Virtualenv" +msgid "" +"Due to a known incompatibility with `ray " +"`_, we currently recommend utilizing at " +"most `Python 3.11 `_ for running Flower " +"simulations." +msgstr "" + +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:19 +#, fuzzy +msgid "Virtualenv with Pyenv/Virtualenv" msgstr "Virutualenv avec Pyenv/Virtualenv" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:16 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:21 msgid "" "One of the recommended virtual environment is `pyenv " "`_/`virtualenv `_ pour plus de " "détails." -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:18 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:23 #, fuzzy msgid "" "Once Pyenv is set up, you can use it to install `Python Version 3.10 " @@ -1140,19 +1172,19 @@ msgstr "" "Une fois Pyenv mis en place, tu peux l'utiliser pour installer `Python " "Version 3.7 `_ ou supérieure :" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:24 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:29 msgid "Create the virtualenv with:" msgstr "Crée le virtualenv avec :" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:31 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:36 msgid "Activate the virtualenv by running the following command:" msgstr "Active la virtualenv en exécutant la commande suivante :" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:39 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:44 msgid "Virtualenv with Poetry" msgstr "Virtualenv et la poésie" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:41 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:46 msgid "" "The Flower examples are based on `Poetry `_ to manage dependencies. 
After installing Poetry you " @@ -1162,7 +1194,7 @@ msgstr "" "poetry.org/docs/>`_ pour gérer les dépendances. Après l'installation de " "Poetry, il te suffit de créer un environnement virtuel avec :" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:47 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:52 msgid "" "If you open a new terminal you can activate the previously created " "virtual environment with the following command:" @@ -1170,11 +1202,11 @@ msgstr "" "Si tu ouvres un nouveau terminal, tu peux activer l'environnement virtuel" " précédemment créé avec la commande suivante :" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:55 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:60 msgid "Virtualenv with Anaconda" msgstr "Virtualenv avec Anaconda" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:57 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:62 msgid "" "If you prefer to use Anaconda for your virtual environment then install " "and setup the `conda `_. Après l'avoir configuré, tu peux créer un " "environnement virtuel avec :" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:63 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:68 msgid "and activate the virtual environment with:" msgstr "et active l'environnement virtuel avec :" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:71 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:76 msgid "And then?" msgstr "Et ensuite ?" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:73 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:78 msgid "" "As soon as you created your virtual environment you clone one of the " "`Flower examples `_." @@ -1222,7 +1254,7 @@ msgstr "" "reStructuredText (fichiers `.rst`) et Markdown (fichiers `.md`)." #: ../../source/contributor-how-to-write-documentation.rst:10 -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:142 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:169 #, fuzzy msgid "" "Note that, in order to build the documentation locally (with ``poetry run" @@ -1804,7 +1836,7 @@ msgstr "" msgid "Creating and merging a pull request (PR)" msgstr "Créer et fusionner une pull request (PR)" -#: ../../source/contributor-tutorial-contribute-on-github.rst:205 +#: ../../source/contributor-tutorial-contribute-on-github.rst:206 msgid "**Create the PR**" msgstr "**Créer le PR**" @@ -1848,6 +1880,13 @@ msgstr "" #: ../../source/contributor-tutorial-contribute-on-github.rst:193 msgid "" +"The title should be changed to adhere to the :ref:`pr_title_format` " +"guidelines, otherwise it won't be possible to merge the PR. So in this " +"case, a correct title might be ``docs(framework:skip) Fix typos``." +msgstr "" + +#: ../../source/contributor-tutorial-contribute-on-github.rst:196 +msgid "" "The input box in the middle is there for you to describe what your PR " "does and to link it to existing issues. We have placed comments (that " "won't be rendered once the PR is opened) to guide you through the " @@ -1858,16 +1897,11 @@ msgstr "" "commentaires (qui ne seront pas rendus une fois le PR ouvert) pour te " "guider tout au long du processus." -#: ../../source/contributor-tutorial-contribute-on-github.rst:196 -msgid "" -"It is important to follow the instructions described in comments. For " -"instance, in order to not break how our changelog system works, you " -"should read the information above the ``Changelog entry`` section " -"carefully. 
You can also checkout some examples and details in the " -":ref:`changelogentry` appendix." +#: ../../source/contributor-tutorial-contribute-on-github.rst:199 +msgid "It is important to follow the instructions described in comments." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:200 +#: ../../source/contributor-tutorial-contribute-on-github.rst:201 msgid "" "At the bottom you will find the button to open the PR. This will notify " "reviewers that a new PR has been opened and that they should look over it" @@ -1877,7 +1911,7 @@ msgstr "" "qui informera les réviseurs qu'un nouveau PR a été ouvert et qu'ils " "doivent le consulter pour le fusionner ou demander des modifications." -#: ../../source/contributor-tutorial-contribute-on-github.rst:203 +#: ../../source/contributor-tutorial-contribute-on-github.rst:204 msgid "" "If your PR is not yet ready for review, and you don't want to notify " "anyone, you have the option to create a draft pull request:" @@ -1886,11 +1920,11 @@ msgstr "" " personne, tu as la possibilité de créer un brouillon de demande de " "traction :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:208 +#: ../../source/contributor-tutorial-contribute-on-github.rst:209 msgid "**Making new changes**" msgstr "**Faire de nouveaux changements**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:208 +#: ../../source/contributor-tutorial-contribute-on-github.rst:209 msgid "" "Once the PR has been opened (as draft or not), you can still push new " "commits to it the same way we did before, by making changes to the branch" @@ -1900,11 +1934,11 @@ msgstr "" "toujours y pousser de nouveaux commits de la même manière qu'auparavant, " "en apportant des modifications à la branche associée au PR." -#: ../../source/contributor-tutorial-contribute-on-github.rst:230 +#: ../../source/contributor-tutorial-contribute-on-github.rst:231 msgid "**Review the PR**" msgstr "**Review the PR**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:211 +#: ../../source/contributor-tutorial-contribute-on-github.rst:212 msgid "" "Once the PR has been opened or once the draft PR has been marked as " "ready, a review from code owners will be automatically requested:" @@ -1913,7 +1947,7 @@ msgstr "" " étant prêt, une révision des propriétaires de code sera automatiquement " "demandée :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:215 +#: ../../source/contributor-tutorial-contribute-on-github.rst:216 msgid "" "Code owners will then look into the code, ask questions, request changes " "or validate the PR." @@ -1921,11 +1955,11 @@ msgstr "" "Les propriétaires du code vont alors se pencher sur le code, poser des " "questions, demander des modifications ou valider le RP." -#: ../../source/contributor-tutorial-contribute-on-github.rst:217 +#: ../../source/contributor-tutorial-contribute-on-github.rst:218 msgid "Merging will be blocked if there are ongoing requested changes." msgstr "La fusion sera bloquée s'il y a des changements demandés en cours." 
-#: ../../source/contributor-tutorial-contribute-on-github.rst:221 +#: ../../source/contributor-tutorial-contribute-on-github.rst:222 msgid "" "To resolve them, just push the necessary changes to the branch associated" " with the PR:" @@ -1933,11 +1967,11 @@ msgstr "" "Pour les résoudre, il suffit de pousser les changements nécessaires vers " "la branche associée au PR :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:225 +#: ../../source/contributor-tutorial-contribute-on-github.rst:226 msgid "And resolve the conversation:" msgstr "Et résous la conversation :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:229 +#: ../../source/contributor-tutorial-contribute-on-github.rst:230 msgid "" "Once all the conversations have been resolved, you can re-request a " "review." @@ -1945,11 +1979,11 @@ msgstr "" "Une fois que toutes les conversations ont été résolues, tu peux " "redemander un examen." -#: ../../source/contributor-tutorial-contribute-on-github.rst:250 +#: ../../source/contributor-tutorial-contribute-on-github.rst:251 msgid "**Once the PR is merged**" msgstr "**Une fois que le PR est fusionné**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:233 +#: ../../source/contributor-tutorial-contribute-on-github.rst:234 msgid "" "If all the automatic tests have passed and reviewers have no more changes" " to request, they can approve the PR and merge it." @@ -1958,7 +1992,7 @@ msgstr "" " de modifications à demander, ils peuvent approuver le PR et le " "fusionner." -#: ../../source/contributor-tutorial-contribute-on-github.rst:237 +#: ../../source/contributor-tutorial-contribute-on-github.rst:238 msgid "" "Once it is merged, you can delete the branch on GitHub (a button should " "appear to do so) and also delete it locally by doing:" @@ -1967,19 +2001,19 @@ msgstr "" "(un bouton devrait apparaître pour le faire) et aussi la supprimer " "localement en faisant :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:244 +#: ../../source/contributor-tutorial-contribute-on-github.rst:245 msgid "Then you should update your forked repository by doing:" msgstr "Ensuite, tu dois mettre à jour ton dépôt forké en faisant :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:253 +#: ../../source/contributor-tutorial-contribute-on-github.rst:254 msgid "Example of first contribution" msgstr "Exemple de première contribution" -#: ../../source/contributor-tutorial-contribute-on-github.rst:256 +#: ../../source/contributor-tutorial-contribute-on-github.rst:257 msgid "Problem" msgstr "Problème" -#: ../../source/contributor-tutorial-contribute-on-github.rst:258 +#: ../../source/contributor-tutorial-contribute-on-github.rst:259 #, fuzzy msgid "" "For our documentation, we've started to use the `Diàtaxis framework " @@ -1988,7 +2022,7 @@ msgstr "" "Pour notre documentation, nous avons commencé à utiliser le cadre " "`Diàtaxis `_." -#: ../../source/contributor-tutorial-contribute-on-github.rst:260 +#: ../../source/contributor-tutorial-contribute-on-github.rst:261 #, fuzzy msgid "" "Our \"How to\" guides should have titles that continue the sentence \"How" @@ -1998,7 +2032,7 @@ msgstr "" "la phrase \"Comment faire pour...\", par exemple, \"Comment passer à " "Flower 1.0\"." -#: ../../source/contributor-tutorial-contribute-on-github.rst:262 +#: ../../source/contributor-tutorial-contribute-on-github.rst:263 msgid "" "Most of our guides do not follow this new format yet, and changing their " "title is (unfortunately) more involved than one might think." 
@@ -2007,7 +2041,7 @@ msgstr "" "changer leur titre est (malheureusement) plus compliqué qu'on ne le " "pense." -#: ../../source/contributor-tutorial-contribute-on-github.rst:264 +#: ../../source/contributor-tutorial-contribute-on-github.rst:265 #, fuzzy msgid "" "This issue is about changing the title of a doc from present continuous " @@ -2016,7 +2050,7 @@ msgstr "" "Cette question porte sur le changement du titre d'un document du présent " "continu au présent simple." -#: ../../source/contributor-tutorial-contribute-on-github.rst:266 +#: ../../source/contributor-tutorial-contribute-on-github.rst:267 #, fuzzy msgid "" "Let's take the example of \"Saving Progress\" which we changed to \"Save " @@ -2026,21 +2060,21 @@ msgstr "" "remplacé par \"Sauvegarder la progression\". Est-ce que cela passe notre " "contrôle ?" -#: ../../source/contributor-tutorial-contribute-on-github.rst:268 +#: ../../source/contributor-tutorial-contribute-on-github.rst:269 #, fuzzy msgid "Before: \"How to saving progress\" ❌" msgstr "Avant : \"Comment sauvegarder les progrès\" ❌" -#: ../../source/contributor-tutorial-contribute-on-github.rst:270 +#: ../../source/contributor-tutorial-contribute-on-github.rst:271 #, fuzzy msgid "After: \"How to save progress\" ✅" msgstr "Après : \"Comment sauvegarder la progression\" ✅" -#: ../../source/contributor-tutorial-contribute-on-github.rst:273 +#: ../../source/contributor-tutorial-contribute-on-github.rst:274 msgid "Solution" msgstr "Solution" -#: ../../source/contributor-tutorial-contribute-on-github.rst:275 +#: ../../source/contributor-tutorial-contribute-on-github.rst:276 #, fuzzy msgid "" "This is a tiny change, but it'll allow us to test your end-to-end setup. " @@ -2050,12 +2084,12 @@ msgstr "" "configuration de bout en bout. Après avoir cloné et configuré le repo " "Flower, voici ce que tu dois faire :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:277 +#: ../../source/contributor-tutorial-contribute-on-github.rst:278 #, fuzzy msgid "Find the source file in ``doc/source``" msgstr "Trouve le fichier source dans `doc/source`" -#: ../../source/contributor-tutorial-contribute-on-github.rst:278 +#: ../../source/contributor-tutorial-contribute-on-github.rst:279 #, fuzzy msgid "" "Make the change in the ``.rst`` file (beware, the dashes under the title " @@ -2064,7 +2098,7 @@ msgstr "" "Effectue la modification dans le fichier `.rst` (attention, les tirets " "sous le titre doivent être de la même longueur que le titre lui-même)" -#: ../../source/contributor-tutorial-contribute-on-github.rst:279 +#: ../../source/contributor-tutorial-contribute-on-github.rst:280 #, fuzzy msgid "" "Build the docs and `check the result `_" -#: ../../source/contributor-tutorial-contribute-on-github.rst:282 +#: ../../source/contributor-tutorial-contribute-on-github.rst:283 msgid "Rename file" msgstr "Renommer le fichier" -#: ../../source/contributor-tutorial-contribute-on-github.rst:284 +#: ../../source/contributor-tutorial-contribute-on-github.rst:285 msgid "" "You might have noticed that the file name still reflects the old wording." " If we just change the file, then we break all existing links to it - it " @@ -2090,22 +2124,22 @@ msgstr "" "important** d'éviter cela, car briser des liens peut nuire à notre " "classement dans les moteurs de recherche." 
-#: ../../source/contributor-tutorial-contribute-on-github.rst:287 +#: ../../source/contributor-tutorial-contribute-on-github.rst:288 #, fuzzy msgid "Here's how to change the file name:" msgstr "Voici comment changer le nom du fichier :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:289 +#: ../../source/contributor-tutorial-contribute-on-github.rst:290 #, fuzzy msgid "Change the file name to ``save-progress.rst``" msgstr "Change le nom du fichier en `save-progress.rst`" -#: ../../source/contributor-tutorial-contribute-on-github.rst:290 +#: ../../source/contributor-tutorial-contribute-on-github.rst:291 #, fuzzy msgid "Add a redirect rule to ``doc/source/conf.py``" msgstr "Ajouter une règle de redirection à `doc/source/conf.py`" -#: ../../source/contributor-tutorial-contribute-on-github.rst:292 +#: ../../source/contributor-tutorial-contribute-on-github.rst:293 #, fuzzy msgid "" "This will cause a redirect from ``saving-progress.html`` to ``save-" @@ -2114,11 +2148,11 @@ msgstr "" "Cela entraînera une redirection de `saving-progress.html` vers `save-" "progress.html`, les anciens liens continueront à fonctionner." -#: ../../source/contributor-tutorial-contribute-on-github.rst:295 +#: ../../source/contributor-tutorial-contribute-on-github.rst:296 msgid "Apply changes in the index file" msgstr "Applique les changements dans le fichier d'index" -#: ../../source/contributor-tutorial-contribute-on-github.rst:297 +#: ../../source/contributor-tutorial-contribute-on-github.rst:298 #, fuzzy msgid "" "For the lateral navigation bar to work properly, it is very important to " @@ -2129,16 +2163,16 @@ msgstr "" "très important de mettre également à jour le fichier `index.rst`. C'est " "là que nous définissons toute l'arborescence de la barre de navigation." -#: ../../source/contributor-tutorial-contribute-on-github.rst:300 +#: ../../source/contributor-tutorial-contribute-on-github.rst:301 #, fuzzy msgid "Find and modify the file name in ``index.rst``" msgstr "Trouve et modifie le nom du fichier dans `index.rst`" -#: ../../source/contributor-tutorial-contribute-on-github.rst:303 +#: ../../source/contributor-tutorial-contribute-on-github.rst:304 msgid "Open PR" msgstr "Open PR" -#: ../../source/contributor-tutorial-contribute-on-github.rst:305 +#: ../../source/contributor-tutorial-contribute-on-github.rst:306 #, fuzzy msgid "" "Commit the changes (commit messages are always imperative: \"Do " @@ -2147,120 +2181,27 @@ msgstr "" "Valide les modifications (les messages de validation sont toujours " "impératifs : \"Fais quelque chose\", dans ce cas \"Modifie...\")" -#: ../../source/contributor-tutorial-contribute-on-github.rst:306 +#: ../../source/contributor-tutorial-contribute-on-github.rst:307 msgid "Push the changes to your fork" msgstr "Transmets les changements à ta fourchette" -#: ../../source/contributor-tutorial-contribute-on-github.rst:307 -msgid "Open a PR (as shown above)" -msgstr "Ouvre un RP (comme indiqué ci-dessus)" - #: ../../source/contributor-tutorial-contribute-on-github.rst:308 +msgid "" +"Open a PR (as shown above) with title ``docs(framework) Update how-to " +"guide title``" +msgstr "" + +#: ../../source/contributor-tutorial-contribute-on-github.rst:309 msgid "Wait for it to be approved!" msgstr "Attends qu'elle soit approuvée !" -#: ../../source/contributor-tutorial-contribute-on-github.rst:309 +#: ../../source/contributor-tutorial-contribute-on-github.rst:310 msgid "Congrats! 🥳 You're now officially a Flower contributor!" 
msgstr "" "Félicitations 🥳 Tu es désormais officiellement une contributrice de " "Flower !" -#: ../../source/contributor-tutorial-contribute-on-github.rst:313 -msgid "How to write a good PR title" -msgstr "Comment écrire un bon titre de PR" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:315 -msgid "" -"A well-crafted PR title helps team members quickly understand the purpose" -" and scope of the changes being proposed. Here's a guide to help you " -"write a good GitHub PR title:" -msgstr "" -"Un titre de PR bien choisi permet aux autres développeurs de rapidement " -"comprendre l'intérêt et le scope des changements proposés. Voici un guide" -" pour vous aider à écrire des bons titres de PR :" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:317 -msgid "" -"1. Be Clear and Concise: Provide a clear summary of the changes in a " -"concise manner. 1. Use Actionable Verbs: Start with verbs like \"Add,\" " -"\"Update,\" or \"Fix\" to indicate the purpose. 1. Include Relevant " -"Information: Mention the affected feature or module for context. 1. Keep " -"it Short: Avoid lengthy titles for easy readability. 1. Use Proper " -"Capitalization and Punctuation: Follow grammar rules for clarity." -msgstr "" -"1. Soyez clair et concis : Donnez un résumé clair des changements de " -"manière concise. 1. Utilisez des verbes actionnables : Commencez par des " -"verbes comme \"Add\", \"Update\", ou \"Fix\" pour indiquer le but. 1. " -"Inclure des renseignements pertinents : Mentionner la caractéristique ou " -"le module concerné pour le contexte. 1. Gardez le court : Évitez les " -"longs titres pour une lisibilité facile. 1. Utiliser une bonne " -"capitalisation et une ponctuation : Suivre les règles de grammaire pour " -"la clarté." - -#: ../../source/contributor-tutorial-contribute-on-github.rst:323 -msgid "" -"Let's start with a few examples for titles that should be avoided because" -" they do not provide meaningful information:" -msgstr "" -"Commençons par quelques exemples de titres qui devraient être évités " -"parce qu'ils ne fournissent pas d'information significative :" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:325 -msgid "Implement Algorithm" -msgstr "Implement Algorithm" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:326 -msgid "Database" -msgstr "Database" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:327 -msgid "Add my_new_file.py to codebase" -msgstr "Add my_new_file.py to codebase" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:328 -msgid "Improve code in module" -msgstr "Improve code in module" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:329 -msgid "Change SomeModule" -msgstr "Change SomeModule" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:331 -msgid "" -"Here are a few positive examples which provide helpful information " -"without repeating how they do it, as that is already visible in the " -"\"Files changed\" section of the PR:" -msgstr "" -"Voici quelques bons exemples qui fournissent de l'information utile sans " -"répéter comment ils le font, comme cela est déjà visible dans la section " -"\"Files changed\" de la PR :" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:333 -msgid "Update docs banner to mention Flower Summit 2023" -msgstr "Update docs banner to mention Flower Summit 2023" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:334 -msgid "Remove unnecessary XGBoost dependency" -msgstr "Remove 
unnecessary XGBoost dependency" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:335 -msgid "Remove redundant attributes in strategies subclassing FedAvg" -msgstr "Remove redundant attributes in strategies subclassing FedAvg" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:336 -#, fuzzy -msgid "Add CI job to deploy the staging system when the ``main`` branch changes" -msgstr "Add CI job to deploy the staging system when the `main` branch changes" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:337 -msgid "" -"Add new amazing library which will be used to improve the simulation " -"engine" -msgstr "" -"Add new amazing library which will be used to improve the simulation " -"engine" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:341 +#: ../../source/contributor-tutorial-contribute-on-github.rst:314 #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:548 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:946 #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:727 @@ -2269,7 +2210,7 @@ msgstr "" msgid "Next steps" msgstr "Prochaines étapes" -#: ../../source/contributor-tutorial-contribute-on-github.rst:343 +#: ../../source/contributor-tutorial-contribute-on-github.rst:316 msgid "" "Once you have made your first PR, and want to contribute more, be sure to" " check out the following :" @@ -2277,7 +2218,7 @@ msgstr "" "Une fois que tu auras fait ton premier RP, et que tu voudras contribuer " "davantage, ne manque pas de consulter les sites suivants :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:345 +#: ../../source/contributor-tutorial-contribute-on-github.rst:318 #, fuzzy msgid "" ":doc:`Good first contributions `_, où vous devriez " "particulièrement regarder les contributions :code:`baselines`." -#: ../../source/contributor-tutorial-contribute-on-github.rst:349 +#: ../../source/contributor-tutorial-contribute-on-github.rst:322 #: ../../source/fed/0000-20200102-fed-template.md:60 msgid "Appendix" msgstr "Annexe" -#: ../../source/contributor-tutorial-contribute-on-github.rst:354 -#, fuzzy -msgid "Changelog entry" -msgstr "Changelog" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:356 -msgid "" -"When opening a new PR, inside its description, there should be a " -"``Changelog entry`` header." -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:358 -msgid "" -"Above this header you should see the following comment that explains how " -"to write your changelog entry:" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:360 -msgid "" -"Inside the following 'Changelog entry' section, you should put the " -"description of your changes that will be added to the changelog alongside" -" your PR title." +#: ../../source/contributor-tutorial-contribute-on-github.rst:327 +msgid "PR title format" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:363 -msgid "" -"If the section is completely empty (without any token) or non-existent, " -"the changelog will just contain the title of the PR for the changelog " -"entry, without any description." 
+#: ../../source/contributor-tutorial-contribute-on-github.rst:329 +msgid "We enforce the following PR title format:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:366 +#: ../../source/contributor-tutorial-contribute-on-github.rst:335 msgid "" -"If the section contains some text other than tokens, it will use it to " -"add a description to the change." +"(or ``(:skip) `` to ignore the PR in the " +"changelog)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:368 +#: ../../source/contributor-tutorial-contribute-on-github.rst:337 msgid "" -"If the section contains one of the following tokens it will ignore any " -"other text and put the PR under the corresponding section of the " -"changelog:" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:370 -msgid " is for classifying a PR as a general improvement." -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:372 -msgid " is to not add the PR to the changelog" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:374 -msgid " is to add a general baselines change to the PR" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:376 -msgid " is to add a general examples change to the PR" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:378 -msgid " is to add a general sdk change to the PR" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:380 -msgid " is to add a general simulations change to the PR" +"Where ```` needs to be in ``{ci, fix, feat, docs, refactor, " +"break}``, ```` should be in ``{framework, baselines, datasets, " +"examples, or '*' when modifying multiple projects which requires the " +"':skip' flag to be used}``, and ```` starts with a capitalised " +"verb in the imperative mood." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:382 -msgid "Note that only one token should be used." -msgstr "" +#: ../../source/contributor-tutorial-contribute-on-github.rst:341 +#, fuzzy +msgid "Valid examples:" +msgstr "Exemples de PyTorch" -#: ../../source/contributor-tutorial-contribute-on-github.rst:384 -msgid "" -"Its content must have a specific format. We will break down what each " -"possibility does:" +#: ../../source/contributor-tutorial-contribute-on-github.rst:343 +msgid "``feat(framework) Add flwr build CLI command``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:386 -msgid "" -"If the ``### Changelog entry`` section contains nothing or doesn't exist," -" the following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:344 +msgid "``refactor(examples:skip) Improve quickstart-pytorch logging``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:390 -msgid "" -"If the ``### Changelog entry`` section contains a description (and no " -"token), the following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:345 +msgid "``ci(*:skip) Enforce PR title format``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:396 -msgid "" -"If the ``### Changelog entry`` section contains ````, nothing will " -"change in the changelog." 
-msgstr "" +#: ../../source/contributor-tutorial-contribute-on-github.rst:347 +#, fuzzy +msgid "Invalid examples:" +msgstr "Exemples de PyTorch" -#: ../../source/contributor-tutorial-contribute-on-github.rst:398 -msgid "" -"If the ``### Changelog entry`` section contains ````, the " -"following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:349 +msgid "``feat(framework): Add flwr build CLI command`` (extra ``:``)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:402 +#: ../../source/contributor-tutorial-contribute-on-github.rst:350 msgid "" -"If the ``### Changelog entry`` section contains ````, the " -"following text will be added to the changelog::" +"``feat(*) Add flwr build CLI command`` (missing ``skip`` flag along with " +"``*``)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:406 -msgid "" -"If the ``### Changelog entry`` section contains ````, the " -"following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:351 +msgid "``feat(skip) Add flwr build CLI command`` (missing ````)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:410 -msgid "" -"If the ``### Changelog entry`` section contains ````, the following " -"text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:352 +msgid "``feat(framework) add flwr build CLI command`` (non capitalised verb)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:414 -msgid "" -"If the ``### Changelog entry`` section contains ````, the " -"following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:353 +msgid "``feat(framework) Add flwr build CLI command.`` (dot at the end)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:418 -msgid "" -"Note that only one token must be provided, otherwise, only the first " -"action (in the order listed above), will be performed." +#: ../../source/contributor-tutorial-contribute-on-github.rst:354 +msgid "``Add flwr build CLI command.`` (missing ``()``)" msgstr "" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:2 @@ -2430,6 +2310,7 @@ msgid "Get started as a contributor" msgstr "Devenez un·e contributeur·ice" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:5 +#: ../../source/how-to-run-flower-using-docker.rst:132 msgid "Prerequisites" msgstr "Prérequis" @@ -2589,10 +2470,55 @@ msgid "Run Linters and Tests" msgstr "Vérifier le format et tester le code" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:106 +msgid "Add a pre-commit hook" +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:108 +msgid "" +"Developers may integrate a pre-commit hook into their workflow utilizing " +"the `pre-commit `_ library. The pre-" +"commit hook is configured to execute two primary operations: " +"``./dev/format.sh`` and ``./dev/test.sh`` scripts." 
+msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:110 +msgid "There are multiple ways developers can use this:" +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:112 +msgid "Install the pre-commit hook to your local git directory by simply running:" +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:118 +msgid "" +"Each ``git commit`` will trigger the execution of formatting and " +"linting/test scripts." +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:119 +msgid "" +"If in a hurry, bypass the hook using ``--no-verify`` with the ``git " +"commit`` command. ::" +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:124 +msgid "" +"For developers who prefer not to install the hook permanently, it is " +"possible to execute a one-time check prior to committing changes by using" +" the following command:" +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:130 +msgid "" +"This executes the formatting and linting checks/tests on all the files " +"without modifying the default behavior of ``git commit``." +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:133 msgid "Run Github Actions (CI) locally" msgstr "Exécuter les GitHub Actions (CI) localement" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:108 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:135 #, fuzzy msgid "" "Developers could run the full set of Github Actions workflows under their" @@ -2606,7 +2532,7 @@ msgstr "" "fois installé, exécuter la commande suivante dans le dossier principale " "de Flower :" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:115 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:142 msgid "" "The Flower default workflow would run by setting up the required Docker " "machines underneath." @@ -2614,12 +2540,12 @@ msgstr "" "Le workflow par défaut de Flower sera exécuté en configurant les machines" " Docker requises en arrière plan." -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:120 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:147 #, fuzzy msgid "Build Release" msgstr "Inédit" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:122 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:149 msgid "" "Flower uses Poetry to build releases. The necessary command is wrapped in" " a simple script::" @@ -2627,7 +2553,7 @@ msgstr "" "Flower utilise Poetry pour construire les nouvelles versions. La commande" " nécessaire est comprise dans un script simple ::" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:127 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:154 msgid "" "The resulting :code:`.whl` and :code:`.tar.gz` releases will be stored in" " the :code:`/dist` subdirectory." @@ -2635,19 +2561,19 @@ msgstr "" "Les versions résultantes :code:`.whl` et :code:`.tar.gz` seront stockées " "dans le sous-répertoire:code:`/dist`." 
-#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:132 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:159 #, fuzzy msgid "Build Documentation" msgstr "Amélioration de la documentation" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:134 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:161 msgid "" "Flower's documentation uses `Sphinx `_. " "There's no convenience script to re-build the documentation yet, but it's" " pretty easy::" msgstr "" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:140 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:167 msgid "This will generate HTML documentation in ``doc/build/html``." msgstr "" @@ -2788,7 +2714,6 @@ msgstr "" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:94 #: ../../source/example-jax-from-centralized-to-federated.rst:277 -#: ../../source/example-mxnet-walk-through.rst:356 #: ../../source/example-pytorch-from-centralized-to-federated.rst:310 #: ../../source/tutorial-quickstart-jax.rst:283 msgid "Next Steps" @@ -3002,7 +2927,6 @@ msgstr "" "opération pour plusieurs tours." #: ../../source/example-jax-from-centralized-to-federated.rst:123 -#: ../../source/example-mxnet-walk-through.rst:204 #: ../../source/example-pytorch-from-centralized-to-federated.rst:181 #: ../../source/tutorial-quickstart-jax.rst:129 msgid "" @@ -3018,7 +2942,6 @@ msgstr "" "d'apprentissage fédéré." #: ../../source/example-jax-from-centralized-to-federated.rst:133 -#: ../../source/example-mxnet-walk-through.rst:214 #: ../../source/example-pytorch-from-centralized-to-federated.rst:191 #: ../../source/tutorial-quickstart-jax.rst:139 msgid "We can already start the *server*:" @@ -3065,13 +2988,11 @@ msgstr "" "pour tester le modèle :" #: ../../source/example-jax-from-centralized-to-federated.rst:161 -#: ../../source/example-mxnet-walk-through.rst:242 #: ../../source/tutorial-quickstart-jax.rst:167 msgid ":code:`set_parameters (optional)`" msgstr ":code:`set_parameters (optional)`" #: ../../source/example-jax-from-centralized-to-federated.rst:160 -#: ../../source/example-mxnet-walk-through.rst:241 #: ../../source/example-pytorch-from-centralized-to-federated.rst:219 #: ../../source/tutorial-quickstart-jax.rst:166 msgid "" @@ -3085,7 +3006,6 @@ msgid "transform parameters to NumPy :code:`ndarray`'s" msgstr "transforme les paramètres en NumPy :code:`ndarray`'s" #: ../../source/example-jax-from-centralized-to-federated.rst:162 -#: ../../source/example-mxnet-walk-through.rst:243 #: ../../source/example-pytorch-from-centralized-to-federated.rst:220 #: ../../source/tutorial-quickstart-jax.rst:168 msgid "" @@ -3096,17 +3016,14 @@ msgstr "" ":code:`ndarray`'s (pensez à la liste des couches du réseau neuronal)" #: ../../source/example-jax-from-centralized-to-federated.rst:163 -#: ../../source/example-mxnet-walk-through.rst:244 #: ../../source/example-pytorch-from-centralized-to-federated.rst:221 #: ../../source/tutorial-quickstart-jax.rst:169 -#: ../../source/tutorial-quickstart-mxnet.rst:171 #: ../../source/tutorial-quickstart-pytorch.rst:155 -#: ../../source/tutorial-quickstart-scikitlearn.rst:108 +#: ../../source/tutorial-quickstart-scikitlearn.rst:118 msgid ":code:`get_parameters`" msgstr ":code:`get_parameters`" #: ../../source/example-jax-from-centralized-to-federated.rst:164 -#: ../../source/example-mxnet-walk-through.rst:245 #: ../../source/example-pytorch-from-centralized-to-federated.rst:222 #: 
../../source/tutorial-quickstart-jax.rst:170 msgid "" @@ -3118,19 +3035,15 @@ msgstr "" ":code:`flwr.client.NumPyClient` attend)" #: ../../source/example-jax-from-centralized-to-federated.rst:167 -#: ../../source/example-mxnet-walk-through.rst:248 #: ../../source/example-pytorch-from-centralized-to-federated.rst:225 #: ../../source/tutorial-quickstart-jax.rst:173 -#: ../../source/tutorial-quickstart-mxnet.rst:177 #: ../../source/tutorial-quickstart-pytorch.rst:161 -#: ../../source/tutorial-quickstart-scikitlearn.rst:115 +#: ../../source/tutorial-quickstart-scikitlearn.rst:125 msgid ":code:`fit`" msgstr ":code:`fit`" #: ../../source/example-jax-from-centralized-to-federated.rst:166 #: ../../source/example-jax-from-centralized-to-federated.rst:170 -#: ../../source/example-mxnet-walk-through.rst:247 -#: ../../source/example-mxnet-walk-through.rst:251 #: ../../source/example-pytorch-from-centralized-to-federated.rst:224 #: ../../source/example-pytorch-from-centralized-to-federated.rst:228 #: ../../source/tutorial-quickstart-jax.rst:172 @@ -3143,7 +3056,6 @@ msgstr "" " serveur" #: ../../source/example-jax-from-centralized-to-federated.rst:167 -#: ../../source/example-mxnet-walk-through.rst:248 #: ../../source/example-pytorch-from-centralized-to-federated.rst:225 #: ../../source/tutorial-quickstart-jax.rst:173 msgid "train the model on the local training set" @@ -3157,17 +3069,14 @@ msgstr "" "serveur" #: ../../source/example-jax-from-centralized-to-federated.rst:172 -#: ../../source/example-mxnet-walk-through.rst:253 #: ../../source/example-pytorch-from-centralized-to-federated.rst:230 #: ../../source/tutorial-quickstart-jax.rst:178 -#: ../../source/tutorial-quickstart-mxnet.rst:180 #: ../../source/tutorial-quickstart-pytorch.rst:164 -#: ../../source/tutorial-quickstart-scikitlearn.rst:118 +#: ../../source/tutorial-quickstart-scikitlearn.rst:128 msgid ":code:`evaluate`" msgstr ":code:`évaluer`" #: ../../source/example-jax-from-centralized-to-federated.rst:171 -#: ../../source/example-mxnet-walk-through.rst:252 #: ../../source/example-pytorch-from-centralized-to-federated.rst:229 #: ../../source/tutorial-quickstart-jax.rst:177 msgid "evaluate the updated model on the local test set" @@ -3215,7 +3124,6 @@ msgid "Having defined the federation process, we can run it." msgstr "Après avoir défini le processus de fédération, nous pouvons l'exécuter." #: ../../source/example-jax-from-centralized-to-federated.rst:268 -#: ../../source/example-mxnet-walk-through.rst:347 #: ../../source/example-pytorch-from-centralized-to-federated.rst:301 #: ../../source/tutorial-quickstart-jax.rst:274 msgid "And that's it. You can now open two additional terminal windows and run" @@ -3260,417 +3168,112 @@ msgstr "" " modèle plus sophistiqué ou un ensemble de données différent ? Pourquoi " "ne pas ajouter d'autres clients ?" -#: ../../source/example-mxnet-walk-through.rst:2 -msgid "Example: MXNet - Run MXNet Federated" -msgstr "Exemple : MXNet - Exécuter MXNet Federated" +#: ../../source/example-pytorch-from-centralized-to-federated.rst:2 +msgid "Example: PyTorch - From Centralized To Federated" +msgstr "Exemple : PyTorch - De la centralisation à la fédération" -#: ../../source/example-mxnet-walk-through.rst:4 -#, fuzzy +#: ../../source/example-pytorch-from-centralized-to-federated.rst:4 msgid "" "This tutorial will show you how to use Flower to build a federated " -"version of an existing MXNet workload. We are using MXNet to train a " -"Sequential model on the MNIST dataset. 
We will structure the example " -"similar to our `PyTorch - From Centralized To Federated " -"`_ walkthrough. MXNet and PyTorch are very " -"similar and a very good comparison between MXNet and PyTorch is given " -"`here `_. First, we build a centralized " -"training approach based on the `Handwritten Digit Recognition " -"`_" -" tutorial. Then, we build upon the centralized training code to run the " +"version of an existing machine learning workload. We are using PyTorch to" +" train a Convolutional Neural Network on the CIFAR-10 dataset. First, we " +"introduce this machine learning task with a centralized training approach" +" based on the `Deep Learning with PyTorch " +"`_ " +"tutorial. Then, we build upon the centralized training code to run the " "training in a federated fashion." msgstr "" "Ce tutoriel te montrera comment utiliser Flower pour construire une " -"version fédérée d'une charge de travail MXNet existante. Nous utilisons " -"MXNet pour former un modèle séquentiel sur l'ensemble de données MNIST. " -"Nous structurerons l'exemple de la même manière que notre présentation " -"`PyTorch - De la centralisation à la fédération " -"`_. MXNet et PyTorch sont très similaires et une" -" très bonne comparaison entre MXNet et PyTorch est donnée ici " -"`_. Tout d'abord, nous " -"construisons une approche de formation centralisée basée sur le tutoriel " -"`Handandwritten Digit Recognition " -"`_." -" Ensuite, nous nous basons sur le code de formation centralisé pour " -"exécuter la formation de manière fédérée." +"version fédérée d'une charge de travail d'apprentissage automatique " +"existante. Nous utilisons PyTorch pour entraîner un réseau neuronal " +"convolutif sur l'ensemble de données CIFAR-10. Tout d'abord, nous " +"présentons cette tâche d'apprentissage automatique avec une approche " +"d'entraînement centralisée basée sur le tutoriel `Deep Learning with " +"PyTorch " +"`_. " +"Ensuite, nous nous appuyons sur le code d'entraînement centralisé pour " +"exécuter l'entraînement de manière fédérée." -#: ../../source/example-mxnet-walk-through.rst:10 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:12 msgid "" -"Before we start setting up our MXNet example, we install the " -":code:`mxnet` and :code:`flwr` packages:" +"We begin with a brief description of the centralized CNN training code. " +"If you want a more in-depth explanation of what's going on then have a " +"look at the official `PyTorch tutorial " +"`_." msgstr "" -"Avant de commencer à configurer notre exemple MXNet, nous installons les " -"paquets :code:`mxnet` et :code:`flwr` :" - -#: ../../source/example-mxnet-walk-through.rst:19 -msgid "MNIST Training with MXNet" -msgstr "Formation MNIST avec MXNet" +"Nous commençons par une brève description du code d'entraînement CNN " +"centralisé. Si tu veux une explication plus approfondie de ce qui se " +"passe, jette un coup d'œil au tutoriel officiel `PyTorch " +"`_." -#: ../../source/example-mxnet-walk-through.rst:21 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:15 msgid "" -"We begin with a brief description of the centralized training code based " -"on a :code:`Sequential` model. If you want a more in-depth explanation of" -" what's going on then have a look at the official `MXNet tutorial " -"`_." +"Let's create a new file called :code:`cifar.py` with all the components " +"required for a traditional (centralized) training on CIFAR-10. 
First, all" +" required packages (such as :code:`torch` and :code:`torchvision`) need " +"to be imported. You can see that we do not import any package for " +"federated learning. You can keep all these imports as they are even when " +"we add the federated learning components at a later point." msgstr "" -"Nous commençons par une brève description du code d'entraînement " -"centralisé basé sur un modèle :code:`Sequential`. Si tu veux une " -"explication plus approfondie de ce qui se passe, jette un coup d'œil au " -"tutoriel officiel `MXNet " -"`_." +"Créons un nouveau fichier appelé :code:`cifar.py` avec tous les " +"composants requis pour une formation traditionnelle (centralisée) sur le " +"CIFAR-10. Tout d'abord, tous les paquets requis (tels que :code:`torch` " +"et :code:`torchvision`) doivent être importés. Tu peux voir que nous " +"n'importons aucun paquet pour l'apprentissage fédéré. Tu peux conserver " +"toutes ces importations telles quelles même lorsque nous ajouterons les " +"composants d'apprentissage fédéré à un moment ultérieur." -#: ../../source/example-mxnet-walk-through.rst:24 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:32 msgid "" -"Let's create a new file called:code:`mxnet_mnist.py` with all the " -"components required for a traditional (centralized) MNIST training. " -"First, the MXNet package :code:`mxnet` needs to be imported. You can see " -"that we do not yet import the :code:`flwr` package for federated " -"learning. This will be done later." -msgstr "" -"Créons un nouveau fichier appelé :code:`mxnet_mnist.py` avec tous les " -"composants requis pour un apprentissage MNIST traditionnel (centralisé). " -"Tout d'abord, le package MXNet :code:`mxnet` doit être importé. Tu peux " -"voir que nous n'avons pas encore importé le package :code:`flwr` pour " -"l'apprentissage fédéré. Cela sera fait plus tard." - -#: ../../source/example-mxnet-walk-through.rst:42 -msgid "The :code:`load_data()` function loads the MNIST training and test sets." +"As already mentioned we will use the CIFAR-10 dataset for this machine " +"learning workload. The model architecture (a very simple Convolutional " +"Neural Network) is defined in :code:`class Net()`." msgstr "" -"La fonction :code:`load_data()` charge les ensembles d'entraînement et de" -" test MNIST." +"Comme nous l'avons déjà mentionné, nous utiliserons l'ensemble de données" +" CIFAR-10 pour cette charge de travail d'apprentissage automatique. " +"L'architecture du modèle (un réseau neuronal convolutif très simple) est " +"définie dans :code:`class Net()`." -#: ../../source/example-mxnet-walk-through.rst:57 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:56 msgid "" -"As already mentioned, we will use the MNIST dataset for this machine " -"learning workload. The model architecture (a very simple " -":code:`Sequential` model) is defined in :code:`model()`." +"The :code:`load_data()` function loads the CIFAR-10 training and test " +"sets. The :code:`transform` normalized the data after loading." msgstr "" -"Comme nous l'avons déjà mentionné, nous utiliserons l'ensemble de données" -" MNIST pour cette charge de travail d'apprentissage automatique. " -"L'architecture du modèle (un modèle :code:`Séquentiel` très simple) est " -"définie dans :code:`model()`." +"La fonction :code:`load_data()` charge les ensembles d'entraînement et de" +" test CIFAR-10. La fonction :code:`transform` normalise les données après" +" leur chargement." 
-#: ../../source/example-mxnet-walk-through.rst:70 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:74 msgid "" "We now need to define the training (function :code:`train()`) which loops" -" over the training set and measures the loss for each batch of training " -"examples." +" over the training set, measures the loss, backpropagates it, and then " +"takes one optimizer step for each batch of training examples." msgstr "" "Nous devons maintenant définir la formation (fonction :code:`train()`) " -"qui passe en boucle sur l'ensemble de la formation et mesure la perte " -"pour chaque lot d'exemples de formation." +"qui passe en boucle sur l'ensemble de la formation, mesure la perte, la " +"rétropropage, puis effectue une étape d'optimisation pour chaque lot " +"d'exemples de formation." -#: ../../source/example-mxnet-walk-through.rst:123 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:76 msgid "" -"The evaluation of the model is defined in function :code:`test()`. The " -"function loops over all test samples and measures the loss and accuracy " -"of the model based on the test dataset." +"The evaluation of the model is defined in the function :code:`test()`. " +"The function loops over all test samples and measures the loss of the " +"model based on the test dataset." msgstr "" -"L'évaluation du modèle est définie dans la fonction :code:`test()`. Cette" -" fonction passe en boucle sur tous les échantillons de test et mesure la " -"perte et la précision du modèle en fonction de l'ensemble des données de " -"test." +"L'évaluation du modèle est définie dans la fonction :code:`test()`. La " +"fonction boucle sur tous les échantillons de test et mesure la perte du " +"modèle en fonction de l'ensemble des données de test." -#: ../../source/example-mxnet-walk-through.rst:158 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:136 msgid "" "Having defined the data loading, model architecture, training, and " -"evaluation we can put everything together and train our model on MNIST. " -"Note that the GPU/CPU device for the training and testing is defined " -"within the :code:`ctx` (context)." +"evaluation we can put everything together and train our CNN on CIFAR-10." msgstr "" "Après avoir défini le chargement des données, l'architecture du modèle, " -"l'entraînement et l'évaluation, nous pouvons tout assembler et entraîner " -"notre modèle sur MNIST. Note que le dispositif GPU/CPU pour " -"l'entraînement et le test est défini dans le :code:`ctx` (contexte)." - -#: ../../source/example-mxnet-walk-through.rst:184 -msgid "You can now run your (centralized) MXNet machine learning workload:" -msgstr "" -"Tu peux maintenant exécuter ta charge de travail (centralisée) " -"d'apprentissage automatique MXNet :" +"la formation et l'évaluation, nous pouvons tout mettre ensemble et former" +" notre CNN sur CIFAR-10." -#: ../../source/example-mxnet-walk-through.rst:190 -msgid "" -"So far this should all look fairly familiar if you've used MXNet (or even" -" PyTorch) before. Let's take the next step and use what we've built to " -"create a simple federated learning system consisting of one server and " -"two clients." -msgstr "" -"Jusqu'à présent, tout cela devrait te sembler assez familier si tu as " -"déjà utilisé MXNet (ou même PyTorch). Passons à l'étape suivante et " -"utilisons ce que nous avons construit pour créer un simple système " -"d'apprentissage fédéré composé d'un serveur et de deux clients." 
- -#: ../../source/example-mxnet-walk-through.rst:194 -msgid "MXNet meets Flower" -msgstr "MXNet rencontre Flower" - -#: ../../source/example-mxnet-walk-through.rst:196 -msgid "" -"So far, it was not easily possible to use MXNet workloads for federated " -"learning because federated learning is not supported in MXNet. Since " -"Flower is fully agnostic towards the underlying machine learning " -"framework, it can be used to federated arbitrary machine learning " -"workloads. This section will show you how Flower can be used to federate " -"our centralized MXNet workload." -msgstr "" -"Jusqu'à présent, il n'était pas facile d'utiliser les charges de travail " -"MXNet pour l'apprentissage fédéré car l'apprentissage fédéré n'est pas " -"pris en charge dans MXNet. Comme Flower est totalement agnostique " -"vis-à-vis du cadre d'apprentissage automatique sous-jacent, il peut être " -"utilisé pour fédérer des charges de travail d'apprentissage automatique " -"arbitraires. Cette section te montrera comment Flower peut être utilisé " -"pour fédérer notre charge de travail MXNet centralisée." - -#: ../../source/example-mxnet-walk-through.rst:198 -msgid "" -"The concept to federate an existing workload is always the same and easy " -"to understand. We have to start a *server* and then use the code in " -":code:`mxnet_mnist.py` for the *clients* that are connected to the " -"*server*. The *server* sends model parameters to the clients. The " -"*clients* run the training and update the parameters. The updated " -"parameters are sent back to the *server* which averages all received " -"parameter updates. This describes one round of the federated learning " -"process and we repeat this for multiple rounds." -msgstr "" -"Le concept pour fédérer une charge de travail existante est toujours le " -"même et facile à comprendre. Nous devons démarrer un *serveur* et ensuite" -" utiliser le code dans :code:`mxnet_mnist.py` pour les *clients* qui sont" -" connectés au *serveur*. Le *serveur* envoie les paramètres du modèle aux" -" clients. Les *clients* exécutent la formation et mettent à jour les " -"paramètres. Les paramètres mis à jour sont renvoyés au *serveur* qui fait" -" la moyenne de toutes les mises à jour de paramètres reçues. Ceci décrit " -"un tour du processus d'apprentissage fédéré et nous répétons cette " -"opération pour plusieurs tours." - -#: ../../source/example-mxnet-walk-through.rst:220 -msgid "" -"Finally, we will define our *client* logic in :code:`client.py` and build" -" upon the previously defined MXNet training in :code:`mxnet_mnist.py`. " -"Our *client* needs to import :code:`flwr`, but also :code:`mxnet` to " -"update the parameters on our MXNet model:" -msgstr "" -"Enfin, nous allons définir la logique de notre *client* dans " -":code:`client.py` et nous appuyer sur l'entraînement MXNet défini " -"précédemment dans :code:`mxnet_mnist.py`. Notre *client* doit importer " -":code:`flwr`, mais aussi :code:`mxnet` pour mettre à jour les paramètres " -"de notre modèle MXNet :" - -#: ../../source/example-mxnet-walk-through.rst:235 -msgid "" -"Implementing a Flower *client* basically means implementing a subclass of" -" either :code:`flwr.client.Client` or :code:`flwr.client.NumPyClient`. " -"Our implementation will be based on :code:`flwr.client.NumPyClient` and " -"we'll call it :code:`MNISTClient`. 
:code:`NumPyClient` is slightly easier" -" to implement than :code:`Client` if you use a framework with good NumPy " -"interoperability (like PyTorch or MXNet) because it avoids some of the " -"boilerplate that would otherwise be necessary. :code:`MNISTClient` needs " -"to implement four methods, two methods for getting/setting model " -"parameters, one method for training the model, and one method for testing" -" the model:" -msgstr "" -"Implementing a Flower *client* basically means implementing a subclass of" -" either :code:`flwr.client.Client` or :code:`flwr.client.NumPyClient`. " -"Our implementation will be based on :code:`flwr.client.NumPyClient` and " -"we'll call it :code:`MNISTClient`. :code:`NumPyClient` is slightly easier" -" to implement than :code:`Client` if you use a framework with good NumPy " -"interoperability (like PyTorch or MXNet) because it avoids some of the " -"boilerplate that would otherwise be necessary. :code:`MNISTClient` needs " -"to implement four methods, two methods for getting/setting model " -"parameters, one method for training the model, and one method for testing" -" the model:" - -#: ../../source/example-mxnet-walk-through.rst:242 -msgid "transform MXNet :code:`NDArray`'s to NumPy :code:`ndarray`'s" -msgstr "transforme les :code:`NDArray` du MXNet en :code:`ndarray` de NumPy" - -#: ../../source/example-mxnet-walk-through.rst:249 -#: ../../source/example-pytorch-from-centralized-to-federated.rst:226 -msgid "get the updated local model weights and return them to the server" -msgstr "récupère les poids du modèle local mis à jour et les renvoie au serveur" - -#: ../../source/example-mxnet-walk-through.rst:253 -#: ../../source/example-pytorch-from-centralized-to-federated.rst:230 -msgid "return the local loss and accuracy to the server" -msgstr "renvoie la perte locale et la précision au serveur" - -#: ../../source/example-mxnet-walk-through.rst:255 -msgid "" -"The challenging part is to transform the MXNet parameters from " -":code:`NDArray` to :code:`NumPy Arrays` to make it readable for Flower." -msgstr "" -"La partie la plus difficile est de transformer les paramètres MXNet de " -":code:`NDArray` en :code:`NumPy Arrays` pour les rendre lisibles pour " -"Flower." - -#: ../../source/example-mxnet-walk-through.rst:257 -msgid "" -"The two :code:`NumPyClient` methods :code:`fit` and :code:`evaluate` make" -" use of the functions :code:`train()` and :code:`test()` previously " -"defined in :code:`mxnet_mnist.py`. So what we really do here is we tell " -"Flower through our :code:`NumPyClient` subclass which of our already " -"defined functions to call for training and evaluation. We included type " -"annotations to give you a better understanding of the data types that get" -" passed around." -msgstr "" -"Les deux méthodes :code:`NumPyClient` :code:`fit` et :code:`evaluate` " -"utilisent les fonctions :code:`train()` et :code:`test()` définies " -"précédemment dans :code:`mxnet_mnist.py`. Ce que nous faisons vraiment " -"ici, c'est que nous indiquons à Flower, par le biais de notre sous-classe" -" :code:`NumPyClient`, laquelle de nos fonctions déjà définies doit être " -"appelée pour l'entraînement et l'évaluation. Nous avons inclus des " -"annotations de type pour te donner une meilleure compréhension des types " -"de données qui sont transmis." 
- -#: ../../source/example-mxnet-walk-through.rst:319 -msgid "" -"Having defined data loading, model architecture, training, and evaluation" -" we can put everything together and train our :code:`Sequential` model on" -" MNIST." -msgstr "" -"Après avoir défini le chargement des données, l'architecture du modèle, " -"la formation et l'évaluation, nous pouvons tout rassembler et former " -"notre modèle :code:`Sequential` sur MNIST." - -#: ../../source/example-mxnet-walk-through.rst:353 -msgid "" -"in each window (make sure that the server is still running before you do " -"so) and see your MXNet project run federated learning across two clients." -" Congratulations!" -msgstr "" -"dans chaque fenêtre (assure-toi que le serveur est toujours en cours " -"d'exécution avant de le faire) et tu verras ton projet MXNet exécuter " -"l'apprentissage fédéré sur deux clients. Félicitations !" - -#: ../../source/example-mxnet-walk-through.rst:358 -#, fuzzy -msgid "" -"The full source code for this example: `MXNet: From Centralized To " -"Federated (Code) `_. Our example is of course " -"somewhat over-simplified because both clients load the exact same " -"dataset, which isn't realistic. You're now prepared to explore this topic" -" further. How about using a CNN or using a different dataset? How about " -"adding more clients?" -msgstr "" -"Le code source complet de cet exemple : `MXNet : From Centralized To " -"Federated (Code) `_. Notre exemple est bien sûr un " -"peu trop simplifié parce que les deux clients chargent exactement le même" -" ensemble de données, ce qui n'est pas réaliste. Tu es maintenant prêt à " -"explorer ce sujet plus en profondeur. Pourquoi ne pas utiliser un CNN ou " -"un ensemble de données différent ? Pourquoi ne pas ajouter d'autres " -"clients ?" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:2 -msgid "Example: PyTorch - From Centralized To Federated" -msgstr "Exemple : PyTorch - De la centralisation à la fédération" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:4 -msgid "" -"This tutorial will show you how to use Flower to build a federated " -"version of an existing machine learning workload. We are using PyTorch to" -" train a Convolutional Neural Network on the CIFAR-10 dataset. First, we " -"introduce this machine learning task with a centralized training approach" -" based on the `Deep Learning with PyTorch " -"`_ " -"tutorial. Then, we build upon the centralized training code to run the " -"training in a federated fashion." -msgstr "" -"Ce tutoriel te montrera comment utiliser Flower pour construire une " -"version fédérée d'une charge de travail d'apprentissage automatique " -"existante. Nous utilisons PyTorch pour entraîner un réseau neuronal " -"convolutif sur l'ensemble de données CIFAR-10. Tout d'abord, nous " -"présentons cette tâche d'apprentissage automatique avec une approche " -"d'entraînement centralisée basée sur le tutoriel `Deep Learning with " -"PyTorch " -"`_. " -"Ensuite, nous nous appuyons sur le code d'entraînement centralisé pour " -"exécuter l'entraînement de manière fédérée." - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:12 -msgid "" -"We begin with a brief description of the centralized CNN training code. " -"If you want a more in-depth explanation of what's going on then have a " -"look at the official `PyTorch tutorial " -"`_." -msgstr "" -"Nous commençons par une brève description du code d'entraînement CNN " -"centralisé. 
Si tu veux une explication plus approfondie de ce qui se " -"passe, jette un coup d'œil au tutoriel officiel `PyTorch " -"`_." - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:15 -msgid "" -"Let's create a new file called :code:`cifar.py` with all the components " -"required for a traditional (centralized) training on CIFAR-10. First, all" -" required packages (such as :code:`torch` and :code:`torchvision`) need " -"to be imported. You can see that we do not import any package for " -"federated learning. You can keep all these imports as they are even when " -"we add the federated learning components at a later point." -msgstr "" -"Créons un nouveau fichier appelé :code:`cifar.py` avec tous les " -"composants requis pour une formation traditionnelle (centralisée) sur le " -"CIFAR-10. Tout d'abord, tous les paquets requis (tels que :code:`torch` " -"et :code:`torchvision`) doivent être importés. Tu peux voir que nous " -"n'importons aucun paquet pour l'apprentissage fédéré. Tu peux conserver " -"toutes ces importations telles quelles même lorsque nous ajouterons les " -"composants d'apprentissage fédéré à un moment ultérieur." - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:32 -msgid "" -"As already mentioned we will use the CIFAR-10 dataset for this machine " -"learning workload. The model architecture (a very simple Convolutional " -"Neural Network) is defined in :code:`class Net()`." -msgstr "" -"Comme nous l'avons déjà mentionné, nous utiliserons l'ensemble de données" -" CIFAR-10 pour cette charge de travail d'apprentissage automatique. " -"L'architecture du modèle (un réseau neuronal convolutif très simple) est " -"définie dans :code:`class Net()`." - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:56 -msgid "" -"The :code:`load_data()` function loads the CIFAR-10 training and test " -"sets. The :code:`transform` normalized the data after loading." -msgstr "" -"La fonction :code:`load_data()` charge les ensembles d'entraînement et de" -" test CIFAR-10. La fonction :code:`transform` normalise les données après" -" leur chargement." - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:74 -msgid "" -"We now need to define the training (function :code:`train()`) which loops" -" over the training set, measures the loss, backpropagates it, and then " -"takes one optimizer step for each batch of training examples." -msgstr "" -"Nous devons maintenant définir la formation (fonction :code:`train()`) " -"qui passe en boucle sur l'ensemble de la formation, mesure la perte, la " -"rétropropage, puis effectue une étape d'optimisation pour chaque lot " -"d'exemples de formation." - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:76 -msgid "" -"The evaluation of the model is defined in the function :code:`test()`. " -"The function loops over all test samples and measures the loss of the " -"model based on the test dataset." -msgstr "" -"L'évaluation du modèle est définie dans la fonction :code:`test()`. La " -"fonction boucle sur tous les échantillons de test et mesure la perte du " -"modèle en fonction de l'ensemble des données de test." - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:136 -msgid "" -"Having defined the data loading, model architecture, training, and " -"evaluation we can put everything together and train our CNN on CIFAR-10." 
-msgstr "" -"Après avoir défini le chargement des données, l'architecture du modèle, " -"la formation et l'évaluation, nous pouvons tout mettre ensemble et former" -" notre CNN sur CIFAR-10." - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:163 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:163 msgid "" "So far, this should all look fairly familiar if you've used PyTorch " "before. Let's take the next step and use what we've built to create a " @@ -3770,6 +3373,14 @@ msgstr "" msgid ":code:`set_parameters`" msgstr ":code:`set_parameters`" +#: ../../source/example-pytorch-from-centralized-to-federated.rst:226 +msgid "get the updated local model weights and return them to the server" +msgstr "récupère les poids du modèle local mis à jour et les renvoie au serveur" + +#: ../../source/example-pytorch-from-centralized-to-federated.rst:230 +msgid "return the local loss and accuracy to the server" +msgstr "renvoie la perte locale et la précision au serveur" + #: ../../source/example-pytorch-from-centralized-to-federated.rst:232 msgid "" "The two :code:`NumPyClient` methods :code:`fit` and :code:`evaluate` make" @@ -4923,75 +4534,235 @@ msgstr "" "Le serveur peut alors utiliser une stratégie personnalisée pour agréger " "les mesures fournies dans ces dictionnaires :" -#: ../../source/how-to-configure-clients.rst:2 -#, fuzzy -msgid "Configure clients" -msgstr "Configurer les clients" +#: ../../source/how-to-authenticate-supernodes.rst:2 +msgid "Authenticate SuperNodes" +msgstr "" -#: ../../source/how-to-configure-clients.rst:4 +#: ../../source/how-to-authenticate-supernodes.rst:4 msgid "" -"Along with model parameters, Flower can send configuration values to " -"clients. Configuration values can be used for various purposes. They are," -" for example, a popular way to control client-side hyperparameters from " -"the server." +"Flower has built-in support for authenticated SuperNodes that you can use" +" to verify the identities of each SuperNode connecting to a SuperLink. " +"Flower node authentication works similar to how GitHub SSH authentication" +" works:" msgstr "" -"En plus des paramètres du modèle, Flower peut envoyer des valeurs de " -"configuration aux clients. Les valeurs de configuration peuvent être " -"utilisées à diverses fins. Elles constituent, par exemple, un moyen " -"populaire de contrôler les hyperparamètres côté client à partir du " -"serveur." -#: ../../source/how-to-configure-clients.rst:7 -msgid "Configuration values" -msgstr "Valeurs de configuration" +#: ../../source/how-to-authenticate-supernodes.rst:7 +msgid "SuperLink (server) stores a list of known (client) node public keys" +msgstr "" -#: ../../source/how-to-configure-clients.rst:9 +#: ../../source/how-to-authenticate-supernodes.rst:8 msgid "" -"Configuration values are represented as a dictionary with ``str`` keys " -"and values of type ``bool``, ``bytes``, ``double`` (64-bit precision " -"float), ``int``, or ``str`` (or equivalent types in different languages)." -" Here is an example of a configuration dictionary in Python:" +"Using ECDH, both SuperNode and SuperLink independently derive a shared " +"secret" msgstr "" -"Les valeurs de configuration sont représentées sous forme de dictionnaire" -" avec des clés `str`` et des valeurs de type `bool`, `bytes`, `double` " -"(float de précision 64 bits), `int`, ou `str` (ou des types équivalents " -"dans d'autres langages). 
Voici un exemple de dictionnaire de " -"configuration en Python :" -#: ../../source/how-to-configure-clients.rst:20 +#: ../../source/how-to-authenticate-supernodes.rst:9 msgid "" -"Flower serializes these configuration dictionaries (or *config dict* for " -"short) to their ProtoBuf representation, transports them to the client " -"using gRPC, and then deserializes them back to Python dictionaries." +"Shared secret is used to compute the HMAC value of the message sent from " +"SuperNode to SuperLink as a token" msgstr "" -"Flower sérialise ces dictionnaires de configuration (ou *config dict* en " -"abrégé) dans leur représentation ProtoBuf, les transporte vers le client " -"à l'aide de gRPC, puis les désérialise à nouveau en dictionnaires Python." -#: ../../source/how-to-configure-clients.rst:24 +#: ../../source/how-to-authenticate-supernodes.rst:10 +msgid "SuperLink verifies the token" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:12 +#, fuzzy msgid "" -"Currently, there is no support for directly sending collection types " -"(e.g., ``Set``, ``List``, ``Map``) as values in configuration " -"dictionaries. There are several workarounds to send collections as values" -" by converting them to one of the supported value types (and converting " -"them back on the client-side)." +"We recommend you to check out the complete `code example " +"`_ demonstrating federated learning with Flower in an " +"authenticated setting." msgstr "" -"Actuellement, il n'est pas possible d'envoyer directement des types de " -"collections (par exemple, ``Set``, ``List``, ``Map``) en tant que valeurs" -" dans les dictionnaires de configuration. Il existe plusieurs solutions " -"pour envoyer des collections en tant que valeurs en les convertissant en " -"l'un des types de valeurs pris en charge (et en les reconvertissant du " -"côté client)." +"Réfère-toi à l'exemple de code complet " +"`_ " +"pour en savoir plus." -#: ../../source/how-to-configure-clients.rst:26 +#: ../../source/how-to-authenticate-supernodes.rst:15 msgid "" -"One can, for example, convert a list of floating-point numbers to a JSON " -"string, then send the JSON string using the configuration dictionary, and" -" then convert the JSON string back to a list of floating-point numbers on" -" the client." +"This guide covers a preview feature that might change in future versions " +"of Flower." msgstr "" -"On peut, par exemple, convertir une liste de nombres à virgule flottante " -"en une chaîne JSON, puis envoyer la chaîne JSON à l'aide du dictionnaire " + +#: ../../source/how-to-authenticate-supernodes.rst:18 +msgid "" +"For increased security, node authentication can only be used when " +"encrypted connections (SSL/TLS) are enabled." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:21 +msgid "Enable node authentication in :code:`SuperLink`" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:23 +msgid "" +"To enable node authentication, first you need to configure SSL/TLS " +"connections to secure the SuperLink<>SuperNode communication. You can " +"find the complete guide `here `_. After configuring secure connections, you" +" can enable client authentication in a long-running Flower " +":code:`SuperLink`. 
Use the following terminal command to start a Flower " +":code:`SuperNode` that has both secure connections and node " +"authentication enabled:" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:36 +msgid "Let's break down the authentication flags:" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:38 +msgid "" +"The first flag :code:`--auth-list-public-keys` expects a path to a CSV " +"file storing all known node public keys. You need to store all known node" +" public keys that are allowed to participate in a federation in one CSV " +"file (:code:`.csv`)." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:40 +msgid "" +"A valid CSV file storing known node public keys should list the keys in " +"OpenSSH format, separated by commas and without any comments. For an " +"example, refer to our code sample, which contains a CSV file with two " +"known node public keys." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:42 +msgid "" +"The second and third flags :code:`--auth-superlink-private-key` and :code" +":`--auth-superlink-public-key` expect paths to the server's private and " +"public keys. For development purposes, you can generate a private and " +"public key pair using :code:`ssh-keygen -t ecdsa -b 384`." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:45 +msgid "" +"In Flower 1.9, there is no support for dynamically removing, editing, or " +"adding known node public keys to the SuperLink. To change the set of " +"known nodes, you need to shut the server down, edit the CSV file, and " +"start the server again. Support for dynamically changing the set of known" +" nodes is on the roadmap to be released in Flower 1.10 (ETA: June)." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:51 +msgid "Enable node authentication in :code:`SuperNode`" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:53 +msgid "" +"Similar to the long-running Flower server (:code:`SuperLink`), you can " +"easily enable node authentication in the long-running Flower client " +"(:code:`SuperNode`). Use the following terminal command to start an " +"authenticated :code:`SuperNode`:" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:64 +msgid "" +"The :code:`--auth-supernode-private-key` flag expects a path to the " +"node's private key file and the :code:`--auth-supernode-public-key` flag " +"expects a path to the node's public key file. For development purposes, " +"you can generate a private and public key pair using :code:`ssh-keygen -t" +" ecdsa -b 384`." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:68 +msgid "Security notice" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:70 +msgid "" +"The system's security relies on the credentials of the SuperLink and each" +" SuperNode. Therefore, it is imperative to safeguard and safely store the" +" credentials to avoid security risks such as Public Key Infrastructure " +"(PKI) impersonation attacks. The node authentication mechanism also " +"involves human interaction, so please ensure that all of the " +"communication is done in a secure manner, using trusted communication " +"methods." 
+msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:75 +#: ../../source/how-to-enable-ssl-connections.rst:65 +#: ../../source/how-to-use-built-in-mods.rst:85 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:287 +msgid "Conclusion" +msgstr "Conclusion" + +#: ../../source/how-to-authenticate-supernodes.rst:77 +msgid "" +"You should now have learned how to start a long-running Flower server " +"(:code:`SuperLink`) and client (:code:`SuperNode`) with node " +"authentication enabled. You should also know the significance of the " +"private key and store it safely to minimize security risks." +msgstr "" + +#: ../../source/how-to-configure-clients.rst:2 +#, fuzzy +msgid "Configure clients" +msgstr "Configurer les clients" + +#: ../../source/how-to-configure-clients.rst:4 +msgid "" +"Along with model parameters, Flower can send configuration values to " +"clients. Configuration values can be used for various purposes. They are," +" for example, a popular way to control client-side hyperparameters from " +"the server." +msgstr "" +"En plus des paramètres du modèle, Flower peut envoyer des valeurs de " +"configuration aux clients. Les valeurs de configuration peuvent être " +"utilisées à diverses fins. Elles constituent, par exemple, un moyen " +"populaire de contrôler les hyperparamètres côté client à partir du " +"serveur." + +#: ../../source/how-to-configure-clients.rst:7 +msgid "Configuration values" +msgstr "Valeurs de configuration" + +#: ../../source/how-to-configure-clients.rst:9 +msgid "" +"Configuration values are represented as a dictionary with ``str`` keys " +"and values of type ``bool``, ``bytes``, ``double`` (64-bit precision " +"float), ``int``, or ``str`` (or equivalent types in different languages)." +" Here is an example of a configuration dictionary in Python:" +msgstr "" +"Les valeurs de configuration sont représentées sous forme de dictionnaire" +" avec des clés `str`` et des valeurs de type `bool`, `bytes`, `double` " +"(float de précision 64 bits), `int`, ou `str` (ou des types équivalents " +"dans d'autres langages). Voici un exemple de dictionnaire de " +"configuration en Python :" + +#: ../../source/how-to-configure-clients.rst:20 +msgid "" +"Flower serializes these configuration dictionaries (or *config dict* for " +"short) to their ProtoBuf representation, transports them to the client " +"using gRPC, and then deserializes them back to Python dictionaries." +msgstr "" +"Flower sérialise ces dictionnaires de configuration (ou *config dict* en " +"abrégé) dans leur représentation ProtoBuf, les transporte vers le client " +"à l'aide de gRPC, puis les désérialise à nouveau en dictionnaires Python." + +#: ../../source/how-to-configure-clients.rst:24 +msgid "" +"Currently, there is no support for directly sending collection types " +"(e.g., ``Set``, ``List``, ``Map``) as values in configuration " +"dictionaries. There are several workarounds to send collections as values" +" by converting them to one of the supported value types (and converting " +"them back on the client-side)." +msgstr "" +"Actuellement, il n'est pas possible d'envoyer directement des types de " +"collections (par exemple, ``Set``, ``List``, ``Map``) en tant que valeurs" +" dans les dictionnaires de configuration. Il existe plusieurs solutions " +"pour envoyer des collections en tant que valeurs en les convertissant en " +"l'un des types de valeurs pris en charge (et en les reconvertissant du " +"côté client)." 
+
+#: ../../source/how-to-configure-clients.rst:26
+msgid ""
+"One can, for example, convert a list of floating-point numbers to a JSON "
+"string, then send the JSON string using the configuration dictionary, and"
+" then convert the JSON string back to a list of floating-point numbers on"
+" the client."
+msgstr ""
+"On peut, par exemple, convertir une liste de nombres à virgule flottante "
+"en une chaîne JSON, puis envoyer la chaîne JSON à l'aide du dictionnaire "
"de configuration, et enfin reconvertir la chaîne JSON en une liste de "
"nombres à virgule flottante sur le client."
@@ -5197,9 +4968,11 @@ msgid "Enable SSL connections"
msgstr "Collecte centralisée des données"
#: ../../source/how-to-enable-ssl-connections.rst:4
+#, fuzzy
msgid ""
-"This guide describes how to a SSL-enabled secure Flower server can be "
-"started and how a Flower client can establish a secure connections to it."
+"This guide describes how an SSL-enabled secure Flower server "
+"(:code:`SuperLink`) can be started and how a Flower client "
+"(:code:`SuperNode`) can establish a secure connection to it."
msgstr ""
"Ce guide décrit comment démarrer un serveur Flower sécurisé par SSL et "
"comment un client Flower peut établir une connexion sécurisée avec lui."
@@ -5216,11 +4989,12 @@ msgstr ""
"tensorflow>`_."
#: ../../source/how-to-enable-ssl-connections.rst:10
+#, fuzzy
msgid ""
-"The code example comes with a README.md file which will explain how to "
-"start it. Although it is already SSL-enabled, it might be less "
-"descriptive on how. Stick to this guide for a deeper introduction to the "
-"topic."
+"The code example comes with a :code:`README.md` file which explains how "
+"to start it. Although it is already SSL-enabled, it might be less "
+"descriptive on how it does so. Stick to this guide for a deeper "
+"introduction to the topic."
msgstr ""
"L'exemple de code est accompagné d'un fichier README.md qui t'expliquera "
"comment le démarrer. Bien qu'il soit déjà activé par SSL, il peut être "
@@ -5238,7 +5012,7 @@ msgid ""
"server and client. For the purpose of this guide we are going to generate"
" self-signed certificates. As this can become quite complex we are going "
"to ask you to run the script in :code:`examples/advanced-"
-"tensorflow/certificates/generate.sh`"
+"tensorflow/certificates/generate.sh` with the following command sequence:"
msgstr ""
"L'utilisation de connexions compatibles avec le protocole SSL nécessite "
"que des certificats soient transmis au serveur et au client. Pour les "
"besoins de ce guide, nous allons générer des certificats auto-signés. "
"Comme cela peut devenir assez complexe, nous allons te demander "
"d'exécuter le script dans :code:`examples/advanced-"
"tensorflow/certificates/generate.sh`"
-#: ../../source/how-to-enable-ssl-connections.rst:23
-msgid "with the following command sequence:"
-msgstr "avec la séquence de commandes suivante :"
-
-#: ../../source/how-to-enable-ssl-connections.rst:30
+#: ../../source/how-to-enable-ssl-connections.rst:29
#, fuzzy
msgid ""
"This will generate the certificates in :code:`examples/advanced-"
"tensorflow/.cache/certificates`."
msgstr ""
"Cela générera les certificats dans :code:`examples/advanced-"
"tensorflow/.cache/certificates`."
-#: ../../source/how-to-enable-ssl-connections.rst:32
+#: ../../source/how-to-enable-ssl-connections.rst:31
+#, fuzzy
msgid ""
-"The approach how the SSL certificates are generated in this example can "
-"serve as an inspiration and starting point but should not be taken as "
-"complete for production environments. 
Please refer to other sources "
-"regarding the issue of correctly generating certificates for production "
-"environments."
+"The approach for generating SSL certificates in the context of this "
+"example can serve as an inspiration and starting point, but it should not"
+" be used as a reference for production environments. Please refer to "
+"other sources regarding the issue of correctly generating certificates "
+"for production environments. For non-critical prototyping or research "
+"projects, it might be sufficient to use the self-signed certificates "
+"generated using the scripts mentioned in this guide."
msgstr ""
"L'approche de la génération des certificats SSL dans cet exemple peut "
"servir d'inspiration et de point de départ, mais ne doit pas être "
"considérée comme complète pour les environnements de production."
-#: ../../source/how-to-enable-ssl-connections.rst:36
-msgid ""
-"In case you are a researcher you might be just fine using the self-signed"
-" certificates generated using the scripts which are part of this guide."
-msgstr ""
-"Si tu es un chercheur, tu peux très bien utiliser les certificats auto-"
-"signés générés à l'aide des scripts qui font partie de ce guide."
+#: ../../source/how-to-enable-ssl-connections.rst:39
+#, fuzzy
+msgid "Server (SuperLink)"
+msgstr "flower-superlink"
#: ../../source/how-to-enable-ssl-connections.rst:41
-#: ../../source/ref-api/flwr.server.Server.rst:2
-msgid "Server"
-msgstr "Serveur"
-
-#: ../../source/how-to-enable-ssl-connections.rst:43
+#, fuzzy
msgid ""
-"We are now going to show how to write a sever which uses the previously "
-"generated scripts."
+"Use the following terminal command to start a server (SuperLink) that uses"
+" the previously generated certificates:"
msgstr ""
-"Nous allons maintenant montrer comment écrire un serveur qui utilise les "
-"scripts générés précédemment."
+"Nous allons maintenant montrer comment écrire un client qui utilise les "
+"scripts générés précédemment :"
-#: ../../source/how-to-enable-ssl-connections.rst:61
+#: ../../source/how-to-enable-ssl-connections.rst:47
msgid ""
"When providing certificates, the server expects a tuple of three "
-"certificates. :code:`Path` can be used to easily read the contents of "
-"those files into byte strings, which is the data type "
-":code:`start_server` expects."
+"certificate paths: CA certificate, server certificate and server private"
+" key."
msgstr ""
-"Lorsqu'il fournit des certificats, le serveur attend un tuple de trois "
-"certificats. :code:`Path` peut être utilisé pour lire facilement le "
-"contenu de ces fichiers en chaînes d'octets, ce qui est le type de "
-"données attendu par :code:`start_server`."
-#: ../../source/how-to-enable-ssl-connections.rst:65
-#: ../../source/how-to-upgrade-to-flower-1.0.rst:37
-#: ../../source/ref-api/flwr.client.Client.rst:2
-msgid "Client"
-msgstr "Client"
+#: ../../source/how-to-enable-ssl-connections.rst:51
+#, fuzzy
+msgid "Client (SuperNode)"
+msgstr "Codes d'état du client."
-#: ../../source/how-to-enable-ssl-connections.rst:67 +#: ../../source/how-to-enable-ssl-connections.rst:53 +#, fuzzy msgid "" -"We are now going to show how to write a client which uses the previously " -"generated scripts:" +"Use the following terminal command to start a client (SuperNode) that " +"uses the previously generated certificates:" msgstr "" "Nous allons maintenant montrer comment écrire un client qui utilise les " "scripts générés précédemment :" -#: ../../source/how-to-enable-ssl-connections.rst:84 +#: ../../source/how-to-enable-ssl-connections.rst:61 +#, fuzzy msgid "" -"When setting :code:`root_certificates`, the client expects the PEM-" -"encoded root certificates as a byte string. We are again using " -":code:`Path` to simplify reading those as byte strings." +"When setting :code:`root_certificates`, the client expects a file path to" +" PEM-encoded root certificates." msgstr "" "En définissant :code:`root_certificates`, le client s'attend à recevoir " "les certificats racine codés en PEM sous forme de chaîne d'octets. Nous " "utilisons à nouveau :code:`Path` pour simplifier la lecture de ces " "certificats sous forme de chaînes d'octets." -#: ../../source/how-to-enable-ssl-connections.rst:89 -#: ../../source/how-to-use-built-in-mods.rst:85 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:287 -msgid "Conclusion" -msgstr "Conclusion" - -#: ../../source/how-to-enable-ssl-connections.rst:91 +#: ../../source/how-to-enable-ssl-connections.rst:67 +#, fuzzy msgid "" "You should now have learned how to generate self-signed certificates " -"using the given script, start a SSL-enabled server, and have a client " +"using the given script, start an SSL-enabled server and have a client " "establish a secure connection to it." msgstr "" "Tu devrais maintenant avoir appris à générer des certificats auto-signés " "à l'aide du script donné, à démarrer un serveur compatible SSL et à " "demander à un client d'établir une connexion sécurisée avec lui." -#: ../../source/how-to-enable-ssl-connections.rst:96 +#: ../../source/how-to-enable-ssl-connections.rst:72 #, fuzzy msgid "Additional resources" msgstr "Ressources supplémentaires" -#: ../../source/how-to-enable-ssl-connections.rst:98 +#: ../../source/how-to-enable-ssl-connections.rst:74 msgid "" "These additional sources might be relevant if you would like to dive " "deeper into the topic of certificates:" @@ -5359,11 +5115,11 @@ msgstr "" "Ces sources supplémentaires peuvent être pertinentes si tu souhaites " "approfondir le sujet des certificats :" -#: ../../source/how-to-enable-ssl-connections.rst:100 +#: ../../source/how-to-enable-ssl-connections.rst:76 msgid "`Let's Encrypt `_" msgstr "`Let's Encrypt `_" -#: ../../source/how-to-enable-ssl-connections.rst:101 +#: ../../source/how-to-enable-ssl-connections.rst:77 msgid "`certbot `_" msgstr "`certbot `_" @@ -5760,6 +5516,7 @@ msgid "Install stable release" msgstr "Installe la version stable" #: ../../source/how-to-install-flower.rst:15 +#: ../../source/how-to-upgrade-to-flower-next.rst:46 msgid "Using pip" msgstr "" @@ -6204,7 +5961,7 @@ msgstr "" msgid "" "The simplest way to get started with Flower is by using the pre-made " "Docker images, which you can find on `Docker Hub " -"`_." +"`__." msgstr "" #: ../../source/how-to-run-flower-using-docker.rst:7 @@ -6227,245 +5984,562 @@ msgid "" "official Docker website." 
msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:25 +#: ../../source/how-to-run-flower-using-docker.rst:26 +msgid "" +"To ensure optimal performance and compatibility, the SuperLink, SuperNode" +" and ServerApp image must have the same version when running together. " +"This guarantees seamless integration and avoids potential conflicts or " +"issues that may arise from using different versions." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:31 #, fuzzy -msgid "Flower server" -msgstr "Serveur de Flower" +msgid "Flower SuperLink" +msgstr "flower-superlink" -#: ../../source/how-to-run-flower-using-docker.rst:28 +#: ../../source/how-to-run-flower-using-docker.rst:34 #, fuzzy msgid "Quickstart" msgstr "Démarrage rapide de JAX" -#: ../../source/how-to-run-flower-using-docker.rst:30 +#: ../../source/how-to-run-flower-using-docker.rst:36 msgid "If you're looking to try out Flower, you can use the following command:" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:37 +#: ../../source/how-to-run-flower-using-docker.rst:42 msgid "" -"The command will pull the Docker image with the tag " -"``1.7.0-py3.11-ubuntu22.04`` from Docker Hub. The tag contains the " -"information which Flower, Python and Ubuntu is used. In this case, it " -"uses Flower 1.7.0, Python 3.11 and Ubuntu 22.04. The ``--rm`` flag tells " -"Docker to remove the container after it exits." +"The command pulls the Docker image with the tag ``1.8.0`` from Docker " +"Hub. The tag specifies the Flower version. In this case, Flower 1.8.0. " +"The ``--rm`` flag tells Docker to remove the container after it exits." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:44 +#: ../../source/how-to-run-flower-using-docker.rst:48 msgid "" -"By default, the Flower server keeps state in-memory. When using the " +"By default, the Flower SuperLink keeps state in-memory. When using the " "Docker flag ``--rm``, the state is not persisted between container " "starts. We will show below how to save the state in a file on your host " "system." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:48 +#: ../../source/how-to-run-flower-using-docker.rst:52 msgid "" "The ``-p :`` flag tells Docker to map the ports " "``9091``/``9092`` of the host to ``9091``/``9092`` of the container, " "allowing you to access the Driver API on ``http://localhost:9091`` and " "the Fleet API on ``http://localhost:9092``. Lastly, any flag that comes " -"after the tag is passed to the Flower server. Here, we are passing the " -"flag ``--insecure``." +"after the tag is passed to the Flower SuperLink. Here, we are passing the" +" flag ``--insecure``." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:55 +#: ../../source/how-to-run-flower-using-docker.rst:59 +#: ../../source/how-to-run-flower-using-docker.rst:238 +#: ../../source/how-to-run-flower-using-docker.rst:354 msgid "" "The ``--insecure`` flag enables insecure communication (using HTTP, not " "HTTPS) and should only be used for testing purposes. We strongly " "recommend enabling `SSL `_ when " +"flower-using-docker.html#enabling-ssl-for-secure-connections>`__ when " "deploying to a production environment." 
msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:60 +#: ../../source/how-to-run-flower-using-docker.rst:64 msgid "" -"You can use ``--help`` to view all available flags that the server " +"You can use ``--help`` to view all available flags that the SuperLink " "supports:" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:67 +#: ../../source/how-to-run-flower-using-docker.rst:71 msgid "Mounting a volume to store the state on the host system" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:69 +#: ../../source/how-to-run-flower-using-docker.rst:73 msgid "" -"If you want to persist the state of the server on your host system, all " -"you need to do is specify a path where you want to save the file on your " -"host system and a name for the database file. In the example below, we " -"tell Docker via the flag ``-v`` to mount the user's home directory " -"(``~/`` on your host) into the ``/app/`` directory of the container. " -"Furthermore, we use the flag ``--database`` to specify the name of the " -"database file." +"If you want to persist the state of the SuperLink on your host system, " +"all you need to do is specify a path where you want to save the file on " +"your host system and a name for the database file. In the example below, " +"we tell Docker via the flag ``--volume`` to mount the user's home " +"directory (``~/`` on your host) into the ``/app/`` directory of the " +"container. Furthermore, we use the flag ``--database`` to specify the " +"name of the database file." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:82 +#: ../../source/how-to-run-flower-using-docker.rst:86 msgid "" -"As soon as the server starts, the file ``state.db`` is created in the " +"As soon as the SuperLink starts, the file ``state.db`` is created in the " "user's home directory on your host system. If the file already exists, " -"the server tries to restore the state from the file. To start the server " -"with an empty database, simply remove the ``state.db`` file." +"the SuperLink tries to restore the state from the file. To start the " +"SuperLink with an empty database, simply remove the ``state.db`` file." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:87 +#: ../../source/how-to-run-flower-using-docker.rst:91 +#: ../../source/how-to-run-flower-using-docker.rst:260 +#: ../../source/how-to-run-flower-using-docker.rst:375 #, fuzzy msgid "Enabling SSL for secure connections" msgstr "Collecte centralisée des données" -#: ../../source/how-to-run-flower-using-docker.rst:89 +#: ../../source/how-to-run-flower-using-docker.rst:93 msgid "" -"To enable SSL, you will need a CA certificate, a server certificate and a" -" server private key." +"To enable SSL, you will need a PEM-encoded root certificate, a PEM-" +"encoded private key and a PEM-encoded certificate chain." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:92 +#: ../../source/how-to-run-flower-using-docker.rst:97 msgid "" "For testing purposes, you can generate your own self-signed certificates." " The `Enable SSL connections `_ page contains a section that " -"will guide you through the process." +"enable-ssl-connections.html#certificates>`__ page contains a section that" +" will guide you through the process." 
msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:96 +#: ../../source/how-to-run-flower-using-docker.rst:101 msgid "" "Assuming all files we need are in the local ``certificates`` directory, " -"we can use the flag ``-v`` to mount the local directory into the " -"``/app/`` directory of the container. This allows the server to access " -"the files within the container. Finally, we pass the names of the " -"certificates to the server with the ``--certificates`` flag." +"we can use the flag ``--volume`` to mount the local directory into the " +"``/app/`` directory of the container. This allows the SuperLink to access" +" the files within the container. Finally, we pass the names of the " +"certificates to the SuperLink with the ``--certificates`` flag." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:108 -msgid "Using a different Flower or Python version" -msgstr "" +#: ../../source/how-to-run-flower-using-docker.rst:113 +#, fuzzy +msgid "Flower SuperNode" +msgstr "Serveur de Flower" -#: ../../source/how-to-run-flower-using-docker.rst:110 +#: ../../source/how-to-run-flower-using-docker.rst:115 msgid "" -"If you want to use a different version of Flower or Python, you can do so" -" by changing the tag. All versions we provide are available on `Docker " -"Hub `_." +"The SuperNode Docker image comes with a pre-installed version of Flower " +"and serves as a base for building your own SuperNode image." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:114 -msgid "Pinning a Docker image to a specific version" +#: ../../source/how-to-run-flower-using-docker.rst:120 +msgid "" +"The SuperNode Docker image currently works only with the 1.9.0-nightly " +"release. A stable version will be available when Flower 1.9.0 (stable) " +"gets released (ETA: May). A SuperNode nightly image must be paired with " +"the corresponding SuperLink and ServerApp nightly images released on the " +"same day. To ensure the versions are in sync, using the concrete tag, " +"e.g., ``1.9.0.dev20240501`` instead of ``nightly`` is recommended." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:116 +#: ../../source/how-to-run-flower-using-docker.rst:126 msgid "" -"It may happen that we update the images behind the tags. Such updates " -"usually include security updates of system dependencies that should not " -"change the functionality of Flower. However, if you want to ensure that " -"you always use the same image, you can specify the hash of the image " -"instead of the tag." +"We will use the ``quickstart-pytorch`` example, which you can find in the" +" Flower repository, to illustrate how you can dockerize your ClientApp." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:121 +#: ../../source/how-to-run-flower-using-docker.rst:134 msgid "" -"The following command returns the current image hash referenced by the " -"``server:1.7.0-py3.11-ubuntu22.04`` tag:" +"Before we can start, we need to meet a few prerequisites in our local " +"development environment. You can skip the first part if you want to run " +"your ClientApp instead of the ``quickstart-pytorch`` example." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:128 -msgid "Next, we can pin the hash when running a new server container:" +#: ../../source/how-to-run-flower-using-docker.rst:138 +#, fuzzy +msgid "Clone the Flower repository." 
+msgstr "**Fourche le dépôt de Flower**" + +#: ../../source/how-to-run-flower-using-docker.rst:152 +msgid "Creating a SuperNode Dockerfile" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:137 -#, fuzzy -msgid "Setting environment variables" -msgstr "Mise en place de l'environnement de codage" +#: ../../source/how-to-run-flower-using-docker.rst:154 +#: ../../source/how-to-run-flower-using-docker.rst:289 +msgid "Let's assume the following project layout:" +msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:139 +#: ../../source/how-to-run-flower-using-docker.rst:163 msgid "" -"To set a variable inside a Docker container, you can use the ``-e " -"=`` flag." +"First, we need to create a ``requirements.txt`` file in the directory " +"where the ``ClientApp`` code is located. In the file, we list all the " +"dependencies that the ClientApp requires." msgstr "" -#: ../../source/how-to-run-simulations.rst:2 -#, fuzzy -msgid "Run simulations" -msgstr "Simulation de moniteur" +#: ../../source/how-to-run-flower-using-docker.rst:175 +msgid "" +"Note that `flwr `__ is already installed " +"in the ``flwr/supernode`` base image, so you only need to include other " +"package dependencies in your ``requirements.txt``, such as ``torch``, " +"``tensorflow``, etc." +msgstr "" -#: ../../source/how-to-run-simulations.rst:8 +#: ../../source/how-to-run-flower-using-docker.rst:179 msgid "" -"Simulating Federated Learning workloads is useful for a multitude of use-" -"cases: you might want to run your workload on a large cohort of clients " -"but without having to source, configure and mange a large number of " -"physical devices; you might want to run your FL workloads as fast as " -"possible on the compute systems you have access to without having to go " -"through a complex setup process; you might want to validate your " -"algorithm on different scenarios at varying levels of data and system " -"heterogeneity, client availability, privacy budgets, etc. These are among" -" some of the use-cases where simulating FL workloads makes sense. Flower " -"can accommodate these scenarios by means of its `VirtualClientEngine " -"`_ or " -"VCE." +"Next, we create a Dockerfile. If you use the ``quickstart-pytorch`` " +"example, create a new file called ``Dockerfile.supernode`` in ``examples" +"/quickstart-pytorch``." msgstr "" -#: ../../source/how-to-run-simulations.rst:10 +#: ../../source/how-to-run-flower-using-docker.rst:182 msgid "" -"The :code:`VirtualClientEngine` schedules, launches and manages `virtual`" -" clients. These clients are identical to `non-virtual` clients (i.e. the " -"ones you launch via the command `flwr.client.start_client `_) in the sense that they can be configure by " -"creating a class inheriting, for example, from `flwr.client.NumPyClient " -"`_ and therefore behave in an " -"identical way. In addition to that, clients managed by the " -":code:`VirtualClientEngine` are:" +"The ``Dockerfile.supernode`` contains the instructions that assemble the " +"SuperNode image." msgstr "" -#: ../../source/how-to-run-simulations.rst:12 +#: ../../source/how-to-run-flower-using-docker.rst:196 msgid "" -"resource-aware: this means that each client gets assigned a portion of " -"the compute and memory on your system. You as a user can control this at " -"the beginning of the simulation and allows you to control the degree of " -"parallelism of your Flower FL simulation. The fewer the resources per " -"client, the more clients can run concurrently on the same hardware." 
+"In the first two lines, we instruct Docker to use the SuperNode image " +"tagged ``nightly`` as a base image and set our working directory to " +"``/app``. The following instructions will now be executed in the ``/app``" +" directory. Next, we install the ClientApp dependencies by copying the " +"``requirements.txt`` file into the image and run ``pip install``. In the " +"last two lines, we copy the ``client.py`` module into the image and set " +"the entry point to ``flower-client-app`` with the argument " +"``client:app``. The argument is the object reference of the ClientApp " +"(``:``) that will be run inside the ClientApp." msgstr "" -#: ../../source/how-to-run-simulations.rst:13 +#: ../../source/how-to-run-flower-using-docker.rst:205 +#, fuzzy +msgid "Building the SuperNode Docker image" +msgstr "Démarrer le serveur" + +#: ../../source/how-to-run-flower-using-docker.rst:207 msgid "" -"self-managed: this means that you as a user do not need to launch clients" -" manually, instead this gets delegated to :code:`VirtualClientEngine`'s " -"internals." +"Next, we build the SuperNode Docker image by running the following " +"command in the directory where Dockerfile and ClientApp code are located." msgstr "" -#: ../../source/how-to-run-simulations.rst:14 +#: ../../source/how-to-run-flower-using-docker.rst:214 msgid "" -"ephemeral: this means that a client is only materialized when it is " -"required in the FL process (e.g. to do `fit() `_). The object is destroyed afterwards," -" releasing the resources it was assigned and allowing in this way other " -"clients to participate." +"We gave the image the name ``flwr_supernode``, and the tag ``0.0.1``. " +"Remember that the here chosen values only serve as an example. You can " +"change them to your needs." msgstr "" -#: ../../source/how-to-run-simulations.rst:16 +#: ../../source/how-to-run-flower-using-docker.rst:219 +#, fuzzy +msgid "Running the SuperNode Docker image" +msgstr "Démarrer le serveur" + +#: ../../source/how-to-run-flower-using-docker.rst:221 +msgid "Now that we have built the SuperNode image, we can finally run it." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:229 +#: ../../source/how-to-run-flower-using-docker.rst:345 +msgid "Let's break down each part of this command:" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:231 +#: ../../source/how-to-run-flower-using-docker.rst:347 +msgid "``docker run``: This is the command to run a new Docker container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:232 +#: ../../source/how-to-run-flower-using-docker.rst:348 msgid "" -"The :code:`VirtualClientEngine` implements `virtual` clients using `Ray " -"`_, an open-source framework for scalable Python " -"workloads. In particular, Flower's :code:`VirtualClientEngine` makes use " -"of `Actors `_ to " -"spawn `virtual` clients and run their workload." +"``--rm``: This option specifies that the container should be " +"automatically removed when it stops." msgstr "" -#: ../../source/how-to-run-simulations.rst:20 -msgid "Launch your Flower simulation" +#: ../../source/how-to-run-flower-using-docker.rst:233 +msgid "``flwr_supernode:0.0.1``: The name the tag of the Docker image to use." msgstr "" -#: ../../source/how-to-run-simulations.rst:22 +#: ../../source/how-to-run-flower-using-docker.rst:234 +#: ../../source/how-to-run-flower-using-docker.rst:350 +msgid "``--insecure``: This option enables insecure communication." 
+msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst msgid "" -"Running Flower simulations still require you to define your client class," -" a strategy, and utility functions to download and load (and potentially " -"partition) your dataset. With that out of the way, launching your " -"simulation is done with `start_simulation `_ and a minimal example looks" -" as follows:" +"``--server 192.168.1.100:9092``: This option specifies the address of the" +" SuperLinks Fleet" msgstr "" -#: ../../source/how-to-run-simulations.rst:44 -#, fuzzy -msgid "VirtualClientEngine resources" -msgstr "Moteur de client virtuel" +#: ../../source/how-to-run-flower-using-docker.rst +msgid "API to connect to. Remember to update it with your SuperLink IP." +msgstr "" -#: ../../source/how-to-run-simulations.rst:45 +#: ../../source/how-to-run-flower-using-docker.rst:248 +msgid "" +"To test running Flower locally, you can create a `bridge network " +"`__, use the ``--network`` argument and pass the " +"name of the Docker network to run your SuperNodes." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:252 +msgid "" +"Any argument that comes after the tag is passed to the Flower SuperNode " +"binary. To see all available flags that the SuperNode supports, run:" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:262 +msgid "" +"To enable SSL, we will need to mount a PEM-encoded root certificate into " +"your SuperNode container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:264 +msgid "" +"Assuming the certificate already exists locally, we can use the flag " +"``--volume`` to mount the local certificate into the container's " +"``/app/`` directory. This allows the SuperNode to access the certificate " +"within the container. Use the ``--certificates`` flag when starting the " +"container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:275 +#, fuzzy +msgid "Flower ServerApp" +msgstr "Serveur de Flower" + +#: ../../source/how-to-run-flower-using-docker.rst:277 +msgid "" +"The procedure for building and running a ServerApp image is almost " +"identical to the SuperNode image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:279 +msgid "" +"Similar to the SuperNode image, the ServerApp Docker image comes with a " +"pre-installed version of Flower and serves as a base for building your " +"own ServerApp image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:282 +msgid "" +"We will use the same ``quickstart-pytorch`` example as we do in the " +"Flower SuperNode section. If you have not already done so, please follow " +"the `SuperNode Prerequisites`_ before proceeding." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:287 +msgid "Creating a ServerApp Dockerfile" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:298 +msgid "" +"First, we need to create a Dockerfile in the directory where the " +"``ServerApp`` code is located. If you use the ``quickstart-pytorch`` " +"example, create a new file called ``Dockerfile.serverapp`` in ``examples" +"/quickstart-pytorch``." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:302 +msgid "" +"The ``Dockerfile.serverapp`` contains the instructions that assemble the " +"ServerApp image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:313 +msgid "" +"In the first two lines, we instruct Docker to use the ServerApp image " +"tagged ``1.8.0`` as a base image and set our working directory to " +"``/app``. 
The following instructions will now be executed in the ``/app``" +" directory. In the last two lines, we copy the ``server.py`` module into " +"the image and set the entry point to ``flower-server-app`` with the " +"argument ``server:app``. The argument is the object reference of the " +"ServerApp (``:``) that will be run inside the " +"ServerApp container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:321 +#, fuzzy +msgid "Building the ServerApp Docker image" +msgstr "Démarrer le serveur" + +#: ../../source/how-to-run-flower-using-docker.rst:323 +msgid "" +"Next, we build the ServerApp Docker image by running the following " +"command in the directory where Dockerfile and ServerApp code are located." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:330 +msgid "" +"We gave the image the name ``flwr_serverapp``, and the tag ``0.0.1``. " +"Remember that the here chosen values only serve as an example. You can " +"change them to your needs." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:335 +#, fuzzy +msgid "Running the ServerApp Docker image" +msgstr "Démarrer le serveur" + +#: ../../source/how-to-run-flower-using-docker.rst:337 +msgid "Now that we have built the ServerApp image, we can finally run it." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:349 +msgid "``flwr_serverapp:0.0.1``: The name the tag of the Docker image to use." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst +msgid "" +"``--server 192.168.1.100:9091``: This option specifies the address of the" +" SuperLinks Driver" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:363 +msgid "" +"To test running Flower locally, you can create a `bridge network " +"`__, use the ``--network`` argument and pass the " +"name of the Docker network to run your ServerApps." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:367 +msgid "" +"Any argument that comes after the tag is passed to the Flower ServerApp " +"binary. To see all available flags that the ServerApp supports, run:" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:377 +msgid "" +"To enable SSL, we will need to mount a PEM-encoded root certificate into " +"your ServerApp container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:379 +msgid "" +"Assuming the certificate already exists locally, we can use the flag " +"``--volume`` to mount the local certificate into the container's " +"``/app/`` directory. This allows the ServerApp to access the certificate " +"within the container. Use the ``--certificates`` flag when starting the " +"container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:390 +#, fuzzy +msgid "Advanced Docker options" +msgstr "Options d'installation avancées" + +#: ../../source/how-to-run-flower-using-docker.rst:393 +msgid "Using a different Flower version" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:395 +msgid "" +"If you want to use a different version of Flower, for example Flower " +"nightly, you can do so by changing the tag. All available versions are on" +" `Docker Hub `__." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:400 +msgid "Pinning a Docker image to a specific version" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:402 +msgid "" +"It may happen that we update the images behind the tags. Such updates " +"usually include security updates of system dependencies that should not " +"change the functionality of Flower. 
However, if you want to ensure that " +"you always use the same image, you can specify the hash of the image " +"instead of the tag." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:407 +msgid "" +"The following command returns the current image hash referenced by the " +"``superlink:1.8.0`` tag:" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:414 +msgid "Next, we can pin the hash when running a new SuperLink container:" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:423 +#, fuzzy +msgid "Setting environment variables" +msgstr "Mise en place de l'environnement de codage" + +#: ../../source/how-to-run-flower-using-docker.rst:425 +msgid "" +"To set a variable inside a Docker container, you can use the ``-e " +"=`` flag." +msgstr "" + +#: ../../source/how-to-run-simulations.rst:2 +#, fuzzy +msgid "Run simulations" +msgstr "Simulation de moniteur" + +#: ../../source/how-to-run-simulations.rst:8 +msgid "" +"Simulating Federated Learning workloads is useful for a multitude of use-" +"cases: you might want to run your workload on a large cohort of clients " +"but without having to source, configure and mange a large number of " +"physical devices; you might want to run your FL workloads as fast as " +"possible on the compute systems you have access to without having to go " +"through a complex setup process; you might want to validate your " +"algorithm on different scenarios at varying levels of data and system " +"heterogeneity, client availability, privacy budgets, etc. These are among" +" some of the use-cases where simulating FL workloads makes sense. Flower " +"can accommodate these scenarios by means of its `VirtualClientEngine " +"`_ or " +"VCE." +msgstr "" + +#: ../../source/how-to-run-simulations.rst:10 +msgid "" +"The :code:`VirtualClientEngine` schedules, launches and manages `virtual`" +" clients. These clients are identical to `non-virtual` clients (i.e. the " +"ones you launch via the command `flwr.client.start_client `_) in the sense that they can be configure by " +"creating a class inheriting, for example, from `flwr.client.NumPyClient " +"`_ and therefore behave in an " +"identical way. In addition to that, clients managed by the " +":code:`VirtualClientEngine` are:" +msgstr "" + +#: ../../source/how-to-run-simulations.rst:12 +msgid "" +"resource-aware: this means that each client gets assigned a portion of " +"the compute and memory on your system. You as a user can control this at " +"the beginning of the simulation and allows you to control the degree of " +"parallelism of your Flower FL simulation. The fewer the resources per " +"client, the more clients can run concurrently on the same hardware." +msgstr "" + +#: ../../source/how-to-run-simulations.rst:13 +msgid "" +"self-managed: this means that you as a user do not need to launch clients" +" manually, instead this gets delegated to :code:`VirtualClientEngine`'s " +"internals." +msgstr "" + +#: ../../source/how-to-run-simulations.rst:14 +msgid "" +"ephemeral: this means that a client is only materialized when it is " +"required in the FL process (e.g. to do `fit() `_). The object is destroyed afterwards," +" releasing the resources it was assigned and allowing in this way other " +"clients to participate." +msgstr "" + +#: ../../source/how-to-run-simulations.rst:16 +msgid "" +"The :code:`VirtualClientEngine` implements `virtual` clients using `Ray " +"`_, an open-source framework for scalable Python " +"workloads. 
In particular, Flower's :code:`VirtualClientEngine` makes use " +"of `Actors `_ to " +"spawn `virtual` clients and run their workload." +msgstr "" + +#: ../../source/how-to-run-simulations.rst:20 +msgid "Launch your Flower simulation" +msgstr "" + +#: ../../source/how-to-run-simulations.rst:22 +msgid "" +"Running Flower simulations still require you to define your client class," +" a strategy, and utility functions to download and load (and potentially " +"partition) your dataset. With that out of the way, launching your " +"simulation is done with `start_simulation `_ and a minimal example looks" +" as follows:" +msgstr "" + +#: ../../source/how-to-run-simulations.rst:44 +#, fuzzy +msgid "VirtualClientEngine resources" +msgstr "Moteur de client virtuel" + +#: ../../source/how-to-run-simulations.rst:45 msgid "" "By default the VCE has access to all system resources (i.e. all CPUs, all" " GPUs, etc) since that is also the default behavior when starting Ray. " @@ -6891,6 +6965,7 @@ msgstr "" "0.x existants." #: ../../source/how-to-upgrade-to-flower-1.0.rst:8 +#: ../../source/how-to-upgrade-to-flower-next.rst:43 msgid "Install update" msgstr "Installer la mise à jour" @@ -6947,6 +7022,7 @@ msgstr "" "l'utilisation de ``start_simulation``)" #: ../../source/how-to-upgrade-to-flower-1.0.rst:24 +#: ../../source/how-to-upgrade-to-flower-next.rst:100 msgid "Required changes" msgstr "Changements nécessaires" @@ -6986,6 +7062,11 @@ msgstr "" "``start_client(server_address=\"127.0.0.1:8080\", " "client=FlowerClient())``" +#: ../../source/how-to-upgrade-to-flower-1.0.rst:37 +#: ../../source/ref-api/flwr.client.Client.rst:2 +msgid "Client" +msgstr "Client" + #: ../../source/how-to-upgrade-to-flower-1.0.rst:39 msgid "" "Subclasses of ``NumPyClient``: change ``def get_parameters(self):``` to " @@ -7206,6 +7287,7 @@ msgstr "" "round_timeout=600.0), ...)``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:89 +#: ../../source/how-to-upgrade-to-flower-next.rst:317 msgid "Further help" msgstr "Aide supplémentaire" @@ -7224,81 +7306,284 @@ msgstr "" "Flower `_ et utilise le canal " "``#questions``." -#: ../../source/how-to-use-built-in-mods.rst:2 -msgid "Use Built-in Mods" -msgstr "" +#: ../../source/how-to-upgrade-to-flower-next.rst:2 +#, fuzzy +msgid "Upgrade to Flower Next" +msgstr "Passe à Flower 1.0" -#: ../../source/how-to-use-built-in-mods.rst:4 +#: ../../source/how-to-upgrade-to-flower-next.rst:4 msgid "" -"**Note: This tutorial covers experimental features. The functionality and" -" interfaces may change in future versions.**" +"Welcome to the migration guide for updating Flower to Flower Next! " +"Whether you're a seasoned user or just getting started, this guide will " +"help you smoothly transition your existing setup to take advantage of the" +" latest features and improvements in Flower Next, starting from version " +"1.8." msgstr "" -#: ../../source/how-to-use-built-in-mods.rst:6 +#: ../../source/how-to-upgrade-to-flower-next.rst:9 msgid "" -"In this tutorial, we will learn how to utilize built-in mods to augment " -"the behavior of a ``ClientApp``. Mods (sometimes also called Modifiers) " -"allow us to perform operations before and after a task is processed in " -"the ``ClientApp``." +"This guide shows how to reuse pre-``1.8`` Flower code with minimum code " +"changes by using the *compatibility layer* in Flower Next. In another " +"guide, we will show how to run Flower Next end-to-end with pure Flower " +"Next APIs." 
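+# Illustrative sketch (not part of the extracted documentation strings): a
+# minimal example of the compatibility-layer pattern described in the entry
+# above, assuming Flower 1.8+ and an existing pre-1.8 project. ``FlowerClient``
+# and the ``client.py``/``server.py`` module names are placeholders.
+#
+#   # client.py
+#   from flwr.client import ClientApp, NumPyClient
+#
+#   class FlowerClient(NumPyClient):
+#       ...  # existing pre-1.8 NumPyClient code stays unchanged
+#
+#   def client_fn(cid: str):
+#       return FlowerClient().to_client()
+#
+#   # started with `flower-client-app client:app` instead of start_client()
+#   app = ClientApp(client_fn=client_fn)
+#
+#   # server.py
+#   from flwr.server import ServerApp, ServerConfig
+#   from flwr.server.strategy import FedAvg
+#
+#   # started with `flower-server-app server:app` instead of start_server()
+#   app = ServerApp(config=ServerConfig(num_rounds=3), strategy=FedAvg())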
msgstr "" -#: ../../source/how-to-use-built-in-mods.rst:9 -msgid "What are Mods?" +#: ../../source/how-to-upgrade-to-flower-next.rst:13 +msgid "Let's dive in!" msgstr "" -#: ../../source/how-to-use-built-in-mods.rst:11 +#: ../../source/how-to-upgrade-to-flower-next.rst:48 +#, fuzzy msgid "" -"A Mod is a callable that wraps around a ``ClientApp``. It can manipulate " -"or inspect the incoming ``Message`` and the resulting outgoing " -"``Message``. The signature for a ``Mod`` is as follows:" +"Here's how to update an existing installation of Flower to Flower Next " +"with ``pip``:" msgstr "" +"Voici comment mettre à jour une installation existante vers Flower 1.0 en" +" utilisant soit pip soit Poetry :" -#: ../../source/how-to-use-built-in-mods.rst:18 -msgid "A typical mod function might look something like this:" +#: ../../source/how-to-upgrade-to-flower-next.rst:54 +msgid "or if you need Flower Next with simulation:" msgstr "" -#: ../../source/how-to-use-built-in-mods.rst:31 -msgid "Using Mods" +#: ../../source/how-to-upgrade-to-flower-next.rst:61 +msgid "" +"Ensure you set the following version constraint in your " +"``requirements.txt``" msgstr "" -#: ../../source/how-to-use-built-in-mods.rst:33 -msgid "To use mods in your ``ClientApp``, you can follow these steps:" +#: ../../source/how-to-upgrade-to-flower-next.rst:71 +msgid "or ``pyproject.toml``:" msgstr "" -#: ../../source/how-to-use-built-in-mods.rst:36 -msgid "1. Import the required mods" -msgstr "" +#: ../../source/how-to-upgrade-to-flower-next.rst:82 +#, fuzzy +msgid "Using Poetry" +msgstr "Utiliser la poésie (recommandé)" -#: ../../source/how-to-use-built-in-mods.rst:38 -msgid "First, import the built-in mod you intend to use:" +#: ../../source/how-to-upgrade-to-flower-next.rst:84 +#, fuzzy +msgid "" +"Update the ``flwr`` dependency in ``pyproject.toml`` and then reinstall " +"(don't forget to delete ``poetry.lock`` via ``rm poetry.lock`` before " +"running ``poetry install``)." msgstr "" +"Poetry : mettez à jour la dépendance ``flwr`` dans ``pyproject.toml`` " +"puis réinstallez (n'oubliez pas de supprimer ``poetry.lock`` via ``rm " +"poetry.lock`` avant d'exécuter ``poetry install``)." -#: ../../source/how-to-use-built-in-mods.rst:46 -msgid "2. Define your client function" -msgstr "" +#: ../../source/how-to-upgrade-to-flower-next.rst:86 +#, fuzzy +msgid "" +"Ensure you set the following version constraint in your " +"``pyproject.toml``:" +msgstr "Augmente la version mineure de ``pyproject.toml`` d'une unité." -#: ../../source/how-to-use-built-in-mods.rst:48 +#: ../../source/how-to-upgrade-to-flower-next.rst:102 msgid "" -"Define your client function (``client_fn``) that will be wrapped by the " -"mod(s):" +"In Flower Next, the *infrastructure* and *application layers* have been " +"decoupled. Instead of starting a client in code via ``start_client()``, " +"you create a |clientapp_link|_ and start it via the command line. Instead" +" of starting a server in code via ``start_server()``, you create a " +"|serverapp_link|_ and start it via the command line. The long-running " +"components of server and client are called SuperLink and SuperNode. The " +"following non-breaking changes that require manual updates and allow you " +"to run your project both in the traditional way and in the Flower Next " +"way:" msgstr "" -#: ../../source/how-to-use-built-in-mods.rst:57 -msgid "3. 
Create the ``ClientApp`` with mods" -msgstr "" +#: ../../source/how-to-upgrade-to-flower-next.rst:109 +#, fuzzy +msgid "|clientapp_link|_" +msgstr "client" -#: ../../source/how-to-use-built-in-mods.rst:59 +#: ../../source/how-to-upgrade-to-flower-next.rst:110 msgid "" -"Create your ``ClientApp`` and pass the mods as a list to the ``mods`` " -"argument. The order in which you provide the mods matters:" +"Wrap your existing client with |clientapp_link|_ instead of launching it " +"via |startclient_link|_. Here's an example:" msgstr "" -#: ../../source/how-to-use-built-in-mods.rst:72 +#: ../../source/how-to-upgrade-to-flower-next.rst:132 #, fuzzy -msgid "Order of execution" -msgstr "Dépréciations" +msgid "|serverapp_link|_" +msgstr "serveur" -#: ../../source/how-to-use-built-in-mods.rst:74 +#: ../../source/how-to-upgrade-to-flower-next.rst:133 +msgid "" +"Wrap your existing strategy with |serverapp_link|_ instead of starting " +"the server via |startserver_link|_. Here's an example:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:154 +msgid "Deployment" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:155 +msgid "" +"Run the ``SuperLink`` using |flowernext_superlink_link|_ before running, " +"in sequence, |flowernext_clientapp_link|_ (2x) and " +"|flowernext_serverapp_link|_. There is no need to execute `client.py` and" +" `server.py` as Python scripts." +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:158 +msgid "" +"Here's an example to start the server without HTTPS (only for " +"prototyping):" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:174 +msgid "" +"Here's another example to start with HTTPS. Use the ``--certificates`` " +"command line argument to pass paths to (CA certificate, server " +"certificate, and server private key)." +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:201 +#, fuzzy +msgid "Simulation in CLI" +msgstr "Simulation de moniteur" + +#: ../../source/how-to-upgrade-to-flower-next.rst:202 +msgid "" +"Wrap your existing client and strategy with |clientapp_link|_ and " +"|serverapp_link|_, respectively. There is no need to use |startsim_link|_" +" anymore. Here's an example:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:232 +msgid "" +"Run |flower_simulation_link|_ in CLI and point to the ``server_app`` / " +"``client_app`` object in the code instead of executing the Python script." +" Here's an example (assuming the ``server_app`` and ``client_app`` " +"objects are in a ``sim.py`` module):" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:249 +msgid "" +"Set default resources for each |clientapp_link|_ using the ``--backend-" +"config`` command line argument instead of setting the " +"``client_resources`` argument in |startsim_link|_. Here's an example:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:275 +msgid "Simulation in a Notebook" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:276 +msgid "" +"Run |runsim_link|_ in your notebook instead of |startsim_link|_. Here's " +"an example:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:319 +#, fuzzy +msgid "" +"Some official `Flower code examples `_ " +"are already updated to Flower Next so they can serve as a reference for " +"using the Flower Next API. If there are further questions, `join the " +"Flower Slack `_ and use the channel " +"``#questions``. 
You can also `participate in Flower Discuss " +"`_ where you can find us answering questions," +" or share and learn from others about migrating to Flower Next." +msgstr "" +"La plupart des `exemples de code Flower officiels " +"`_ sont déjà mis à " +"jour vers Flower 1.0, ils peuvent servir de référence pour l'utilisation " +"de l'API Flower 1.0. Si vous avez d'autres questions, `joins le Slack " +"Flower `_ et utilise le canal " +"``#questions``." + +#: ../../source/how-to-upgrade-to-flower-next.rst:325 +#, fuzzy +msgid "Important" +msgstr "Changements importants :" + +#: ../../source/how-to-upgrade-to-flower-next.rst:328 +msgid "" +"As we continuously enhance Flower Next at a rapid pace, we'll be " +"periodically updating this guide. Please feel free to share any feedback " +"with us!" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:334 +msgid "Happy migrating! 🚀" +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:2 +msgid "Use Built-in Mods" +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:4 +msgid "" +"**Note: This tutorial covers experimental features. The functionality and" +" interfaces may change in future versions.**" +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:6 +msgid "" +"In this tutorial, we will learn how to utilize built-in mods to augment " +"the behavior of a ``ClientApp``. Mods (sometimes also called Modifiers) " +"allow us to perform operations before and after a task is processed in " +"the ``ClientApp``." +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:9 +msgid "What are Mods?" +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:11 +msgid "" +"A Mod is a callable that wraps around a ``ClientApp``. It can manipulate " +"or inspect the incoming ``Message`` and the resulting outgoing " +"``Message``. The signature for a ``Mod`` is as follows:" +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:18 +msgid "A typical mod function might look something like this:" +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:31 +msgid "Using Mods" +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:33 +msgid "To use mods in your ``ClientApp``, you can follow these steps:" +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:36 +msgid "1. Import the required mods" +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:38 +msgid "First, import the built-in mod you intend to use:" +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:46 +msgid "2. Define your client function" +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:48 +msgid "" +"Define your client function (``client_fn``) that will be wrapped by the " +"mod(s):" +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:57 +msgid "3. Create the ``ClientApp`` with mods" +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:59 +msgid "" +"Create your ``ClientApp`` and pass the mods as a list to the ``mods`` " +"argument. 
The order in which you provide the mods matters:" +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:72 +#, fuzzy +msgid "Order of execution" +msgstr "Dépréciations" + +#: ../../source/how-to-use-built-in-mods.rst:74 msgid "" "When the ``ClientApp`` runs, the mods are executed in the order they are " "provided in the list:" @@ -7643,15 +7928,15 @@ msgstr "Tutoriel" msgid "Quickstart tutorials" msgstr "Quickstart tutorials" -#: ../../source/index.rst:75 ../../source/index.rst:79 +#: ../../source/index.rst:74 ../../source/index.rst:78 msgid "How-to guides" msgstr "Guides" -#: ../../source/index.rst:98 +#: ../../source/index.rst:99 msgid "Legacy example guides" msgstr "" -#: ../../source/index.rst:109 ../../source/index.rst:113 +#: ../../source/index.rst:108 ../../source/index.rst:112 msgid "Explanations" msgstr "Explications" @@ -7659,26 +7944,26 @@ msgstr "Explications" msgid "API reference" msgstr "Référence pour l'API" -#: ../../source/index.rst:138 +#: ../../source/index.rst:137 msgid "Reference docs" msgstr "Référence pour la documentation" -#: ../../source/index.rst:154 +#: ../../source/index.rst:153 #, fuzzy msgid "Contributor tutorials" msgstr "Configuration du contributeur" -#: ../../source/index.rst:161 +#: ../../source/index.rst:160 #, fuzzy msgid "Contributor how-to guides" msgstr "Guide pour les contributeurs" -#: ../../source/index.rst:174 +#: ../../source/index.rst:173 #, fuzzy msgid "Contributor explanations" msgstr "Explications" -#: ../../source/index.rst:180 +#: ../../source/index.rst:179 #, fuzzy msgid "Contributor references" msgstr "Configuration du contributeur" @@ -7748,7 +8033,7 @@ msgstr "" "Une serie de tutoriels de Federated Learning, l'endroit parfait pour " "débuter." -#: ../../source/index.rst:62 +#: ../../source/index.rst:61 #, fuzzy msgid "" "QUICKSTART TUTORIALS: :doc:`PyTorch ` | " @@ -7756,10 +8041,10 @@ msgid "" " ` | :doc:`JAX ` | :doc:`Pandas ` | :doc:`fastai " "` | :doc:`PyTorch Lightning ` | :doc:`MXNet `" -" | :doc:`scikit-learn ` | :doc:`XGBoost " -"` | :doc:`Android ` | :doc:`iOS `" +"quickstart-pytorch-lightning>` | :doc:`scikit-learn ` | :doc:`XGBoost ` | " +":doc:`Android ` | :doc:`iOS `" msgstr "" "QUICKSTART TUTORIALS: :ref:`PyTorch ` | " ":ref:`TensorFlow ` | :ref:`🤗 Transformers " @@ -7770,16 +8055,16 @@ msgstr "" "` | :ref:`Android ` | :ref:`iOS " "`" -#: ../../source/index.rst:64 +#: ../../source/index.rst:63 msgid "We also made video tutorials for PyTorch:" msgstr "" -#: ../../source/index.rst:69 +#: ../../source/index.rst:68 #, fuzzy msgid "And TensorFlow:" msgstr "Exemples de TensorFlow" -#: ../../source/index.rst:77 +#: ../../source/index.rst:76 msgid "" "Problem-oriented how-to guides show step-by-step how to achieve a " "specific goal." @@ -7787,7 +8072,7 @@ msgstr "" "Guides orientés sur la résolutions étapes par étapes de problèmes ou " "objectifs specifiques." -#: ../../source/index.rst:111 +#: ../../source/index.rst:110 msgid "" "Understanding-oriented concept guides explain and discuss key topics and " "underlying ideas behind Flower and collaborative AI." @@ -7795,29 +8080,29 @@ msgstr "" "Guides orientés sur la compréhension et l'explication des sujets et idées" " de fonds sur lesquels sont construits Flower et l'IA collaborative." -#: ../../source/index.rst:121 +#: ../../source/index.rst:120 #, fuzzy msgid "References" msgstr "Référence" -#: ../../source/index.rst:123 +#: ../../source/index.rst:122 msgid "Information-oriented API reference and other reference material." 
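+# Illustrative sketch (not part of the extracted documentation strings): a
+# minimal mod and its registration, following the "Use Built-in Mods" entries
+# above and assuming Flower 1.8+. ``my_mod`` is a placeholder name and
+# ``client_fn`` is assumed to be defined as in the preceding entries.
+#
+#   from flwr.client import ClientApp
+#   from flwr.client.typing import ClientAppCallable
+#   from flwr.common import Context, Message
+#
+#   def my_mod(msg: Message, ctx: Context, call_next: ClientAppCallable) -> Message:
+#       # inspect or modify the incoming message here
+#       reply = call_next(msg, ctx)
+#       # inspect or modify the outgoing reply here
+#       return reply
+#
+#   # mods run in the order they are listed
+#   app = ClientApp(client_fn=client_fn, mods=[my_mod])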
msgstr "Référence de l'API orientée sur l'information pure." -#: ../../source/index.rst:132::1 +#: ../../source/index.rst:131::1 msgid ":py:obj:`flwr `\\" msgstr "" -#: ../../source/index.rst:132::1 flwr:1 of +#: ../../source/index.rst:131::1 flwr:1 of msgid "Flower main package." msgstr "" -#: ../../source/index.rst:149 +#: ../../source/index.rst:148 #, fuzzy msgid "Contributor docs" msgstr "Configuration du contributeur" -#: ../../source/index.rst:151 +#: ../../source/index.rst:150 #, fuzzy msgid "" "The Flower community welcomes contributions. The following docs are " @@ -7833,23 +8118,20 @@ msgid "Flower CLI reference" msgstr "Client de Flower" #: ../../source/ref-api-cli.rst:7 -msgid "flower-superlink" -msgstr "flower-superlink" +#, fuzzy +msgid "flower-simulation" +msgstr "Simulation de moniteur" #: ../../source/ref-api-cli.rst:17 -msgid "flower-driver-api" -msgstr "flower-driver-api" +msgid "flower-superlink" +msgstr "flower-superlink" #: ../../source/ref-api-cli.rst:27 -msgid "flower-fleet-api" -msgstr "flower-fleet-api" - -#: ../../source/ref-api-cli.rst:37 #, fuzzy msgid "flower-client-app" msgstr "Flower ClientApp." -#: ../../source/ref-api-cli.rst:47 +#: ../../source/ref-api-cli.rst:37 #, fuzzy msgid "flower-server-app" msgstr "flower-driver-api" @@ -7859,7 +8141,7 @@ msgstr "flower-driver-api" msgid "flwr" msgstr "Fleur" -#: ../../source/ref-api/flwr.rst:25 ../../source/ref-api/flwr.server.rst:52 +#: ../../source/ref-api/flwr.rst:25 ../../source/ref-api/flwr.server.rst:51 msgid "Modules" msgstr "" @@ -7885,7 +8167,7 @@ msgid ":py:obj:`flwr.server `\\" msgstr "" #: ../../source/ref-api/flwr.rst:35::1 -#: ../../source/ref-api/flwr.server.rst:41::1 flwr.server:1 +#: ../../source/ref-api/flwr.server.rst:40::1 flwr.server:1 #: flwr.server.server.Server:1 of #, fuzzy msgid "Flower server." @@ -7912,79 +8194,91 @@ msgstr "client" msgid "Functions" msgstr "Les quatre fonctions :" -#: ../../source/ref-api/flwr.client.rst:24::1 +#: ../../source/ref-api/flwr.client.rst:25::1 msgid ":py:obj:`run_client_app `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.client.rst:24::1 -#: flwr.client.app.run_client_app:1 of +#: ../../source/ref-api/flwr.client.rst:25::1 +#: flwr.client.supernode.app.run_client_app:1 of #, fuzzy msgid "Run Flower client app." msgstr "Client de Flower" -#: ../../source/ref-api/flwr.client.rst:24::1 +#: ../../source/ref-api/flwr.client.rst:25::1 +#, fuzzy +msgid ":py:obj:`run_supernode `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.client.rst:25::1 +#: flwr.client.supernode.app.run_supernode:1 of +#, fuzzy +msgid "Run Flower SuperNode." +msgstr "Serveur de Flower" + +#: ../../source/ref-api/flwr.client.rst:25::1 msgid "" ":py:obj:`start_client `\\ \\(\\*\\, " "server\\_address\\[\\, client\\_fn\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.client.rst:24::1 +#: ../../source/ref-api/flwr.client.rst:25::1 #: flwr.client.app.start_client:1 of msgid "Start a Flower client node which connects to a Flower server." msgstr "" -#: ../../source/ref-api/flwr.client.rst:24::1 +#: ../../source/ref-api/flwr.client.rst:25::1 msgid "" ":py:obj:`start_numpy_client `\\ \\(\\*\\," " server\\_address\\, client\\)" msgstr "" -#: ../../source/ref-api/flwr.client.rst:24::1 +#: ../../source/ref-api/flwr.client.rst:25::1 #: flwr.client.app.start_numpy_client:1 of msgid "Start a Flower NumPyClient which connects to a gRPC server." 
msgstr "" -#: ../../source/ref-api/flwr.client.rst:26 +#: ../../source/ref-api/flwr.client.rst:27 #: ../../source/ref-api/flwr.common.rst:32 -#: ../../source/ref-api/flwr.server.rst:29 +#: ../../source/ref-api/flwr.server.rst:28 #: ../../source/ref-api/flwr.server.strategy.rst:17 #: ../../source/ref-api/flwr.server.workflow.rst:17 msgid "Classes" msgstr "" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 msgid ":py:obj:`Client `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 #: flwr.client.client.Client:1 of msgid "Abstract base class for Flower clients." msgstr "" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 msgid "" ":py:obj:`ClientApp `\\ \\(\\[client\\_fn\\, " "mods\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 #: flwr.client.client_app.ClientApp:1 of #, fuzzy msgid "Flower ClientApp." msgstr "Flower ClientApp." -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 msgid ":py:obj:`NumPyClient `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 #: flwr.client.numpy_client.NumPyClient:1 of msgid "Abstract base class for Flower clients using NumPy." msgstr "" #: flwr.client.client.Client:1 flwr.client.numpy_client.NumPyClient:1 #: flwr.server.client_manager.ClientManager:1 -#: flwr.server.strategy.strategy.Strategy:1 of +#: flwr.server.driver.driver.Driver:1 flwr.server.strategy.strategy.Strategy:1 +#: of msgid "Bases: :py:class:`~abc.ABC`" msgstr "" @@ -7999,6 +8293,7 @@ msgstr "" #: ../../source/ref-api/flwr.common.Error.rst:15 #: ../../source/ref-api/flwr.common.EvaluateIns.rst:15 #: ../../source/ref-api/flwr.common.EvaluateRes.rst:15 +#: ../../source/ref-api/flwr.common.EventType.rst:15 #: ../../source/ref-api/flwr.common.FitIns.rst:15 #: ../../source/ref-api/flwr.common.FitRes.rst:15 #: ../../source/ref-api/flwr.common.GetParametersIns.rst:15 @@ -8138,7 +8433,7 @@ msgstr "" #: ../../source/ref-api/flwr.common.Error.rst:25 #: ../../source/ref-api/flwr.common.EvaluateIns.rst:25 #: ../../source/ref-api/flwr.common.EvaluateRes.rst:25 -#: ../../source/ref-api/flwr.common.EventType.rst:19 +#: ../../source/ref-api/flwr.common.EventType.rst:165 #: ../../source/ref-api/flwr.common.FitIns.rst:25 #: ../../source/ref-api/flwr.common.FitRes.rst:25 #: ../../source/ref-api/flwr.common.GetParametersIns.rst:25 @@ -8181,7 +8476,6 @@ msgstr "" #: flwr.server.client_manager.SimpleClientManager.register #: flwr.server.client_manager.SimpleClientManager.unregister #: flwr.server.client_manager.SimpleClientManager.wait_for -#: flwr.server.compat.app.start_driver flwr.server.driver.driver.Driver #: flwr.server.driver.driver.Driver.create_message #: flwr.server.driver.driver.Driver.pull_messages #: flwr.server.driver.driver.Driver.push_messages @@ -8234,7 +8528,6 @@ msgstr "" #: flwr.server.client_manager.SimpleClientManager.num_available #: flwr.server.client_manager.SimpleClientManager.register #: flwr.server.client_manager.SimpleClientManager.wait_for -#: flwr.server.compat.app.start_driver #: flwr.server.driver.driver.Driver.create_message #: flwr.server.driver.driver.Driver.pull_messages #: flwr.server.driver.driver.Driver.push_messages @@ -8269,7 +8562,6 @@ msgstr "" #: flwr.server.client_manager.SimpleClientManager.num_available #: 
flwr.server.client_manager.SimpleClientManager.register #: flwr.server.client_manager.SimpleClientManager.wait_for -#: flwr.server.compat.app.start_driver #: flwr.server.driver.driver.Driver.create_message #: flwr.server.driver.driver.Driver.pull_messages #: flwr.server.driver.driver.Driver.push_messages @@ -8336,9 +8628,9 @@ msgstr "client" #: flwr.common.typing.GetParametersRes:1 flwr.common.typing.GetPropertiesIns:1 #: flwr.common.typing.GetPropertiesRes:1 flwr.common.typing.Parameters:1 #: flwr.common.typing.ReconnectIns:1 flwr.common.typing.ServerMessage:1 -#: flwr.common.typing.Status:1 flwr.server.driver.driver.Driver:1 -#: flwr.server.history.History:1 flwr.server.server.Server:1 -#: flwr.server.server_app.ServerApp:1 flwr.server.server_config.ServerConfig:1 +#: flwr.common.typing.Status:1 flwr.server.history.History:1 +#: flwr.server.server.Server:1 flwr.server.server_app.ServerApp:1 +#: flwr.server.server_config.ServerConfig:1 #: flwr.server.workflow.default_workflows.DefaultWorkflow:1 #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:1 #: of @@ -8350,8 +8642,7 @@ msgstr "" #: flwr.client.client_app.ClientApp.evaluate:4 #: flwr.client.client_app.ClientApp.query:4 #: flwr.client.client_app.ClientApp.train:4 flwr.server.app.start_server:41 -#: flwr.server.compat.app.start_driver:32 flwr.server.server_app.ServerApp:4 -#: flwr.server.server_app.ServerApp.main:4 +#: flwr.server.server_app.ServerApp:4 flwr.server.server_app.ServerApp.main:4 #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:29 #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:22 #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:21 @@ -8578,6 +8869,11 @@ msgstr "" msgid "run\\_client\\_app" msgstr "" +#: ../../source/ref-api/flwr.client.run_supernode.rst:2 +#, fuzzy +msgid "run\\_supernode" +msgstr "flower-superlink" + #: ../../source/ref-api/flwr.client.start_client.rst:2 #, fuzzy msgid "start\\_client" @@ -8610,8 +8906,7 @@ msgid "" "increased limit and block larger messages." msgstr "" -#: flwr.client.app.start_client:19 flwr.client.app.start_numpy_client:22 -#: flwr.server.compat.app.start_driver:21 of +#: flwr.client.app.start_client:19 flwr.client.app.start_numpy_client:22 of msgid "" "The PEM-encoded root certificates as a byte string or a path string. 
If " "provided, a secure connection using the certificates will be established " @@ -9199,12 +9494,11 @@ msgstr "Configurer les clients" #: flwr.common.record.configsrecord.ConfigsRecord:1 of msgid "" "Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " -"[:py:class:`str`, :py:obj:`~typing.Union`\\ [:py:class:`int`, " -":py:class:`float`, :py:class:`str`, :py:class:`bytes`, :py:class:`bool`, " -":py:class:`~typing.List`\\ [:py:class:`int`], :py:class:`~typing.List`\\ " -"[:py:class:`float`], :py:class:`~typing.List`\\ [:py:class:`str`], " -":py:class:`~typing.List`\\ [:py:class:`bytes`], " -":py:class:`~typing.List`\\ [:py:class:`bool`]]]" +"[:py:class:`str`, :py:class:`int` | :py:class:`float` | :py:class:`str` |" +" :py:class:`bytes` | :py:class:`bool` | :py:class:`~typing.List`\\ " +"[:py:class:`int`] | :py:class:`~typing.List`\\ [:py:class:`float`] | " +":py:class:`~typing.List`\\ [:py:class:`str`] | :py:class:`~typing.List`\\" +" [:py:class:`bytes`] | :py:class:`~typing.List`\\ [:py:class:`bool`]]" msgstr "" #: flwr.common.record.typeddict.TypedDict.clear:1::1 of @@ -9368,5646 +9662,5675 @@ msgstr "" msgid "Bases: :py:class:`str`, :py:class:`~enum.Enum`" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`PING `\\" -msgstr "" - -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`START_CLIENT_ENTER `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid "" +":py:obj:`encode `\\ \\(\\[encoding\\, " +"errors\\]\\)" msgstr "" +"Flower 1.0 : ``start_server(..., " +"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " +"...)``" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`START_CLIENT_LEAVE `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.encode:1 of +msgid "Encode the string using the codec registered for encoding." msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`START_SERVER_ENTER `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`replace `\\ \\(old\\, new\\[\\, " +"count\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`START_SERVER_LEAVE `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.replace:1 of +msgid "Return a copy with all occurrences of substring old replaced by new." msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy msgid "" -":py:obj:`RUN_DRIVER_API_ENTER " -"`\\" -msgstr "" +":py:obj:`split `\\ \\(\\[sep\\, " +"maxsplit\\]\\)" +msgstr "serveur.stratégie.Stratégie" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.rsplit:1 flwr.common.EventType.split:1 of msgid "" -":py:obj:`RUN_DRIVER_API_LEAVE " -"`\\" +"Return a list of the substrings in the string, using sep as the separator" +" string." 
msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 msgid "" -":py:obj:`RUN_FLEET_API_ENTER " -"`\\" +":py:obj:`rsplit `\\ \\(\\[sep\\, " +"maxsplit\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid "" -":py:obj:`RUN_FLEET_API_LEAVE " -"`\\" -msgstr "" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`join `\\ \\(iterable\\, \\/\\)" +msgstr "serveur.stratégie.Stratégie" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid "" -":py:obj:`RUN_SUPERLINK_ENTER " -"`\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.join:1 of +msgid "Concatenate any number of strings." msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid "" -":py:obj:`RUN_SUPERLINK_LEAVE " -"`\\" -msgstr "" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`capitalize `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid "" -":py:obj:`START_SIMULATION_ENTER " -"`\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.capitalize:1 of +msgid "Return a capitalized version of the string." msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid "" -":py:obj:`START_SIMULATION_LEAVE " -"`\\" -msgstr "" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`casefold `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`DRIVER_CONNECT `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.casefold:1 of +msgid "Return a version of the string suitable for caseless comparisons." msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`DRIVER_DISCONNECT `\\" -msgstr "" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`title `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`START_DRIVER_ENTER `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.title:1 of +msgid "Return a version of the string where each word is titlecased." msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`START_DRIVER_LEAVE `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`center `\\ \\(width\\[\\, " +"fillchar\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid "" -":py:obj:`RUN_CLIENT_APP_ENTER " -"`\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.center:1 of +msgid "Return a centered string of length width." msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 msgid "" -":py:obj:`RUN_CLIENT_APP_LEAVE " -"`\\" +":py:obj:`count `\\ \\(sub\\[\\, start\\[\\, " +"end\\]\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 msgid "" -":py:obj:`RUN_SERVER_APP_ENTER " -"`\\" +"Return the number of non-overlapping occurrences of substring sub in " +"string S[start:end]." 
msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 msgid "" -":py:obj:`RUN_SERVER_APP_LEAVE " -"`\\" +":py:obj:`expandtabs `\\ " +"\\(\\[tabsize\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.FitIns.rst:2 -msgid "FitIns" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.expandtabs:1 of +msgid "Return a copy where all tab characters are expanded using spaces." msgstr "" -#: ../../source/ref-api/flwr.common.FitIns.rst:29::1 -msgid ":py:obj:`parameters `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`find `\\ \\(sub\\[\\, start\\[\\, " +"end\\]\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.FitIns.rst:29::1 -msgid ":py:obj:`config `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +"Return the lowest index in S where substring sub is found, such that sub " +"is contained within S[start:end]." msgstr "" -#: ../../source/ref-api/flwr.common.FitRes.rst:2 -msgid "FitRes" -msgstr "" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`partition `\\ \\(sep\\, \\/\\)" +msgstr "serveur.stratégie.Stratégie" -#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 -msgid ":py:obj:`status `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.partition:1 flwr.common.EventType.rpartition:1 of +msgid "Partition the string into three parts using the given separator." msgstr "" -#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 -msgid ":py:obj:`parameters `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`index `\\ \\(sub\\[\\, start\\[\\, " +"end\\]\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 -msgid ":py:obj:`num_examples `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`ljust `\\ \\(width\\[\\, " +"fillchar\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 -msgid ":py:obj:`metrics `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.ljust:1 of +msgid "Return a left-justified string of length width." msgstr "" -#: ../../source/ref-api/flwr.common.GetParametersIns.rst:2 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "GetParametersIns" -msgstr ":code:`get_parameters`" +msgid ":py:obj:`lower `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" -#: ../../source/ref-api/flwr.common.GetParametersIns.rst:28::1 -msgid ":py:obj:`config `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.lower:1 of +msgid "Return a copy of the string converted to lowercase." msgstr "" -#: ../../source/ref-api/flwr.common.GetParametersRes.rst:2 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "GetParametersRes" -msgstr ":code:`get_parameters`" +msgid ":py:obj:`lstrip `\\ \\(\\[chars\\]\\)" +msgstr "serveur.stratégie.Stratégie" -#: ../../source/ref-api/flwr.common.GetParametersRes.rst:29::1 -msgid ":py:obj:`status `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.lstrip:1 of +msgid "Return a copy of the string with leading whitespace removed." 
msgstr "" -#: ../../source/ref-api/flwr.common.GetParametersRes.rst:29::1 -msgid ":py:obj:`parameters `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`rfind `\\ \\(sub\\[\\, start\\[\\, " +"end\\]\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:2 -msgid "GetPropertiesIns" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +"Return the highest index in S where substring sub is found, such that sub" +" is contained within S[start:end]." msgstr "" -#: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:28::1 -msgid ":py:obj:`config `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`rindex `\\ \\(sub\\[\\, " +"start\\[\\, end\\]\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:2 -msgid "GetPropertiesRes" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`rjust `\\ \\(width\\[\\, " +"fillchar\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:29::1 -msgid ":py:obj:`status `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.rjust:1 of +msgid "Return a right-justified string of length width." msgstr "" -#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:29::1 -msgid ":py:obj:`properties `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`rstrip `\\ \\(\\[chars\\]\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.rstrip:1 of +msgid "Return a copy of the string with trailing whitespace removed." msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:2 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "Message" -msgstr "Côté serveur" +msgid ":py:obj:`rpartition `\\ \\(sep\\, \\/\\)" +msgstr "serveur.stratégie.Stratégie" -#: flwr.common.Message.content:1::1 flwr.common.Message.metadata:1 -#: flwr.common.message.Message:3 of -msgid "A dataclass including information about the message to be executed." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid "" +":py:obj:`splitlines `\\ " +"\\(\\[keepends\\]\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.splitlines:1 of +msgid "Return a list of the lines in the string, breaking at line boundaries." msgstr "" -#: flwr.common.message.Message:5 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`strip `\\ \\(\\[chars\\]\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.strip:1 of +msgid "Return a copy of the string with leading and trailing whitespace removed." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`swapcase `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.swapcase:1 of msgid "" -"Holds records either sent by another entity (e.g. sent by the server-side" -" logic to a client, or vice-versa) or that will be sent to it." +"Convert uppercase characters to lowercase and lowercase characters to " +"uppercase." 
msgstr "" -#: flwr.common.message.Message:8 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`translate `\\ \\(table\\, \\/\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.translate:1 of +msgid "Replace each character in the string using the given translation table." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`upper `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.upper:1 of +msgid "Return a copy of the string converted to uppercase." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 msgid "" -"A dataclass that captures information about an error that took place when" -" processing another message." +":py:obj:`startswith `\\ \\(prefix\\[\\," +" start\\[\\, end\\]\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "Return True if S starts with the specified prefix, False otherwise." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 msgid "" -":py:obj:`create_error_reply `\\ " -"\\(error\\, ttl\\)" +":py:obj:`endswith `\\ \\(suffix\\[\\, " +"start\\[\\, end\\]\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -#: flwr.common.message.Message.create_error_reply:1 of -msgid "Construct a reply message indicating an error happened." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "Return True if S ends with the specified suffix, False otherwise." msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 msgid "" -":py:obj:`create_reply `\\ \\(content\\," -" ttl\\)" +":py:obj:`removeprefix `\\ " +"\\(prefix\\, \\/\\)" msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -#: flwr.common.message.Message.create_reply:1 of -msgid "Create a reply to this message with specified content and TTL." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.removeprefix:1 of +msgid "Return a str with the given prefix string removed if present." msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -msgid ":py:obj:`has_content `\\ \\(\\)" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`removesuffix `\\ " +"\\(suffix\\, \\/\\)" msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -#: flwr.common.message.Message.has_content:1 of -msgid "Return True if message has content, else False." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.removesuffix:1 of +msgid "Return a str with the given suffix string removed if present." msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -msgid ":py:obj:`has_error `\\ \\(\\)" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`isascii `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isascii:1 of +msgid "Return True if all characters in the string are ASCII, False otherwise." msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -#: flwr.common.message.Message.has_error:1 of -msgid "Return True if message has an error, else False." 
+#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`islower `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.islower:1 of +msgid "Return True if the string is a lowercase string, False otherwise." msgstr "" -#: flwr.common.Message.content:1::1 of -msgid ":py:obj:`content `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`isupper `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isupper:1 of +msgid "Return True if the string is an uppercase string, False otherwise." msgstr "" -#: flwr.common.Message.content:1 flwr.common.Message.content:1::1 -#: of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "The content of this message." -msgstr "Évaluer la réponse d'un client." +msgid ":py:obj:`istitle `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" -#: flwr.common.Message.content:1::1 of -msgid ":py:obj:`error `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.istitle:1 of +msgid "Return True if the string is a title-cased string, False otherwise." msgstr "" -#: flwr.common.Message.content:1::1 flwr.common.Message.error:1 of -msgid "Error captured by this message." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`isspace `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isspace:1 of +msgid "Return True if the string is a whitespace string, False otherwise." msgstr "" -#: flwr.common.Message.content:1::1 of -msgid ":py:obj:`metadata `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`isdecimal `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isdecimal:1 of +msgid "Return True if the string is a decimal string, False otherwise." msgstr "" -#: flwr.common.message.Message.create_error_reply:3 of -msgid "The error that was encountered." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`isdigit `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isdigit:1 of +msgid "Return True if the string is a digit string, False otherwise." msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.ttl:1 flwr.common.message.Message.create_error_reply:5 -#: flwr.common.message.Message.create_reply:9 flwr.common.message.Metadata:16 -#: of -msgid "Time-to-live for this message." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`isnumeric `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isnumeric:1 of +msgid "Return True if the string is a numeric string, False otherwise." msgstr "" -#: flwr.common.message.Message.create_reply:3 of -msgid "" -"The method generates a new `Message` as a reply to this message. It " -"inherits 'run_id', 'src_node_id', 'dst_node_id', and 'message_type' from " -"this message and sets 'reply_to_message' to the ID of this message." 
+#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`isalpha `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isalpha:1 of +msgid "Return True if the string is an alphabetic string, False otherwise." msgstr "" -#: flwr.common.message.Message.create_reply:7 of -msgid "The content for the reply message." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`isalnum `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isalnum:1 of +msgid "Return True if the string is an alpha-numeric string, False otherwise." msgstr "" -#: flwr.common.message.Message.create_reply:12 of -msgid "A new `Message` instance representing the reply." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`isidentifier `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isidentifier:1 of +msgid "Return True if the string is a valid Python identifier, False otherwise." msgstr "" -#: ../../source/ref-api/flwr.common.MessageType.rst:2 -msgid "MessageType" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`isprintable `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isprintable:1 of +msgid "Return True if the string is printable, False otherwise." msgstr "" -#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 -msgid ":py:obj:`EVALUATE `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`zfill `\\ \\(width\\, \\/\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.zfill:1 of +msgid "" +"Pad a numeric string with zeros on the left, to fill a field of the given" +" width." msgstr "" -#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 -msgid ":py:obj:`QUERY `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`format `\\ \\(\\*args\\, " +"\\*\\*kwargs\\)" msgstr "" -#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 -msgid ":py:obj:`TRAIN `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "Return a formatted version of S, using substitutions from args and kwargs." msgstr "" -#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:2 -msgid "MessageTypeLegacy" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`format_map `\\ \\(mapping\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "Return a formatted version of S, using substitutions from mapping." msgstr "" -#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 -msgid ":py:obj:`GET_PARAMETERS `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`maketrans `\\" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.maketrans:1 of +msgid "Return a translation table usable for str.translate()." 
msgstr "" -#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 -msgid ":py:obj:`GET_PROPERTIES `\\" +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`PING `\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.run_id:1 flwr.common.message.Metadata:3 of -msgid "An identifier for the current run." +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`START_CLIENT_ENTER `\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.message_id:1 flwr.common.message.Metadata:5 of -msgid "An identifier for the current message." +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`START_CLIENT_LEAVE `\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.src_node_id:1 flwr.common.message.Metadata:7 of -msgid "An identifier for the node sending this message." +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`START_SERVER_ENTER `\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1 -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.message.Metadata:9 of -msgid "An identifier for the node receiving this message." +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`START_SERVER_LEAVE `\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.reply_to_message:1 flwr.common.message.Metadata:11 of -msgid "An identifier for the message this message replies to." +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_DRIVER_API_ENTER " +"`\\" msgstr "" -#: flwr.common.message.Metadata:13 of +#: flwr.common.EventType.capitalize:1::1 of msgid "" -"An identifier for grouping messages. In some settings, this is used as " -"the FL round." +":py:obj:`RUN_DRIVER_API_LEAVE " +"`\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.message_type:1 flwr.common.message.Metadata:18 of -msgid "A string that encodes the action to be executed on the receiving end." +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_FLEET_API_ENTER " +"`\\" msgstr "" -#: flwr.common.message.Metadata:21 of +#: flwr.common.EventType.capitalize:1::1 of msgid "" -"An identifier that can be used when loading a particular data partition " -"for a ClientApp. Making use of this identifier is more relevant when " -"conducting simulations." +":py:obj:`RUN_FLEET_API_LEAVE " +"`\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`dst_node_id `\\" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_SUPERLINK_ENTER " +"`\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`group_id `\\" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_SUPERLINK_LEAVE " +"`\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.group_id:1 of -msgid "An identifier for grouping messages." 
+#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`START_SIMULATION_ENTER " +"`\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`message_id `\\" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`START_SIMULATION_LEAVE " +"`\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`message_type `\\" +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`DRIVER_CONNECT `\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`partition_id `\\" +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`DRIVER_DISCONNECT `\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.partition_id:1 of -msgid "An identifier telling which data partition a ClientApp should use." +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`START_DRIVER_ENTER `\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`reply_to_message `\\" +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`START_DRIVER_LEAVE `\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`run_id `\\" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_CLIENT_APP_ENTER " +"`\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`src_node_id `\\" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_CLIENT_APP_LEAVE " +"`\\" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`ttl `\\" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_SERVER_APP_ENTER " +"`\\" msgstr "" -#: ../../source/ref-api/flwr.common.MetricsRecord.rst:2 -msgid "MetricsRecord" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_SERVER_APP_LEAVE " +"`\\" msgstr "" -#: flwr.common.record.metricsrecord.MetricsRecord:1 of +#: flwr.common.EventType.capitalize:1::1 of msgid "" -"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " -"[:py:class:`str`, :py:obj:`~typing.Union`\\ [:py:class:`int`, " -":py:class:`float`, :py:class:`~typing.List`\\ [:py:class:`int`], " -":py:class:`~typing.List`\\ [:py:class:`float`]]]" +":py:obj:`RUN_SUPERNODE_ENTER " +"`\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`clear `\\ \\(\\)" +#: flwr.common.EventType.capitalize:1::1 of +#, fuzzy +msgid "" +":py:obj:`RUN_SUPERNODE_LEAVE " +"`\\" +msgstr "serveur.stratégie.Stratégie" + +#: flwr.common.EventType.capitalize:3 of +msgid "" +"More specifically, make the first character have upper case and the rest " +"lower case." msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`count_bytes `\\ \\(\\)" +#: flwr.common.EventType.center:3 flwr.common.EventType.ljust:3 +#: flwr.common.EventType.rjust:3 of +msgid "Padding is done using the specified fill character (default is a space)." msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" +#: flwr.common.EventType.count:1 of +msgid "" +"Return the number of non-overlapping occurrences of substring sub in " +"string S[start:end]. Optional arguments start and end are interpreted as" +" in slice notation." 
msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`items `\\ \\(\\)" +#: flwr.common.EventType.encode:3 of +msgid "encoding" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`keys `\\ \\(\\)" +#: flwr.common.EventType.encode:4 of +msgid "The encoding in which to encode the string." msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" +#: flwr.common.EventType.encode:9 of +msgid "errors" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +#: flwr.common.EventType.encode:6 of msgid "" -":py:obj:`update `\\ \\(\\[E\\, " -"\\]\\*\\*F\\)" +"The error handling scheme to use for encoding errors. The default is " +"'strict' meaning that encoding errors raise a UnicodeEncodeError. Other " +"possible values are 'ignore', 'replace' and 'xmlcharrefreplace' as well " +"as any other name registered with codecs.register_error that can handle " +"UnicodeEncodeErrors." msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`values `\\ \\(\\)" +#: flwr.common.EventType.endswith:1 of +msgid "" +"Return True if S ends with the specified suffix, False otherwise. With " +"optional start, test S beginning at that position. With optional end, " +"stop comparing S at that position. suffix can also be a tuple of strings " +"to try." msgstr "" -#: ../../source/ref-api/flwr.common.NDArray.rst:2 -msgid "NDArray" +#: flwr.common.EventType.expandtabs:3 of +msgid "If tabsize is not given, a tab size of 8 characters is assumed." msgstr "" -#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 -msgid ":py:obj:`tensors `\\" +#: flwr.common.EventType.find:1 flwr.common.EventType.index:1 of +msgid "" +"Return the lowest index in S where substring sub is found, such that sub " +"is contained within S[start:end]. Optional arguments start and end are " +"interpreted as in slice notation." msgstr "" -#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 -msgid ":py:obj:`tensor_type `\\" +#: flwr.common.EventType.find:5 flwr.common.EventType.rfind:5 of +msgid "Return -1 on failure." msgstr "" -#: ../../source/ref-api/flwr.common.ParametersRecord.rst:2 -#, fuzzy -msgid "ParametersRecord" -msgstr "Paramètres du modèle." - -#: flwr.common.record.parametersrecord.ParametersRecord:1 of +#: flwr.common.EventType.format:1 of msgid "" -"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " -"[:py:class:`str`, :py:class:`~flwr.common.record.parametersrecord.Array`]" +"Return a formatted version of S, using substitutions from args and " +"kwargs. The substitutions are identified by braces ('{' and '}')." msgstr "" -#: flwr.common.record.parametersrecord.ParametersRecord:3 of +#: flwr.common.EventType.format_map:1 of msgid "" -"A dataclass storing named Arrays in order. This means that it holds " -"entries as an OrderedDict[str, Array]. ParametersRecord objects can be " -"viewed as an equivalent to PyTorch's state_dict, but holding serialised " -"tensors instead." -msgstr "" - -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`clear `\\ \\(\\)" +"Return a formatted version of S, using substitutions from mapping. The " +"substitutions are identified by braces ('{' and '}')." msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`count_bytes `\\ \\(\\)" +#: flwr.common.EventType.index:5 flwr.common.EventType.rindex:5 of +msgid "Raises ValueError when the substring is not found." 
msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" +#: flwr.common.EventType.isalnum:3 of +msgid "" +"A string is alpha-numeric if all characters in the string are alpha-" +"numeric and there is at least one character in the string." msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`items `\\ \\(\\)" +#: flwr.common.EventType.isalpha:3 of +msgid "" +"A string is alphabetic if all characters in the string are alphabetic and" +" there is at least one character in the string." msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`keys `\\ \\(\\)" +#: flwr.common.EventType.isascii:3 of +msgid "" +"ASCII characters have code points in the range U+0000-U+007F. Empty " +"string is ASCII too." msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" +#: flwr.common.EventType.isdecimal:3 of +msgid "" +"A string is a decimal string if all characters in the string are decimal " +"and there is at least one character in the string." msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +#: flwr.common.EventType.isdigit:3 of msgid "" -":py:obj:`update `\\ \\(\\[E\\, " -"\\]\\*\\*F\\)" +"A string is a digit string if all characters in the string are digits and" +" there is at least one character in the string." msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`values `\\ \\(\\)" +#: flwr.common.EventType.isidentifier:3 of +msgid "" +"Call keyword.iskeyword(s) to test whether string s is a reserved " +"identifier, such as \"def\" or \"class\"." msgstr "" -#: flwr.common.record.parametersrecord.ParametersRecord.count_bytes:3 of +#: flwr.common.EventType.islower:3 of msgid "" -"Note that a small amount of Bytes might also be included in this counting" -" that correspond to metadata of the serialized object (e.g. of NumPy " -"array) needed for deseralization." +"A string is lowercase if all cased characters in the string are lowercase" +" and there is at least one cased character in the string." msgstr "" -#: ../../source/ref-api/flwr.common.ReconnectIns.rst:2 -#, fuzzy -msgid "ReconnectIns" -msgstr "Collecte centralisée des données" - -#: ../../source/ref-api/flwr.common.ReconnectIns.rst:28::1 -msgid ":py:obj:`seconds `\\" +#: flwr.common.EventType.isnumeric:3 of +msgid "" +"A string is numeric if all characters in the string are numeric and there" +" is at least one character in the string." msgstr "" -#: ../../source/ref-api/flwr.common.RecordSet.rst:2 -msgid "RecordSet" +#: flwr.common.EventType.isprintable:3 of +msgid "" +"A string is printable if all of its characters are considered printable " +"in repr() or if it is empty." msgstr "" -#: flwr.common.RecordSet.configs_records:1::1 of -msgid ":py:obj:`configs_records `\\" +#: flwr.common.EventType.isspace:3 of +msgid "" +"A string is whitespace if all characters in the string are whitespace and" +" there is at least one character in the string." msgstr "" -#: flwr.common.RecordSet.configs_records:1 -#: flwr.common.RecordSet.configs_records:1::1 of -msgid "Dictionary holding ConfigsRecord instances." +#: flwr.common.EventType.istitle:3 of +msgid "" +"In a title-cased string, upper- and title-case characters may only follow" +" uncased characters and lowercase characters only cased ones." 
msgstr "" -#: flwr.common.RecordSet.configs_records:1::1 of -msgid ":py:obj:`metrics_records `\\" +#: flwr.common.EventType.isupper:3 of +msgid "" +"A string is uppercase if all cased characters in the string are uppercase" +" and there is at least one cased character in the string." msgstr "" -#: flwr.common.RecordSet.configs_records:1::1 -#: flwr.common.RecordSet.metrics_records:1 of -msgid "Dictionary holding MetricsRecord instances." +#: flwr.common.EventType.join:3 of +msgid "" +"The string whose method is called is inserted in between each given " +"string. The result is returned as a new string." msgstr "" -#: flwr.common.RecordSet.configs_records:1::1 of -msgid ":py:obj:`parameters_records `\\" +#: flwr.common.EventType.join:6 of +msgid "Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs'" msgstr "" -#: flwr.common.RecordSet.configs_records:1::1 -#: flwr.common.RecordSet.parameters_records:1 of -msgid "Dictionary holding ParametersRecord instances." +#: flwr.common.EventType.lstrip:3 flwr.common.EventType.rstrip:3 +#: flwr.common.EventType.strip:3 of +msgid "If chars is given and not None, remove characters in chars instead." msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:2 -#, fuzzy -msgid "ServerMessage" -msgstr "Côté serveur" - -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 -msgid ":py:obj:`evaluate_ins `\\" +#: flwr.common.EventType.maketrans:3 of +msgid "" +"If there is only one argument, it must be a dictionary mapping Unicode " +"ordinals (integers) or characters to Unicode ordinals, strings or None. " +"Character keys will be then converted to ordinals. If there are two " +"arguments, they must be strings of equal length, and in the resulting " +"dictionary, each character in x will be mapped to the character at the " +"same position in y. If there is a third argument, it must be a string, " +"whose characters will be mapped to None in the result." msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 -msgid ":py:obj:`fit_ins `\\" +#: flwr.common.EventType.partition:3 of +msgid "" +"This will search for the separator in the string. If the separator is " +"found, returns a 3-tuple containing the part before the separator, the " +"separator itself, and the part after it." msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +#: flwr.common.EventType.partition:7 of msgid "" -":py:obj:`get_parameters_ins " -"`\\" +"If the separator is not found, returns a 3-tuple containing the original " +"string and two empty strings." msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +#: flwr.common.EventType.removeprefix:3 of msgid "" -":py:obj:`get_properties_ins " -"`\\" +"If the string starts with the prefix string, return string[len(prefix):]." +" Otherwise, return a copy of the original string." msgstr "" -#: ../../source/ref-api/flwr.common.Status.rst:2 -#, fuzzy -msgid "Status" -msgstr "Statut du client." - -#: ../../source/ref-api/flwr.common.Status.rst:29::1 -msgid ":py:obj:`code `\\" +#: flwr.common.EventType.removesuffix:3 of +msgid "" +"If the string ends with the suffix string and that suffix is not empty, " +"return string[:-len(suffix)]. Otherwise, return a copy of the original " +"string." 
msgstr "" -#: ../../source/ref-api/flwr.common.Status.rst:29::1 -msgid ":py:obj:`message `\\" +#: flwr.common.EventType.replace:5 of +msgid "count" msgstr "" -#: ../../source/ref-api/flwr.common.array_from_numpy.rst:2 -msgid "array\\_from\\_numpy" +#: flwr.common.EventType.replace:4 of +msgid "" +"Maximum number of occurrences to replace. -1 (the default value) means " +"replace all occurrences." msgstr "" -#: ../../source/ref-api/flwr.common.bytes_to_ndarray.rst:2 -msgid "bytes\\_to\\_ndarray" +#: flwr.common.EventType.replace:7 of +msgid "" +"If the optional argument count is given, only the first count occurrences" +" are replaced." msgstr "" -#: ../../source/ref-api/flwr.common.configure.rst:2 -#, fuzzy -msgid "configure" -msgstr "Configurer les clients" - -#: ../../source/ref-api/flwr.common.event.rst:2 -msgid "event" +#: flwr.common.EventType.rfind:1 flwr.common.EventType.rindex:1 of +msgid "" +"Return the highest index in S where substring sub is found, such that sub" +" is contained within S[start:end]. Optional arguments start and end are " +"interpreted as in slice notation." msgstr "" -#: ../../source/ref-api/flwr.common.log.rst:2 -msgid "log" +#: flwr.common.EventType.rpartition:3 of +msgid "" +"This will search for the separator in the string, starting at the end. If" +" the separator is found, returns a 3-tuple containing the part before the" +" separator, the separator itself, and the part after it." msgstr "" -#: logging.Logger.log:3 of +#: flwr.common.EventType.rpartition:7 of msgid "" -"To pass exception information, use the keyword argument exc_info with a " -"true value, e.g." +"If the separator is not found, returns a 3-tuple containing two empty " +"strings and the original string." msgstr "" -"Pour transmettre des informations sur les exceptions, utilise l'argument " -"mot-clé exc_info avec une valeur vraie, par ex." - -#: logging.Logger.log:6 of -#, python-format -msgid "logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)" -msgstr "logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)" -#: ../../source/ref-api/flwr.common.ndarray_to_bytes.rst:2 -msgid "ndarray\\_to\\_bytes" +#: flwr.common.EventType.rsplit:7 flwr.common.EventType.split:7 of +msgid "sep" msgstr "" -#: ../../source/ref-api/flwr.common.ndarrays_to_parameters.rst:2 -msgid "ndarrays\\_to\\_parameters" +#: flwr.common.EventType.rsplit:4 flwr.common.EventType.split:4 of +msgid "The separator used to split the string." msgstr "" -#: ../../source/ref-api/flwr.common.now.rst:2 -msgid "now" +#: flwr.common.EventType.rsplit:6 flwr.common.EventType.split:6 of +msgid "" +"When set to None (the default value), will split on any whitespace " +"character (including \\\\n \\\\r \\\\t \\\\f and spaces) and will discard" +" empty strings from the result." msgstr "" -#: ../../source/ref-api/flwr.common.parameters_to_ndarrays.rst:2 -msgid "parameters\\_to\\_ndarrays" +#: flwr.common.EventType.rsplit:11 flwr.common.EventType.split:11 of +msgid "maxsplit" msgstr "" -#: ../../source/ref-api/flwr.server.rst:2 -msgid "server" -msgstr "serveur" - -#: ../../source/ref-api/flwr.server.rst:27::1 -msgid ":py:obj:`run_driver_api `\\ \\(\\)" +#: flwr.common.EventType.rsplit:10 flwr.common.EventType.split:10 of +msgid "" +"Maximum number of splits (starting from the left). -1 (the default value)" +" means no limit." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.app.run_driver_api:1 of -#, fuzzy -msgid "Run Flower server (Driver API)." 
-msgstr "flower-driver-api" - -#: ../../source/ref-api/flwr.server.rst:27::1 -msgid ":py:obj:`run_fleet_api `\\ \\(\\)" +#: flwr.common.EventType.rsplit:13 of +msgid "Splitting starts at the end of the string and works to the front." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.app.run_fleet_api:1 of -#, fuzzy -msgid "Run Flower server (Fleet API)." -msgstr "flower-fleet-api" - -#: ../../source/ref-api/flwr.server.rst:27::1 -msgid ":py:obj:`run_server_app `\\ \\(\\)" +#: flwr.common.EventType.split:13 of +msgid "" +"Note, str.split() is mainly useful for data that has been intentionally " +"delimited. With natural text that includes punctuation, consider using " +"the regular expression module." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.run_serverapp.run_server_app:1 of -#, fuzzy -msgid "Run Flower server app." -msgstr "Serveur de Flower" - -#: ../../source/ref-api/flwr.server.rst:27::1 -msgid ":py:obj:`run_superlink `\\ \\(\\)" +#: flwr.common.EventType.splitlines:3 of +msgid "" +"Line breaks are not included in the resulting list unless keepends is " +"given and true." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.app.run_superlink:1 of -msgid "Run Flower server (Driver API and Fleet API)." +#: flwr.common.EventType.startswith:1 of +msgid "" +"Return True if S starts with the specified prefix, False otherwise. With " +"optional start, test S beginning at that position. With optional end, " +"stop comparing S at that position. prefix can also be a tuple of strings " +"to try." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 +#: flwr.common.EventType.title:3 of msgid "" -":py:obj:`start_driver `\\ \\(\\*\\[\\, " -"server\\_address\\, server\\, ...\\]\\)" +"More specifically, words start with uppercased characters and all " +"remaining cased characters have lower case." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.compat.app.start_driver:1 of +#: flwr.common.EventType.translate:5 of #, fuzzy -msgid "Start a Flower Driver API server." -msgstr "Tout d'abord, démarre un serveur Flower :" +msgid "table" +msgstr "Database" -#: ../../source/ref-api/flwr.server.rst:27::1 +#: flwr.common.EventType.translate:4 of msgid "" -":py:obj:`start_server `\\ \\(\\*\\[\\, " -"server\\_address\\, server\\, ...\\]\\)" +"Translation table, which must be a mapping of Unicode ordinals to Unicode" +" ordinals, strings, or None." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.app.start_server:1 of -msgid "Start a Flower server using the gRPC transport layer." +#: flwr.common.EventType.translate:7 of +msgid "" +"The table must implement lookup/indexing via __getitem__, for instance a " +"dictionary or list. If this operation raises LookupError, the character " +"is left untouched. Characters mapped to None are deleted." msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -msgid ":py:obj:`ClientManager `\\ \\(\\)" +#: flwr.common.EventType.zfill:3 of +msgid "The string is never truncated." msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.client_manager.ClientManager:1 of -msgid "Abstract base class for managing Flower clients." 
+#: ../../source/ref-api/flwr.common.FitIns.rst:2 +msgid "FitIns" msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -#, fuzzy -msgid "" -":py:obj:`Driver `\\ " -"\\(\\[driver\\_service\\_address\\, ...\\]\\)" +#: ../../source/ref-api/flwr.common.FitIns.rst:29::1 +msgid ":py:obj:`parameters `\\" msgstr "" -"Flower 1.0 : ``start_server(..., " -"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " -"...)``" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.driver.driver.Driver:1 of -msgid "`Driver` class provides an interface to the Driver API." +#: ../../source/ref-api/flwr.common.FitIns.rst:29::1 +msgid ":py:obj:`config `\\" msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -msgid ":py:obj:`History `\\ \\(\\)" +#: ../../source/ref-api/flwr.common.FitRes.rst:2 +msgid "FitRes" msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.history.History:1 of -msgid "History class for training and/or evaluation metrics collection." +#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 +msgid ":py:obj:`status `\\" msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -msgid "" -":py:obj:`LegacyContext `\\ \\(state\\[\\, " -"config\\, strategy\\, ...\\]\\)" +#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 +msgid ":py:obj:`parameters `\\" msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.compat.legacy_context.LegacyContext:1 of -msgid "Legacy Context." +#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 +msgid ":py:obj:`num_examples `\\" msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -msgid "" -":py:obj:`Server `\\ \\(\\*\\, client\\_manager\\[\\, " -"strategy\\]\\)" +#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 +msgid ":py:obj:`metrics `\\" msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -#, fuzzy -msgid "" -":py:obj:`ServerApp `\\ \\(\\[server\\, config\\, " -"strategy\\, ...\\]\\)" -msgstr "serveur.stratégie.Stratégie" - -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.server_app.ServerApp:1 of +#: ../../source/ref-api/flwr.common.GetParametersIns.rst:2 #, fuzzy -msgid "Flower ServerApp." -msgstr "Serveur de Flower" +msgid "GetParametersIns" +msgstr ":code:`get_parameters`" -#: ../../source/ref-api/flwr.server.rst:41::1 -#, fuzzy -msgid "" -":py:obj:`ServerConfig `\\ \\(\\[num\\_rounds\\," -" round\\_timeout\\]\\)" +#: ../../source/ref-api/flwr.common.GetParametersIns.rst:28::1 +msgid ":py:obj:`config `\\" msgstr "" -"Flower 1.0 : ``start_server(..., " -"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " -"...)``" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.server_config.ServerConfig:1 of +#: ../../source/ref-api/flwr.common.GetParametersRes.rst:2 #, fuzzy -msgid "Flower server config." -msgstr "Serveur de Flower" +msgid "GetParametersRes" +msgstr ":code:`get_parameters`" -#: ../../source/ref-api/flwr.server.rst:41::1 -msgid ":py:obj:`SimpleClientManager `\\ \\(\\)" +#: ../../source/ref-api/flwr.common.GetParametersRes.rst:29::1 +msgid ":py:obj:`status `\\" msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.client_manager.SimpleClientManager:1 of -msgid "Provides a pool of available clients." 
+#: ../../source/ref-api/flwr.common.GetParametersRes.rst:29::1 +msgid ":py:obj:`parameters `\\" msgstr "" -#: ../../source/ref-api/flwr.server.rst:60::1 -#, fuzzy -msgid ":py:obj:`flwr.server.strategy `\\" -msgstr "serveur.stratégie.Stratégie" +#: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:2 +msgid "GetPropertiesIns" +msgstr "" -#: ../../source/ref-api/flwr.server.rst:60::1 -#: flwr.server.strategy:1 of -msgid "Contains the strategy abstraction and different implementations." +#: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:28::1 +msgid ":py:obj:`config `\\" msgstr "" -#: ../../source/ref-api/flwr.server.rst:60::1 -#, fuzzy -msgid ":py:obj:`flwr.server.workflow `\\" -msgstr "serveur.stratégie.Stratégie" +#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:2 +msgid "GetPropertiesRes" +msgstr "" -#: ../../source/ref-api/flwr.server.rst:60::1 -#: flwr.server.workflow:1 of -#, fuzzy -msgid "Workflows." -msgstr "Flux de travail" - -#: ../../source/ref-api/flwr.server.ClientManager.rst:2 -#, fuzzy -msgid "ClientManager" -msgstr "client" +#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:29::1 +msgid ":py:obj:`status `\\" +msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of -msgid ":py:obj:`all `\\ \\(\\)" +#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:29::1 +msgid ":py:obj:`properties `\\" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1 -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.all:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "Return all available clients." +#: ../../source/ref-api/flwr.common.Message.rst:2 +#, fuzzy +msgid "Message" +msgstr "Côté serveur" + +#: flwr.common.Message.content:1::1 flwr.common.Message.metadata:1 +#: flwr.common.message.Message:3 of +msgid "A dataclass including information about the message to be executed." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of -msgid ":py:obj:`num_available `\\ \\(\\)" +#: flwr.common.message.Message:5 of +msgid "" +"Holds records either sent by another entity (e.g. sent by the server-side" +" logic to a client, or vice-versa) or that will be sent to it." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.num_available:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.num_available:1 of -msgid "Return the number of available clients." +#: flwr.common.message.Message:8 of +msgid "" +"A dataclass that captures information about an error that took place when" +" processing another message." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of -msgid ":py:obj:`register `\\ \\(client\\)" +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +msgid "" +":py:obj:`create_error_reply `\\ " +"\\(error\\[\\, ttl\\]\\)" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.register:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.register:1 of -msgid "Register Flower ClientProxy instance." +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.create_error_reply:1 of +msgid "Construct a reply message indicating an error happened." 
msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of +#: ../../source/ref-api/flwr.common.Message.rst:35::1 msgid "" -":py:obj:`sample `\\ " -"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" +":py:obj:`create_reply `\\ " +"\\(content\\[\\, ttl\\]\\)" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.sample:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.sample:1 of -msgid "Sample a number of Flower ClientProxy instances." +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.create_reply:1 of +msgid "Create a reply to this message with specified content and TTL." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of -msgid ":py:obj:`unregister `\\ \\(client\\)" +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +msgid ":py:obj:`has_content `\\ \\(\\)" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.unregister:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.unregister:1 of -msgid "Unregister Flower ClientProxy instance." +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.has_content:1 of +msgid "Return True if message has content, else False." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of -msgid "" -":py:obj:`wait_for `\\ " -"\\(num\\_clients\\, timeout\\)" +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +msgid ":py:obj:`has_error `\\ \\(\\)" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.wait_for:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.wait_for:1 of -msgid "Wait until at least `num_clients` are available." +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.has_error:1 of +msgid "Return True if message has an error, else False." msgstr "" -#: flwr.server.client_manager.ClientManager.num_available:3 -#: flwr.server.client_manager.SimpleClientManager.num_available:3 of -msgid "**num_available** -- The number of currently available clients." +#: flwr.common.Message.content:1::1 of +msgid ":py:obj:`content `\\" msgstr "" -#: flwr.server.client_manager.ClientManager.register:6 -#: flwr.server.client_manager.SimpleClientManager.register:6 of -msgid "" -"**success** -- Indicating if registration was successful. False if " -"ClientProxy is already registered or can not be registered for any " -"reason." +#: flwr.common.Message.content:1 flwr.common.Message.content:1::1 +#: of +#, fuzzy +msgid "The content of this message." +msgstr "Évaluer la réponse d'un client." + +#: flwr.common.Message.content:1::1 of +msgid ":py:obj:`error `\\" msgstr "" -#: flwr.server.client_manager.ClientManager.unregister:3 -#: flwr.server.client_manager.SimpleClientManager.unregister:3 of -msgid "This method is idempotent." +#: flwr.common.Message.content:1::1 flwr.common.Message.error:1 of +msgid "Error captured by this message." msgstr "" -#: ../../source/ref-api/flwr.server.Driver.rst:2 -#, fuzzy -msgid "Driver" -msgstr "serveur" +#: flwr.common.Message.content:1::1 of +msgid ":py:obj:`metadata `\\" +msgstr "" -#: flwr.server.driver.driver.Driver:3 of -msgid "" -"The IPv4 or IPv6 address of the Driver API server. Defaults to " -"`\"[::]:9091\"`." 
+#: flwr.common.message.Message.create_error_reply:3 of +msgid "The error that was encountered." msgstr "" -#: flwr.server.app.start_server:28 flwr.server.driver.driver.Driver:6 of +#: flwr.common.message.Message.create_error_reply:5 +#: flwr.common.message.Message.create_reply:9 of msgid "" -"Tuple containing root certificate, server certificate, and private key to" -" start a secure SSL-enabled server. The tuple is expected to have three " -"bytes elements in the following order: * CA certificate. * " -"server certificate. * server private key." +"Time-to-live for this message in seconds. If unset, it will be set based " +"on the remaining time for the received message before it expires. This " +"follows the equation: ttl = msg.meta.ttl - (reply.meta.created_at - " +"msg.meta.created_at)" msgstr "" -#: flwr.server.app.start_server:28 flwr.server.driver.driver.Driver:6 of +#: flwr.common.message.Message.create_error_reply:5 +#: flwr.common.message.Message.create_reply:9 of msgid "" -"Tuple containing root certificate, server certificate, and private key to" -" start a secure SSL-enabled server. The tuple is expected to have three " -"bytes elements in the following order:" +"Time-to-live for this message in seconds. If unset, it will be set based " +"on the remaining time for the received message before it expires. This " +"follows the equation:" msgstr "" -#: flwr.server.app.start_server:32 flwr.server.driver.driver.Driver:10 of -#, fuzzy -msgid "CA certificate." -msgstr "Certificats" +#: flwr.common.message.Message.create_error_reply:9 +#: flwr.common.message.Message.create_reply:13 of +msgid "ttl = msg.meta.ttl - (reply.meta.created_at - msg.meta.created_at)" +msgstr "" -#: flwr.server.app.start_server:33 flwr.server.driver.driver.Driver:11 of -#, fuzzy -msgid "server certificate." -msgstr "Certificats" +#: flwr.common.message.Message.create_reply:3 of +msgid "" +"The method generates a new `Message` as a reply to this message. It " +"inherits 'run_id', 'src_node_id', 'dst_node_id', and 'message_type' from " +"this message and sets 'reply_to_message' to the ID of this message." +msgstr "" -#: flwr.server.app.start_server:34 flwr.server.driver.driver.Driver:12 of -#, fuzzy -msgid "server private key." -msgstr "stratégie.du.serveur" +#: flwr.common.message.Message.create_reply:7 of +msgid "The content for the reply message." +msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 of -#, fuzzy -msgid ":py:obj:`close `\\ \\(\\)" -msgstr "serveur.stratégie.Stratégie" +#: flwr.common.message.Message.create_reply:16 of +msgid "A new `Message` instance representing the reply." +msgstr "" -#: flwr.server.driver.driver.Driver.close:1 -#: flwr.server.driver.driver.Driver.close:1::1 of -msgid "Disconnect from the SuperLink if connected." +#: ../../source/ref-api/flwr.common.MessageType.rst:2 +msgid "MessageType" msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 of -msgid "" -":py:obj:`create_message `\\ " -"\\(content\\, message\\_type\\, ...\\)" +#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 +msgid ":py:obj:`EVALUATE `\\" msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 -#: flwr.server.driver.driver.Driver.create_message:1 of -msgid "Create a new message with specified parameters." 
+#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 +msgid ":py:obj:`QUERY `\\" msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 of -msgid ":py:obj:`get_node_ids `\\ \\(\\)" +#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 +msgid ":py:obj:`TRAIN `\\" msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 -#: flwr.server.driver.driver.Driver.get_node_ids:1 of -msgid "Get node IDs." +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:2 +msgid "MessageTypeLegacy" msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 of -msgid "" -":py:obj:`pull_messages `\\ " -"\\(message\\_ids\\)" +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 +msgid ":py:obj:`GET_PARAMETERS `\\" msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 -#: flwr.server.driver.driver.Driver.pull_messages:1 of -msgid "Pull messages based on message IDs." +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 +msgid ":py:obj:`GET_PROPERTIES `\\" msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 of -msgid "" -":py:obj:`push_messages `\\ " -"\\(messages\\)" +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.run_id:1 flwr.common.message.Metadata:3 of +msgid "An identifier for the current run." msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 -#: flwr.server.driver.driver.Driver.push_messages:1 of -msgid "Push messages to specified node IDs." +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.message_id:1 flwr.common.message.Metadata:5 of +msgid "An identifier for the current message." msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 of -#, fuzzy -msgid "" -":py:obj:`send_and_receive `\\ " -"\\(messages\\, \\*\\[\\, timeout\\]\\)" +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.src_node_id:1 flwr.common.message.Metadata:7 of +msgid "An identifier for the node sending this message." msgstr "" -"Flower 1.0 : ``start_server(..., " -"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " -"...)``" -#: flwr.server.driver.driver.Driver.close:1::1 -#: flwr.server.driver.driver.Driver.send_and_receive:1 of -msgid "Push messages to specified node IDs and pull the reply messages." +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.dst_node_id:1 flwr.common.message.Metadata:9 of +msgid "An identifier for the node receiving this message." msgstr "" -#: flwr.server.driver.driver.Driver.create_message:3 of -msgid "" -"This method constructs a new `Message` with given content and metadata. " -"The `run_id` and `src_node_id` will be set automatically." +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.reply_to_message:1 flwr.common.message.Metadata:11 of +msgid "An identifier for the message this message replies to." msgstr "" -#: flwr.server.driver.driver.Driver.create_message:6 of +#: flwr.common.message.Metadata:13 of msgid "" -"The content for the new message. This holds records that are to be sent " -"to the destination node." +"An identifier for grouping messages. In some settings, this is used as " +"the FL round." msgstr "" -#: flwr.server.driver.driver.Driver.create_message:9 of -msgid "" -"The type of the message, defining the action to be executed on the " -"receiving end." +#: flwr.common.message.Metadata:16 of +msgid "Time-to-live for this message in seconds." msgstr "" -#: flwr.server.driver.driver.Driver.create_message:12 of -msgid "The ID of the destination node to which the message is being sent." 
+#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.message_type:1 flwr.common.message.Metadata:18 of +msgid "A string that encodes the action to be executed on the receiving end." msgstr "" -#: flwr.server.driver.driver.Driver.create_message:14 of +#: flwr.common.message.Metadata:21 of msgid "" -"The ID of the group to which this message is associated. In some " -"settings, this is used as the FL round." +"An identifier that can be used when loading a particular data partition " +"for a ClientApp. Making use of this identifier is more relevant when " +"conducting simulations." msgstr "" -#: flwr.server.driver.driver.Driver.create_message:17 of -msgid "" -"Time-to-live for the round trip of this message, i.e., the time from " -"sending this message to receiving a reply. It specifies the duration for " -"which the message and its potential reply are considered valid." +#: flwr.common.Metadata.created_at:1::1 of +#, fuzzy +msgid ":py:obj:`created_at `\\" +msgstr "serveur.stratégie.Stratégie" + +#: flwr.common.Metadata.created_at:1 +#: flwr.common.Metadata.created_at:1::1 of +msgid "Unix timestamp when the message was created." msgstr "" -#: flwr.server.driver.driver.Driver.create_message:22 of -msgid "" -"**message** -- A new `Message` instance with the specified content and " -"metadata." +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`dst_node_id `\\" msgstr "" -#: flwr.server.driver.driver.Driver.pull_messages:3 of -msgid "" -"This method is used to collect messages from the SuperLink that " -"correspond to a set of given message IDs." +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`group_id `\\" msgstr "" -#: flwr.server.driver.driver.Driver.pull_messages:6 of -msgid "An iterable of message IDs for which reply messages are to be retrieved." +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.group_id:1 of +msgid "An identifier for grouping messages." msgstr "" -#: flwr.server.driver.driver.Driver.pull_messages:9 of -msgid "**messages** -- An iterable of messages received." +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`message_id `\\" msgstr "" -#: flwr.server.driver.driver.Driver.push_messages:3 of -msgid "" -"This method takes an iterable of messages and sends each message to the " -"node specified in `dst_node_id`." +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`message_type `\\" msgstr "" -#: flwr.server.driver.driver.Driver.push_messages:6 -#: flwr.server.driver.driver.Driver.send_and_receive:7 of -msgid "An iterable of messages to be sent." +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`partition_id `\\" msgstr "" -#: flwr.server.driver.driver.Driver.push_messages:9 of -msgid "" -"**message_ids** -- An iterable of IDs for the messages that were sent, " -"which can be used to pull replies." +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.partition_id:1 of +msgid "An identifier telling which data partition a ClientApp should use." msgstr "" -#: flwr.server.driver.driver.Driver.send_and_receive:3 of -msgid "" -"This method sends a list of messages to their destination node IDs and " -"then waits for the replies. It continues to pull replies until either all" -" replies are received or the specified timeout duration is exceeded." +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`reply_to_message `\\" msgstr "" -#: flwr.server.driver.driver.Driver.send_and_receive:9 of -msgid "" -"The timeout duration in seconds. 
If specified, the method will wait for " -"replies for this duration. If `None`, there is no time limit and the " -"method will wait until replies for all messages are received." +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`run_id `\\" msgstr "" -#: flwr.server.driver.driver.Driver.send_and_receive:14 of -msgid "**replies** -- An iterable of reply messages received from the SuperLink." +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`src_node_id `\\" msgstr "" -#: flwr.server.driver.driver.Driver.send_and_receive:18 -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:53 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:60 -#: of -#, fuzzy -msgid "Notes" -msgstr "Aucun" +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`ttl `\\" +msgstr "" -#: flwr.server.driver.driver.Driver.send_and_receive:19 of -msgid "" -"This method uses `push_messages` to send the messages and `pull_messages`" -" to collect the replies. If `timeout` is set, the method may not return " -"replies for all sent messages. A message remains valid until its TTL, " -"which is not affected by `timeout`." +#: flwr.common.Metadata.created_at:1::1 flwr.common.Metadata.ttl:1 +#: of +msgid "Time-to-live for this message." msgstr "" -#: ../../source/ref-api/flwr.server.History.rst:2 -msgid "History" +#: ../../source/ref-api/flwr.common.MetricsRecord.rst:2 +msgid "MetricsRecord" msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 of +#: flwr.common.record.metricsrecord.MetricsRecord:1 of msgid "" -":py:obj:`add_loss_centralized " -"`\\ \\(server\\_round\\, " -"loss\\)" +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:class:`int` | :py:class:`float` | " +":py:class:`~typing.List`\\ [:py:class:`int`] | :py:class:`~typing.List`\\" +" [:py:class:`float`]]" msgstr "" -#: flwr.server.history.History.add_loss_centralized:1 -#: flwr.server.history.History.add_loss_centralized:1::1 of -#, fuzzy -msgid "Add one loss entry (from centralized evaluation)." -msgstr "Évaluation centralisée" - -#: flwr.server.history.History.add_loss_centralized:1::1 of -msgid "" -":py:obj:`add_loss_distributed " -"`\\ \\(server\\_round\\, " -"loss\\)" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`clear `\\ \\(\\)" msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 -#: flwr.server.history.History.add_loss_distributed:1 of -msgid "Add one loss entry (from distributed evaluation)." +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`count_bytes `\\ \\(\\)" msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 of -msgid "" -":py:obj:`add_metrics_centralized " -"`\\ \\(server\\_round\\, " -"metrics\\)" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 -#: flwr.server.history.History.add_metrics_centralized:1 of -#, fuzzy -msgid "Add metrics entries (from centralized evaluation)." 
-msgstr "Évaluation centralisée" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`items `\\ \\(\\)" +msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 of -msgid "" -":py:obj:`add_metrics_distributed " -"`\\ \\(server\\_round\\, " -"metrics\\)" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`keys `\\ \\(\\)" msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 -#: flwr.server.history.History.add_metrics_distributed:1 of -msgid "Add metrics entries (from distributed evaluation)." +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 of +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of msgid "" -":py:obj:`add_metrics_distributed_fit " -"`\\ \\(server\\_round\\," -" ...\\)" +":py:obj:`update `\\ \\(\\[E\\, " +"\\]\\*\\*F\\)" msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 -#: flwr.server.history.History.add_metrics_distributed_fit:1 of -msgid "Add metrics entries (from distributed fit)." +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`values `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.LegacyContext.rst:2 -msgid "LegacyContext" +#: ../../source/ref-api/flwr.common.NDArray.rst:2 +msgid "NDArray" msgstr "" -#: flwr.server.compat.legacy_context.LegacyContext:1 of -msgid "Bases: :py:class:`~flwr.common.context.Context`" +#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 +msgid ":py:obj:`tensors `\\" msgstr "" -#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 -#, fuzzy -msgid ":py:obj:`config `\\" -msgstr "serveur.stratégie.Stratégie" - -#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 -#, fuzzy -msgid ":py:obj:`strategy `\\" -msgstr "serveur.stratégie.Stratégie" - -#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 -msgid ":py:obj:`client_manager `\\" +#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 +msgid ":py:obj:`tensor_type `\\" msgstr "" -#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 -#, fuzzy -msgid ":py:obj:`history `\\" -msgstr "serveur.stratégie.Stratégie" - -#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#: ../../source/ref-api/flwr.common.ParametersRecord.rst:2 #, fuzzy -msgid ":py:obj:`state `\\" -msgstr "serveur.stratégie.Stratégie" - -#: flwr.server.server.Server.client_manager:1::1 of -msgid ":py:obj:`client_manager `\\ \\(\\)" -msgstr "" - -#: flwr.server.server.Server.client_manager:1 -#: flwr.server.server.Server.client_manager:1::1 of -msgid "Return ClientManager." -msgstr "" +msgid "ParametersRecord" +msgstr "Paramètres du modèle." -#: flwr.server.server.Server.client_manager:1::1 of +#: flwr.common.record.parametersrecord.ParametersRecord:1 of msgid "" -":py:obj:`disconnect_all_clients " -"`\\ \\(timeout\\)" +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:class:`~flwr.common.record.parametersrecord.Array`]" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.disconnect_all_clients:1 of -msgid "Send shutdown signal to all clients." +#: flwr.common.record.parametersrecord.ParametersRecord:3 of +msgid "" +"A dataclass storing named Arrays in order. This means that it holds " +"entries as an OrderedDict[str, Array]. ParametersRecord objects can be " +"viewed as an equivalent to PyTorch's state_dict, but holding serialised " +"tensors instead." 
msgstr "" -#: flwr.server.server.Server.client_manager:1::1 of -msgid "" -":py:obj:`evaluate_round `\\ " -"\\(server\\_round\\, timeout\\)" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`clear `\\ \\(\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.evaluate_round:1 of -msgid "Validate current global model on a number of clients." +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`count_bytes `\\ \\(\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 of -msgid ":py:obj:`fit `\\ \\(num\\_rounds\\, timeout\\)" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.fit:1 of -msgid "Run federated averaging for a number of rounds." +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`items `\\ \\(\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 of -msgid "" -":py:obj:`fit_round `\\ \\(server\\_round\\," -" timeout\\)" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`keys `\\ \\(\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.fit_round:1 of -msgid "Perform a single round of federated averaging." +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 of +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of msgid "" -":py:obj:`set_max_workers `\\ " -"\\(max\\_workers\\)" +":py:obj:`update `\\ \\(\\[E\\, " +"\\]\\*\\*F\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.set_max_workers:1 of -msgid "Set the max_workers used by ThreadPoolExecutor." +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`values `\\ \\(\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 of -msgid ":py:obj:`set_strategy `\\ \\(strategy\\)" +#: flwr.common.record.parametersrecord.ParametersRecord.count_bytes:3 of +msgid "" +"Note that a small amount of Bytes might also be included in this counting" +" that correspond to metadata of the serialized object (e.g. of NumPy " +"array) needed for deseralization." msgstr "" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.set_strategy:1 of +#: ../../source/ref-api/flwr.common.ReconnectIns.rst:2 #, fuzzy -msgid "Replace server strategy." 
-msgstr "stratégie.du.serveur" +msgid "ReconnectIns" +msgstr "Collecte centralisée des données" -#: ../../source/ref-api/flwr.server.ServerApp.rst:2 -#, fuzzy -msgid "ServerApp" -msgstr "serveur" +#: ../../source/ref-api/flwr.common.ReconnectIns.rst:28::1 +msgid ":py:obj:`seconds `\\" +msgstr "" -#: flwr.server.server_app.ServerApp:5 of -#, fuzzy -msgid "Use the `ServerApp` with an existing `Strategy`:" -msgstr "Utilise une stratégie existante" +#: ../../source/ref-api/flwr.common.RecordSet.rst:2 +msgid "RecordSet" +msgstr "" -#: flwr.server.server_app.ServerApp:15 of -msgid "Use the `ServerApp` with a custom main function:" +#: flwr.common.RecordSet.configs_records:1::1 of +msgid ":py:obj:`configs_records `\\" msgstr "" -#: flwr.server.server_app.ServerApp.main:1::1 of -#, fuzzy -msgid ":py:obj:`main `\\ \\(\\)" -msgstr "serveur.stratégie.Stratégie" +#: flwr.common.RecordSet.configs_records:1 +#: flwr.common.RecordSet.configs_records:1::1 of +msgid "Dictionary holding ConfigsRecord instances." +msgstr "" -#: flwr.server.server_app.ServerApp.main:1 -#: flwr.server.server_app.ServerApp.main:1::1 of -msgid "Return a decorator that registers the main fn with the server app." +#: flwr.common.RecordSet.configs_records:1::1 of +msgid ":py:obj:`metrics_records `\\" msgstr "" -#: ../../source/ref-api/flwr.server.ServerConfig.rst:2 +#: flwr.common.RecordSet.configs_records:1::1 +#: flwr.common.RecordSet.metrics_records:1 of +msgid "Dictionary holding MetricsRecord instances." +msgstr "" + +#: flwr.common.RecordSet.configs_records:1::1 of +msgid ":py:obj:`parameters_records `\\" +msgstr "" + +#: flwr.common.RecordSet.configs_records:1::1 +#: flwr.common.RecordSet.parameters_records:1 of +msgid "Dictionary holding ParametersRecord instances." +msgstr "" + +#: ../../source/ref-api/flwr.common.ServerMessage.rst:2 #, fuzzy -msgid "ServerConfig" -msgstr "serveur" +msgid "ServerMessage" +msgstr "Côté serveur" -#: flwr.server.server_config.ServerConfig:3 of -msgid "" -"All attributes have default values which allows users to configure just " -"the ones they care about." +#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +msgid ":py:obj:`evaluate_ins `\\" msgstr "" -#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 -msgid ":py:obj:`num_rounds `\\" +#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +msgid ":py:obj:`fit_ins `\\" msgstr "" -#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 -msgid ":py:obj:`round_timeout `\\" +#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +msgid "" +":py:obj:`get_parameters_ins " +"`\\" msgstr "" -#: ../../source/ref-api/flwr.server.SimpleClientManager.rst:2 -msgid "SimpleClientManager" +#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +msgid "" +":py:obj:`get_properties_ins " +"`\\" msgstr "" -#: flwr.server.client_manager.SimpleClientManager:1 of -msgid "Bases: :py:class:`~flwr.server.client_manager.ClientManager`" +#: ../../source/ref-api/flwr.common.Status.rst:2 +#, fuzzy +msgid "Status" +msgstr "Statut du client." 
+ +#: ../../source/ref-api/flwr.common.Status.rst:29::1 +msgid ":py:obj:`code `\\" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid ":py:obj:`all `\\ \\(\\)" +#: ../../source/ref-api/flwr.common.Status.rst:29::1 +msgid ":py:obj:`message `\\" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "" -":py:obj:`num_available `\\" -" \\(\\)" +#: ../../source/ref-api/flwr.common.array_from_numpy.rst:2 +msgid "array\\_from\\_numpy" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "" -":py:obj:`register `\\ " -"\\(client\\)" +#: ../../source/ref-api/flwr.common.bytes_to_ndarray.rst:2 +msgid "bytes\\_to\\_ndarray" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "" -":py:obj:`sample `\\ " -"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" +#: ../../source/ref-api/flwr.common.configure.rst:2 +#, fuzzy +msgid "configure" +msgstr "Configurer les clients" + +#: ../../source/ref-api/flwr.common.event.rst:2 +msgid "event" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "" -":py:obj:`unregister `\\ " -"\\(client\\)" +#: ../../source/ref-api/flwr.common.log.rst:2 +msgid "log" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +#: logging.Logger.log:3 of msgid "" -":py:obj:`wait_for `\\ " -"\\(num\\_clients\\[\\, timeout\\]\\)" +"To pass exception information, use the keyword argument exc_info with a " +"true value, e.g." msgstr "" +"Pour transmettre des informations sur les exceptions, utilise l'argument " +"mot-clé exc_info avec une valeur vraie, par ex." -#: flwr.server.client_manager.SimpleClientManager.wait_for:3 of -msgid "" -"Blocks until the requested number of clients is available or until a " -"timeout is reached. Current timeout default: 1 day." +#: logging.Logger.log:6 of +#, python-format +msgid "logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)" +msgstr "logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)" + +#: ../../source/ref-api/flwr.common.ndarray_to_bytes.rst:2 +msgid "ndarray\\_to\\_bytes" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.wait_for:6 of -msgid "The number of clients to wait for." +#: ../../source/ref-api/flwr.common.ndarrays_to_parameters.rst:2 +msgid "ndarrays\\_to\\_parameters" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.wait_for:8 of -msgid "The time in seconds to wait for, defaults to 86400 (24h)." +#: ../../source/ref-api/flwr.common.now.rst:2 +msgid "now" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.wait_for:11 of -msgid "**success**" +#: ../../source/ref-api/flwr.common.parameters_to_ndarrays.rst:2 +msgid "parameters\\_to\\_ndarrays" msgstr "" -#: ../../source/ref-api/flwr.server.run_driver_api.rst:2 -#, fuzzy -msgid "run\\_driver\\_api" -msgstr "flower-driver-api" +#: ../../source/ref-api/flwr.server.rst:2 +msgid "server" +msgstr "serveur" -#: ../../source/ref-api/flwr.server.run_fleet_api.rst:2 -msgid "run\\_fleet\\_api" +#: ../../source/ref-api/flwr.server.rst:26::1 +msgid ":py:obj:`run_driver_api `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.run_server_app.rst:2 -msgid "run\\_server\\_app" +#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.server.app.run_driver_api:1 of +#, fuzzy +msgid "Run Flower server (Driver API)." 
+msgstr "flower-driver-api" + +#: ../../source/ref-api/flwr.server.rst:26::1 +msgid ":py:obj:`run_fleet_api `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.run_superlink.rst:2 +#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.server.app.run_fleet_api:1 of #, fuzzy -msgid "run\\_superlink" -msgstr "flower-superlink" +msgid "Run Flower server (Fleet API)." +msgstr "flower-fleet-api" + +#: ../../source/ref-api/flwr.server.rst:26::1 +msgid ":py:obj:`run_server_app `\\ \\(\\)" +msgstr "" -#: ../../source/ref-api/flwr.server.start_driver.rst:2 +#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.server.run_serverapp.run_server_app:1 of #, fuzzy -msgid "start\\_driver" -msgstr "start_client" +msgid "Run Flower server app." +msgstr "Serveur de Flower" -#: flwr.server.compat.app.start_driver:3 of -msgid "" -"The IPv4 or IPv6 address of the Driver API server. Defaults to " -"`\"[::]:8080\"`." +#: ../../source/ref-api/flwr.server.rst:26::1 +msgid ":py:obj:`run_superlink `\\ \\(\\)" msgstr "" -#: flwr.server.compat.app.start_driver:6 of -msgid "" -"A server implementation, either `flwr.server.Server` or a subclass " -"thereof. If no instance is provided, then `start_driver` will create one." -msgstr "" +#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.server.app.run_superlink:1 of +#, fuzzy +msgid "Run Flower SuperLink (Driver API and Fleet API)." +msgstr "flower-fleet-api" -#: flwr.server.app.start_server:9 flwr.server.compat.app.start_driver:10 -#: flwr.simulation.app.start_simulation:28 of +#: ../../source/ref-api/flwr.server.rst:26::1 msgid "" -"Currently supported values are `num_rounds` (int, default: 1) and " -"`round_timeout` in seconds (float, default: None)." +":py:obj:`start_server `\\ \\(\\*\\[\\, " +"server\\_address\\, server\\, ...\\]\\)" msgstr "" -#: flwr.server.app.start_server:12 flwr.server.compat.app.start_driver:13 of -msgid "" -"An implementation of the abstract base class " -"`flwr.server.strategy.Strategy`. If no strategy is provided, then " -"`start_server` will use `flwr.server.strategy.FedAvg`." +#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.server.app.start_server:1 of +msgid "Start a Flower server using the gRPC transport layer." msgstr "" -#: flwr.server.compat.app.start_driver:17 of -msgid "" -"An implementation of the class `flwr.server.ClientManager`. If no " -"implementation is provided, then `start_driver` will use " -"`flwr.server.SimpleClientManager`." +#: ../../source/ref-api/flwr.server.rst:40::1 +msgid ":py:obj:`ClientManager `\\ \\(\\)" msgstr "" -#: flwr.server.compat.app.start_driver:25 of -msgid "The Driver object to use." +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.client_manager.ClientManager:1 of +msgid "Abstract base class for managing Flower clients." msgstr "" -#: flwr.server.app.start_server:37 flwr.server.compat.app.start_driver:28 of -msgid "**hist** -- Object containing training and evaluation metrics." -msgstr "" +#: ../../source/ref-api/flwr.server.rst:40::1 +#, fuzzy +msgid ":py:obj:`Driver `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" -#: flwr.server.compat.app.start_driver:33 of -msgid "Starting a driver that connects to an insecure server:" +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.driver.driver.Driver:1 of +msgid "Abstract base Driver class for the Driver API." 
msgstr "" -#: flwr.server.compat.app.start_driver:37 of -msgid "Starting a driver that connects to an SSL-enabled server:" +#: ../../source/ref-api/flwr.server.rst:40::1 +msgid ":py:obj:`History `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.start_server.rst:2 -#, fuzzy -msgid "start\\_server" -msgstr "serveur.start_server" - -#: flwr.server.app.start_server:3 of -msgid "The IPv4 or IPv6 address of the server. Defaults to `\"[::]:8080\"`." +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.history.History:1 of +msgid "History class for training and/or evaluation metrics collection." msgstr "" -#: flwr.server.app.start_server:5 of +#: ../../source/ref-api/flwr.server.rst:40::1 msgid "" -"A server implementation, either `flwr.server.Server` or a subclass " -"thereof. If no instance is provided, then `start_server` will create one." +":py:obj:`LegacyContext `\\ \\(state\\[\\, " +"config\\, strategy\\, ...\\]\\)" msgstr "" -#: flwr.server.app.start_server:16 of -msgid "" -"An implementation of the abstract base class `flwr.server.ClientManager`." -" If no implementation is provided, then `start_server` will use " -"`flwr.server.client_manager.SimpleClientManager`." +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.compat.legacy_context.LegacyContext:1 of +msgid "Legacy Context." msgstr "" -#: flwr.server.app.start_server:21 of +#: ../../source/ref-api/flwr.server.rst:40::1 msgid "" -"The maximum length of gRPC messages that can be exchanged with the Flower" -" clients. The default should be sufficient for most models. Users who " -"train very large models might need to increase this value. Note that the " -"Flower clients need to be started with the same value (see " -"`flwr.client.start_client`), otherwise clients will not know about the " -"increased limit and block larger messages." +":py:obj:`Server `\\ \\(\\*\\, client\\_manager\\[\\, " +"strategy\\]\\)" msgstr "" -#: flwr.server.app.start_server:42 of +#: ../../source/ref-api/flwr.server.rst:40::1 #, fuzzy -msgid "Starting an insecure server:" -msgstr "Démarrer le serveur" +msgid "" +":py:obj:`ServerApp `\\ \\(\\[server\\, config\\, " +"strategy\\, ...\\]\\)" +msgstr "serveur.stratégie.Stratégie" -#: flwr.server.app.start_server:46 of +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.server_app.ServerApp:1 of #, fuzzy -msgid "Starting an SSL-enabled server:" -msgstr "Démarrer le serveur" +msgid "Flower ServerApp." +msgstr "Serveur de Flower" -#: ../../source/ref-api/flwr.server.strategy.rst:2 +#: ../../source/ref-api/flwr.server.rst:40::1 #, fuzzy -msgid "strategy" -msgstr "stratégie.du.serveur" - -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`Bulyan `\\ \\(\\*\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\)" +":py:obj:`ServerConfig `\\ \\(\\[num\\_rounds\\," +" round\\_timeout\\]\\)" msgstr "" +"Flower 1.0 : ``start_server(..., " +"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " +"...)``" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.bulyan.Bulyan:1 of +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.server_config.ServerConfig:1 of #, fuzzy -msgid "Bulyan strategy." -msgstr "Stratégies intégrées" +msgid "Flower server config." 
+msgstr "Serveur de Flower" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`DPFedAvgAdaptive `\\ " -"\\(strategy\\, num\\_sampled\\_clients\\)" +#: ../../source/ref-api/flwr.server.rst:40::1 +msgid ":py:obj:`SimpleClientManager `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of -msgid "Wrapper for configuring a Strategy for DP with Adaptive Clipping." +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.client_manager.SimpleClientManager:1 of +msgid "Provides a pool of available clients." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`DPFedAvgFixed `\\ " -"\\(strategy\\, num\\_sampled\\_clients\\, ...\\)" -msgstr "" +#: ../../source/ref-api/flwr.server.rst:59::1 +#, fuzzy +msgid ":py:obj:`flwr.server.strategy `\\" +msgstr "serveur.stratégie.Stratégie" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 of -msgid "Wrapper for configuring a Strategy for DP with Fixed Clipping." +#: ../../source/ref-api/flwr.server.rst:59::1 +#: flwr.server.strategy:1 of +msgid "Contains the strategy abstraction and different implementations." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`DifferentialPrivacyClientSideAdaptiveClipping " -"`\\ " -"\\(...\\)" +#: ../../source/ref-api/flwr.server.rst:59::1 +#, fuzzy +msgid ":py:obj:`flwr.server.workflow `\\" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.server.rst:59::1 +#: flwr.server.workflow:1 of +#, fuzzy +msgid "Workflows." +msgstr "Flux de travail" + +#: ../../source/ref-api/flwr.server.ClientManager.rst:2 +#, fuzzy +msgid "ClientManager" +msgstr "client" + +#: flwr.server.client_manager.ClientManager.all:1::1 of +msgid ":py:obj:`all `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:1 -#: of -msgid "Strategy wrapper for central DP with client-side adaptive clipping." +#: flwr.server.client_manager.ClientManager.all:1 +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.all:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid "Return all available clients." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`DifferentialPrivacyServerSideAdaptiveClipping " -"`\\ " -"\\(...\\)" +#: flwr.server.client_manager.ClientManager.all:1::1 of +msgid ":py:obj:`num_available `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:1 -#: of -msgid "Strategy wrapper for central DP with server-side adaptive clipping." +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.num_available:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.num_available:1 of +msgid "Return the number of available clients." 
msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`DifferentialPrivacyClientSideFixedClipping " -"`\\ " -"\\(...\\)" +#: flwr.server.client_manager.ClientManager.all:1::1 of +msgid ":py:obj:`register `\\ \\(client\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:1 -#: of -msgid "Strategy wrapper for central DP with client-side fixed clipping." +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.register:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.register:1 of +msgid "Register Flower ClientProxy instance." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.client_manager.ClientManager.all:1::1 of msgid "" -":py:obj:`DifferentialPrivacyServerSideFixedClipping " -"`\\ " -"\\(...\\)" +":py:obj:`sample `\\ " +"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:1 -#: of -msgid "Strategy wrapper for central DP with server-side fixed clipping." +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.sample:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.sample:1 of +msgid "Sample a number of Flower ClientProxy instances." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`FedAdagrad `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +#: flwr.server.client_manager.ClientManager.all:1::1 of +msgid ":py:obj:`unregister `\\ \\(client\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedadagrad.FedAdagrad:1 of -#, fuzzy -msgid "FedAdagrad strategy - Adaptive Federated Optimization using Adagrad." +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.unregister:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.unregister:1 of +msgid "Unregister Flower ClientProxy instance." msgstr "" -"`FedAdam` et `FedAdam` correspondent à la dernière version de l'article " -"sur l'optimisation fédérée adaptative." -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.client_manager.ClientManager.all:1::1 of msgid "" -":py:obj:`FedAdam `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +":py:obj:`wait_for `\\ " +"\\(num\\_clients\\, timeout\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedadam.FedAdam:1 of -msgid "FedAdam - Adaptive Federated Optimization using Adam." +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.wait_for:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.wait_for:1 of +msgid "Wait until at least `num_clients` are available." 
msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`FedAvg `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" +#: flwr.server.client_manager.ClientManager.num_available:3 +#: flwr.server.client_manager.SimpleClientManager.num_available:3 of +msgid "**num_available** -- The number of currently available clients." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedavg.FedAvg:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:1 of -#, fuzzy -msgid "Federated Averaging strategy." -msgstr "Stratégie de moyenne fédérée." - -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.client_manager.ClientManager.register:6 +#: flwr.server.client_manager.SimpleClientManager.register:6 of msgid "" -":py:obj:`FedAvgAndroid `\\ " -"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" +"**success** -- Indicating if registration was successful. False if " +"ClientProxy is already registered or can not be registered for any " +"reason." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`FedAvgM `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +#: flwr.server.client_manager.ClientManager.unregister:3 +#: flwr.server.client_manager.SimpleClientManager.unregister:3 of +msgid "This method is idempotent." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedavgm.FedAvgM:1 of +#: ../../source/ref-api/flwr.server.Driver.rst:2 #, fuzzy -msgid "Federated Averaging with Momentum strategy." -msgstr "Stratégie de moyenne fédérée." +msgid "Driver" +msgstr "serveur" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:1::1 of +#, fuzzy msgid "" -":py:obj:`FedMedian `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +":py:obj:`create_message `\\ " +"\\(content\\, message\\_type\\, ...\\[\\, ttl\\]\\)" msgstr "" +"Flower 1.0 : ``start_server(..., " +"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " +"...)``" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedmedian.FedMedian:1 of -#, fuzzy -msgid "Configurable FedMedian strategy implementation." -msgstr "Configuration de l'évaluation fédérée" +#: flwr.server.driver.driver.Driver.create_message:1 +#: flwr.server.driver.driver.Driver.create_message:1::1 of +msgid "Create a new message with specified parameters." +msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`FedOpt `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" +#: flwr.server.driver.driver.Driver.create_message:1::1 of +msgid ":py:obj:`get_node_ids `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedopt.FedOpt:1 of -#, fuzzy -msgid "Federated Optim strategy." -msgstr "Stratégie de moyenne fédérée." +#: flwr.server.driver.driver.Driver.create_message:1::1 +#: flwr.server.driver.driver.Driver.get_node_ids:1 of +msgid "Get node IDs." +msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:1::1 of msgid "" -":py:obj:`FedProx `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +":py:obj:`pull_messages `\\ " +"\\(message\\_ids\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedprox.FedProx:1 of -#, fuzzy -msgid "Federated Optimization strategy." -msgstr "Stratégie de moyenne fédérée." 
+#: flwr.server.driver.driver.Driver.create_message:1::1 +#: flwr.server.driver.driver.Driver.pull_messages:1 of +msgid "Pull messages based on message IDs." +msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:1::1 of msgid "" -":py:obj:`FedTrimmedAvg `\\ " -"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" +":py:obj:`push_messages `\\ " +"\\(messages\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:1 of -msgid "Federated Averaging with Trimmed Mean [Dong Yin, et al., 2021]." +#: flwr.server.driver.driver.Driver.create_message:1::1 +#: flwr.server.driver.driver.Driver.push_messages:1 of +msgid "Push messages to specified node IDs." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:1::1 of +#, fuzzy msgid "" -":py:obj:`FedXgbBagging `\\ " -"\\(\\[evaluate\\_function\\]\\)" +":py:obj:`send_and_receive `\\ " +"\\(messages\\, \\*\\[\\, timeout\\]\\)" msgstr "" +"Flower 1.0 : ``start_server(..., " +"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " +"...)``" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging:1 of -msgid "Configurable FedXgbBagging strategy implementation." +#: flwr.server.driver.driver.Driver.create_message:1::1 +#: flwr.server.driver.driver.Driver.send_and_receive:1 of +msgid "Push messages to specified node IDs and pull the reply messages." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:3 of msgid "" -":py:obj:`FedXgbCyclic `\\ " -"\\(\\*\\*kwargs\\)" +"This method constructs a new `Message` with given content and metadata. " +"The `run_id` and `src_node_id` will be set automatically." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic:1 of -msgid "Configurable FedXgbCyclic strategy implementation." +#: flwr.server.driver.driver.Driver.create_message:6 of +msgid "" +"The content for the new message. This holds records that are to be sent " +"to the destination node." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:9 of msgid "" -":py:obj:`FedXgbNnAvg `\\ \\(\\*args\\, " -"\\*\\*kwargs\\)" +"The type of the message, defining the action to be executed on the " +"receiving end." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:1 of -msgid "Configurable FedXgbNnAvg strategy implementation." +#: flwr.server.driver.driver.Driver.create_message:12 of +msgid "The ID of the destination node to which the message is being sent." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:14 of msgid "" -":py:obj:`FedYogi `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +"The ID of the group to which this message is associated. In some " +"settings, this is used as the FL round." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedyogi.FedYogi:1 of -msgid "FedYogi [Reddi et al., 2020] strategy." +#: flwr.server.driver.driver.Driver.create_message:17 of +msgid "" +"Time-to-live for the round trip of this message, i.e., the time from " +"sending this message to receiving a reply. 
It specifies in seconds the " +"duration for which the message and its potential reply are considered " +"valid. If unset, the default TTL (i.e., `common.DEFAULT_TTL`) will be " +"used." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:23 of msgid "" -":py:obj:`FaultTolerantFedAvg " -"`\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +"**message** -- A new `Message` instance with the specified content and " +"metadata." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg:1 of -msgid "Configurable fault-tolerant FedAvg strategy implementation." +#: flwr.server.driver.driver.Driver.pull_messages:3 of +msgid "" +"This method is used to collect messages from the SuperLink that " +"correspond to a set of given message IDs." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`Krum `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" +#: flwr.server.driver.driver.Driver.pull_messages:6 of +msgid "An iterable of message IDs for which reply messages are to be retrieved." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.krum.Krum:1 of -msgid "Krum [Blanchard et al., 2017] strategy." +#: flwr.server.driver.driver.Driver.pull_messages:9 of +msgid "**messages** -- An iterable of messages received." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.push_messages:3 of msgid "" -":py:obj:`QFedAvg `\\ \\(\\*\\[\\, " -"q\\_param\\, qffl\\_learning\\_rate\\, ...\\]\\)" +"This method takes an iterable of messages and sends each message to the " +"node specified in `dst_node_id`." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.qfedavg.QFedAvg:1 of -msgid "Configurable QFedAvg strategy implementation." +#: flwr.server.driver.driver.Driver.push_messages:6 +#: flwr.server.driver.driver.Driver.send_and_receive:7 of +msgid "An iterable of messages to be sent." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#, fuzzy -msgid ":py:obj:`Strategy `\\ \\(\\)" -msgstr "serveur.stratégie.Stratégie" - -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.strategy.Strategy:1 of -msgid "Abstract base class for server strategy implementations." +#: flwr.server.driver.driver.Driver.push_messages:9 of +msgid "" +"**message_ids** -- An iterable of IDs for the messages that were sent, " +"which can be used to pull replies." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.Bulyan.rst:2 -msgid "Bulyan" +#: flwr.server.driver.driver.Driver.send_and_receive:3 of +msgid "" +"This method sends a list of messages to their destination node IDs and " +"then waits for the replies. It continues to pull replies until either all" +" replies are received or the specified timeout duration is exceeded." 
msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:1 -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg:1 -#: flwr.server.strategy.fedavgm.FedAvgM:1 -#: flwr.server.strategy.fedmedian.FedMedian:1 -#: flwr.server.strategy.fedopt.FedOpt:1 flwr.server.strategy.fedprox.FedProx:1 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging:1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic:1 -#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:1 -#: flwr.server.strategy.krum.Krum:1 flwr.server.strategy.qfedavg.QFedAvg:1 of -msgid "Bases: :py:class:`~flwr.server.strategy.fedavg.FedAvg`" +#: flwr.server.driver.driver.Driver.send_and_receive:9 of +msgid "" +"The timeout duration in seconds. If specified, the method will wait for " +"replies for this duration. If `None`, there is no time limit and the " +"method will wait until replies for all messages are received." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:3 of -msgid "Implementation based on https://arxiv.org/abs/1802.07927." +#: flwr.server.driver.driver.Driver.send_and_receive:14 of +msgid "**replies** -- An iterable of reply messages received from the SuperLink." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:5 -#: flwr.server.strategy.fedadagrad.FedAdagrad:5 -#: flwr.server.strategy.fedadam.FedAdam:5 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:5 -#: flwr.server.strategy.fedavgm.FedAvgM:5 flwr.server.strategy.fedopt.FedOpt:5 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:5 -#: flwr.server.strategy.fedyogi.FedYogi:5 flwr.server.strategy.krum.Krum:5 of -msgid "Fraction of clients used during training. Defaults to 1.0." -msgstr "" +#: flwr.server.driver.driver.Driver.send_and_receive:18 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:53 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:60 +#: of +#, fuzzy +msgid "Notes" +msgstr "Aucun" -#: flwr.server.strategy.bulyan.Bulyan:7 -#: flwr.server.strategy.fedadagrad.FedAdagrad:7 -#: flwr.server.strategy.fedadam.FedAdam:7 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:7 -#: flwr.server.strategy.fedavgm.FedAvgM:7 flwr.server.strategy.fedopt.FedOpt:7 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:7 -#: flwr.server.strategy.fedyogi.FedYogi:7 flwr.server.strategy.krum.Krum:7 of -msgid "Fraction of clients used during validation. Defaults to 1.0." +#: flwr.server.driver.driver.Driver.send_and_receive:19 of +msgid "" +"This method uses `push_messages` to send the messages and `pull_messages`" +" to collect the replies. If `timeout` is set, the method may not return " +"replies for all sent messages. A message remains valid until its TTL, " +"which is not affected by `timeout`." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:9 -#: flwr.server.strategy.fedadagrad.FedAdagrad:9 -#: flwr.server.strategy.fedadam.FedAdam:9 flwr.server.strategy.fedavg.FedAvg:13 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:9 -#: flwr.server.strategy.fedavgm.FedAvgM:9 flwr.server.strategy.fedopt.FedOpt:9 -#: flwr.server.strategy.fedprox.FedProx:45 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:9 -#: flwr.server.strategy.fedyogi.FedYogi:9 flwr.server.strategy.krum.Krum:9 of -msgid "Minimum number of clients used during training. Defaults to 2." 
+#: ../../source/ref-api/flwr.server.History.rst:2 +msgid "History" msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:11 -#: flwr.server.strategy.fedadagrad.FedAdagrad:11 -#: flwr.server.strategy.fedadam.FedAdam:11 -#: flwr.server.strategy.fedavg.FedAvg:15 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:11 -#: flwr.server.strategy.fedavgm.FedAvgM:11 -#: flwr.server.strategy.fedopt.FedOpt:11 -#: flwr.server.strategy.fedprox.FedProx:47 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:11 -#: flwr.server.strategy.fedyogi.FedYogi:11 flwr.server.strategy.krum.Krum:11 of -msgid "Minimum number of clients used during validation. Defaults to 2." +#: flwr.server.history.History.add_loss_centralized:1::1 of +msgid "" +":py:obj:`add_loss_centralized " +"`\\ \\(server\\_round\\, " +"loss\\)" msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:13 -#: flwr.server.strategy.fedadagrad.FedAdagrad:13 -#: flwr.server.strategy.fedadam.FedAdam:13 -#: flwr.server.strategy.fedavg.FedAvg:17 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:13 -#: flwr.server.strategy.fedavgm.FedAvgM:13 -#: flwr.server.strategy.fedopt.FedOpt:13 -#: flwr.server.strategy.fedprox.FedProx:49 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:13 -#: flwr.server.strategy.fedyogi.FedYogi:13 flwr.server.strategy.krum.Krum:13 of -msgid "Minimum number of total clients in the system. Defaults to 2." -msgstr "" +#: flwr.server.history.History.add_loss_centralized:1 +#: flwr.server.history.History.add_loss_centralized:1::1 of +#, fuzzy +msgid "Add one loss entry (from centralized evaluation)." +msgstr "Évaluation centralisée" -#: flwr.server.strategy.bulyan.Bulyan:15 flwr.server.strategy.krum.Krum:15 of -msgid "Number of malicious clients in the system. Defaults to 0." +#: flwr.server.history.History.add_loss_centralized:1::1 of +msgid "" +":py:obj:`add_loss_distributed " +"`\\ \\(server\\_round\\, " +"loss\\)" msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:17 -#: flwr.server.strategy.fedadagrad.FedAdagrad:15 -#: flwr.server.strategy.fedadam.FedAdam:15 -#: flwr.server.strategy.fedavg.FedAvg:19 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:15 -#: flwr.server.strategy.fedavgm.FedAvgM:15 -#: flwr.server.strategy.fedopt.FedOpt:15 -#: flwr.server.strategy.fedprox.FedProx:51 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:15 -#: flwr.server.strategy.fedyogi.FedYogi:17 -#: flwr.server.strategy.fedyogi.FedYogi:18 -#: flwr.server.strategy.fedyogi.FedYogi:19 flwr.server.strategy.krum.Krum:20 of -msgid "Optional function used for validation. Defaults to None." +#: flwr.server.history.History.add_loss_centralized:1::1 +#: flwr.server.history.History.add_loss_distributed:1 of +msgid "Add one loss entry (from distributed evaluation)." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:19 -#: flwr.server.strategy.fedadagrad.FedAdagrad:17 -#: flwr.server.strategy.fedadam.FedAdam:17 -#: flwr.server.strategy.fedavg.FedAvg:21 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:17 -#: flwr.server.strategy.fedavgm.FedAvgM:17 -#: flwr.server.strategy.fedopt.FedOpt:17 -#: flwr.server.strategy.fedprox.FedProx:53 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:17 -#: flwr.server.strategy.fedyogi.FedYogi:20 flwr.server.strategy.krum.Krum:22 of -msgid "Function used to configure training. Defaults to None." 
+#: flwr.server.history.History.add_loss_centralized:1::1 of +msgid "" +":py:obj:`add_metrics_centralized " +"`\\ \\(server\\_round\\, " +"metrics\\)" msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:21 -#: flwr.server.strategy.fedadagrad.FedAdagrad:19 -#: flwr.server.strategy.fedadam.FedAdam:19 -#: flwr.server.strategy.fedavg.FedAvg:23 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:19 -#: flwr.server.strategy.fedavgm.FedAvgM:19 -#: flwr.server.strategy.fedopt.FedOpt:19 -#: flwr.server.strategy.fedprox.FedProx:55 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:19 -#: flwr.server.strategy.fedyogi.FedYogi:22 flwr.server.strategy.krum.Krum:24 of -msgid "Function used to configure validation. Defaults to None." -msgstr "" +#: flwr.server.history.History.add_loss_centralized:1::1 +#: flwr.server.history.History.add_metrics_centralized:1 of +#, fuzzy +msgid "Add metrics entries (from centralized evaluation)." +msgstr "Évaluation centralisée" -#: flwr.server.strategy.bulyan.Bulyan:23 -#: flwr.server.strategy.fedadagrad.FedAdagrad:25 -#: flwr.server.strategy.fedadam.FedAdam:21 -#: flwr.server.strategy.fedavg.FedAvg:25 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:21 -#: flwr.server.strategy.fedavgm.FedAvgM:21 -#: flwr.server.strategy.fedopt.FedOpt:21 -#: flwr.server.strategy.fedprox.FedProx:57 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:21 -#: flwr.server.strategy.fedyogi.FedYogi:24 flwr.server.strategy.krum.Krum:26 of -msgid "Whether or not accept rounds containing failures. Defaults to True." +#: flwr.server.history.History.add_loss_centralized:1::1 of +msgid "" +":py:obj:`add_metrics_distributed " +"`\\ \\(server\\_round\\, " +"metrics\\)" msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:25 -#: flwr.server.strategy.fedadagrad.FedAdagrad:27 -#: flwr.server.strategy.fedadam.FedAdam:23 -#: flwr.server.strategy.fedavg.FedAvg:27 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:24 -#: flwr.server.strategy.fedavgm.FedAvgM:23 -#: flwr.server.strategy.fedopt.FedOpt:23 -#: flwr.server.strategy.fedprox.FedProx:59 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:23 -#: flwr.server.strategy.fedyogi.FedYogi:26 flwr.server.strategy.krum.Krum:28 of -#, fuzzy -msgid "Initial global model parameters." -msgstr "Initialise le modèle global" +#: flwr.server.history.History.add_loss_centralized:1::1 +#: flwr.server.history.History.add_metrics_distributed:1 of +msgid "Add metrics entries (from distributed evaluation)." +msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:27 of +#: flwr.server.history.History.add_loss_centralized:1::1 of msgid "" -"Byzantine resilient aggregation rule that is used as the first step of " -"the Bulyan (e.g., Krum)" +":py:obj:`add_metrics_distributed_fit " +"`\\ \\(server\\_round\\," +" ...\\)" msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:29 of -msgid "arguments to the first_aggregation rule" +#: flwr.server.history.History.add_loss_centralized:1::1 +#: flwr.server.history.History.add_metrics_distributed_fit:1 of +msgid "Add metrics entries (from distributed fit)." 
msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\, " -"results\\, ...\\)" +#: ../../source/ref-api/flwr.server.LegacyContext.rst:2 +msgid "LegacyContext" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1 -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.compat.legacy_context.LegacyContext:1 of +msgid "Bases: :py:class:`~flwr.common.context.Context`" +msgstr "" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 #, fuzzy -msgid "Aggregate evaluation losses using weighted average." -msgstr "Résultats globaux de l'évaluation." +msgid ":py:obj:`config `\\" +msgstr "serveur.stratégie.Stratégie" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#, fuzzy +msgid ":py:obj:`strategy `\\" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +msgid ":py:obj:`client_manager `\\" msgstr "" -#: flwr.server.strategy.bulyan.Bulyan.aggregate_fit:1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 #, fuzzy -msgid "Aggregate fit results using Bulyan." -msgstr "Résultats globaux de l'évaluation." 
+msgid ":py:obj:`history `\\" +msgstr "serveur.stratégie.Stratégie" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\, " -"parameters\\, ...\\)" +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#, fuzzy +msgid ":py:obj:`state `\\" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.server.Server.rst:2 +msgid "Server" +msgstr "Serveur" + +#: flwr.server.server.Server.client_manager:1::1 of +msgid ":py:obj:`client_manager `\\ \\(\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.configure_evaluate:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.configure_evaluate:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.configure_evaluate:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.configure_evaluate:1 -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.configure_evaluate:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.configure_evaluate:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.configure_evaluate:1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.configure_evaluate:1 -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: flwr.server.strategy.strategy.Strategy.configure_evaluate:1 of -#, fuzzy -msgid "Configure the next round of evaluation." -msgstr "Configuration de l'évaluation côté serveur" +#: flwr.server.server.Server.client_manager:1 +#: flwr.server.server.Server.client_manager:1::1 of +msgid "Return ClientManager." 
+msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.server.Server.client_manager:1::1 of msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`disconnect_all_clients " +"`\\ \\(timeout\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.configure_fit:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.configure_fit:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.configure_fit:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.configure_fit:1 -#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive.configure_fit:1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.configure_fit:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.configure_fit:1 -#: flwr.server.strategy.fedprox.FedProx.configure_fit:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.configure_fit:1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.configure_fit:1 -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: flwr.server.strategy.strategy.Strategy.configure_fit:1 of -msgid "Configure the next round of training." +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.disconnect_all_clients:1 of +msgid "Send shutdown signal to all clients." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.server.Server.client_manager:1::1 of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`evaluate_round `\\ " +"\\(server\\_round\\, timeout\\)" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.evaluate:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.evaluate:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.evaluate:1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg.evaluate:1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of -msgid "Evaluate model parameters using an evaluation function." 
+#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.evaluate_round:1 of +msgid "Validate current global model on a number of clients." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +#: flwr.server.server.Server.client_manager:1::1 of +msgid ":py:obj:`fit `\\ \\(num\\_rounds\\, timeout\\)" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.initialize_parameters:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.initialize_parameters:1 -#: flwr.server.strategy.fedavgm.FedAvgM.initialize_parameters:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of -#, fuzzy -msgid "Initialize global model parameters." -msgstr "Initialise le modèle global" +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.fit:1 of +msgid "Run federated averaging for a number of rounds." +msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.server.Server.client_manager:1::1 of msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`fit_round `\\ \\(server\\_round\\," +" timeout\\)" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.num_evaluation_clients:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.num_evaluation_clients:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.num_evaluation_clients:1 of -msgid "Use a fraction of available clients for evaluation." +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.fit_round:1 of +msgid "Perform a single round of federated averaging." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.server.Server.client_manager:1::1 of msgid "" -":py:obj:`num_fit_clients `\\" -" \\(num\\_available\\_clients\\)" +":py:obj:`set_max_workers `\\ " +"\\(max\\_workers\\)" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.num_fit_clients:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.num_fit_clients:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.num_fit_clients:1 of -msgid "Return the sample size and the required number of available clients." 
+#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.set_max_workers:1 of +msgid "Set the max_workers used by ThreadPoolExecutor." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.DPFedAvgAdaptive.rst:2 -msgid "DPFedAvgAdaptive" -msgstr "DPFedAvgAdaptive" - -#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of -msgid "Bases: :py:class:`~flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed`" +#: flwr.server.server.Server.client_manager:1::1 of +msgid ":py:obj:`set_strategy `\\ \\(strategy\\)" msgstr "" -#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:3 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:3 of -msgid "This class is deprecated and will be removed in a future release." +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.set_strategy:1 of +#, fuzzy +msgid "Replace server strategy." +msgstr "stratégie.du.serveur" + +#: ../../source/ref-api/flwr.server.ServerApp.rst:2 +#, fuzzy +msgid "ServerApp" +msgstr "serveur" + +#: flwr.server.server_app.ServerApp:5 of +#, fuzzy +msgid "Use the `ServerApp` with an existing `Strategy`:" +msgstr "Utilise une stratégie existante" + +#: flwr.server.server_app.ServerApp:15 of +msgid "Use the `ServerApp` with a custom main function:" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +#: flwr.server.server_app.ServerApp.main:1::1 of +#, fuzzy +msgid ":py:obj:`main `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: flwr.server.server_app.ServerApp.main:1 +#: flwr.server.server_app.ServerApp.main:1::1 of +msgid "Return a decorator that registers the main fn with the server app." msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.ServerConfig.rst:2 #, fuzzy -msgid "Aggregate evaluation losses using the given strategy." -msgstr "Résultats globaux de l'évaluation." +msgid "ServerConfig" +msgstr "serveur" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of +#: flwr.server.server_config.ServerConfig:3 of msgid "" -":py:obj:`aggregate_fit " -"`\\ " -"\\(server\\_round\\, results\\, failures\\)" +"All attributes have default values which allows users to configure just " +"the ones they care about." 
msgstr "" -#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive.aggregate_fit:1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "Aggregate training results as in DPFedAvgFixed and update clip norms." +#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 +msgid ":py:obj:`num_rounds `\\" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 +msgid ":py:obj:`round_timeout `\\" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:1 of -msgid "Configure the next round of evaluation using the specified strategy." +#: ../../source/ref-api/flwr.server.SimpleClientManager.rst:2 +msgid "SimpleClientManager" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_fit " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: flwr.server.client_manager.SimpleClientManager:1 of +msgid "Bases: :py:class:`~flwr.server.client_manager.ClientManager`" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid ":py:obj:`all `\\ \\(\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.evaluate:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.evaluate:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.evaluate:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.evaluate:1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.evaluate:1 of -msgid "Evaluate model parameters using an evaluation function from the strategy." 
+#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid "" +":py:obj:`num_available `\\" +" \\(\\)" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +":py:obj:`register `\\ " +"\\(client\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.initialize_parameters:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.initialize_parameters:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.initialize_parameters:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.initialize_parameters:1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.initialize_parameters:1 of -msgid "Initialize global model parameters using given strategy." +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid "" +":py:obj:`sample `\\ " +"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:3 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:6 -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:3 -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:3 -#: flwr.server.strategy.strategy.Strategy.configure_evaluate:3 -#: flwr.server.strategy.strategy.Strategy.configure_fit:3 -#: flwr.server.strategy.strategy.Strategy.evaluate:6 of -#, fuzzy -msgid "The current round of federated learning." -msgstr "Qu'est-ce que l'apprentissage fédéré ?" +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid "" +":py:obj:`unregister `\\ " +"\\(client\\)" +msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:7 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:10 -#: flwr.server.strategy.strategy.Strategy.configure_evaluate:7 -#: flwr.server.strategy.strategy.Strategy.configure_fit:7 -#: flwr.server.strategy.strategy.Strategy.initialize_parameters:3 of -msgid "The client manager which holds all currently connected clients." +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid "" +":py:obj:`wait_for `\\ " +"\\(num\\_clients\\[\\, timeout\\]\\)" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:10 -#: flwr.server.strategy.strategy.Strategy.configure_evaluate:10 of +#: flwr.server.client_manager.SimpleClientManager.wait_for:3 of msgid "" -"**evaluate_configuration** -- A list of tuples. Each tuple in the list " -"identifies a `ClientProxy` and the `EvaluateIns` for this particular " -"`ClientProxy`. If a particular `ClientProxy` is not included in this " -"list, it means that this `ClientProxy` will not participate in the next " -"round of federated evaluation." 
+"Blocks until the requested number of clients is available or until a " +"timeout is reached. Current timeout default: 1 day." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.DPFedAvgFixed.rst:2 -msgid "DPFedAvgFixed" -msgstr "DPFedAvgFixed" +#: flwr.server.client_manager.SimpleClientManager.wait_for:6 of +msgid "The number of clients to wait for." +msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 -#: flwr.server.strategy.fedavg.FedAvg:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:1 of -msgid "Bases: :py:class:`~flwr.server.strategy.strategy.Strategy`" +#: flwr.server.client_manager.SimpleClientManager.wait_for:8 of +msgid "The time in seconds to wait for, defaults to 86400 (24h)." msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +#: flwr.server.client_manager.SimpleClientManager.wait_for:11 of +msgid "**success**" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_fit " -"`\\ " -"\\(server\\_round\\, results\\, failures\\)" +#: ../../source/ref-api/flwr.server.run_driver_api.rst:2 +#, fuzzy +msgid "run\\_driver\\_api" +msgstr "flower-driver-api" + +#: ../../source/ref-api/flwr.server.run_fleet_api.rst:2 +msgid "run\\_fleet\\_api" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_fit:1 of -msgid "Aggregate training results using unweighted aggregation." +#: ../../source/ref-api/flwr.server.run_server_app.rst:2 +msgid "run\\_server\\_app" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: ../../source/ref-api/flwr.server.run_superlink.rst:2 +#, fuzzy +msgid "run\\_superlink" +msgstr "flower-superlink" + +#: ../../source/ref-api/flwr.server.start_server.rst:2 +#, fuzzy +msgid "start\\_server" +msgstr "serveur.start_server" + +#: flwr.server.app.start_server:3 of +msgid "The IPv4 or IPv6 address of the server. Defaults to `\"[::]:8080\"`." msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of +#: flwr.server.app.start_server:5 of msgid "" -":py:obj:`configure_fit " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"A server implementation, either `flwr.server.Server` or a subclass " +"thereof. If no instance is provided, then `start_server` will create one." msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:1 of +#: flwr.server.app.start_server:9 flwr.simulation.app.start_simulation:28 of msgid "" -"Configure the next round of training incorporating Differential Privacy " -"(DP)." +"Currently supported values are `num_rounds` (int, default: 1) and " +"`round_timeout` in seconds (float, default: None)." 
msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of +#: flwr.server.app.start_server:12 of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +"An implementation of the abstract base class " +"`flwr.server.strategy.Strategy`. If no strategy is provided, then " +"`start_server` will use `flwr.server.strategy.FedAvg`." msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of +#: flwr.server.app.start_server:16 of msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +"An implementation of the abstract base class `flwr.server.ClientManager`." +" If no implementation is provided, then `start_server` will use " +"`flwr.server.client_manager.SimpleClientManager`." msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:3 of +#: flwr.server.app.start_server:21 of msgid "" -"Configuration of the next training round includes information related to " -"DP, such as clip norm and noise stddev." +"The maximum length of gRPC messages that can be exchanged with the Flower" +" clients. The default should be sufficient for most models. Users who " +"train very large models might need to increase this value. Note that the " +"Flower clients need to be started with the same value (see " +"`flwr.client.start_client`), otherwise clients will not know about the " +"increased limit and block larger messages." msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:13 -#: flwr.server.strategy.strategy.Strategy.configure_fit:10 of +#: flwr.server.app.start_server:28 of msgid "" -"**fit_configuration** -- A list of tuples. Each tuple in the list " -"identifies a `ClientProxy` and the `FitIns` for this particular " -"`ClientProxy`. If a particular `ClientProxy` is not included in this " -"list, it means that this `ClientProxy` will not participate in the next " -"round of federated learning." +"Tuple containing root certificate, server certificate, and private key to" +" start a secure SSL-enabled server. The tuple is expected to have three " +"bytes elements in the following order: * CA certificate. * " +"server certificate. * server private key." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideAdaptiveClipping.rst:2 -msgid "DifferentialPrivacyClientSideAdaptiveClipping" +#: flwr.server.app.start_server:28 of +msgid "" +"Tuple containing root certificate, server certificate, and private key to" +" start a secure SSL-enabled server. The tuple is expected to have three " +"bytes elements in the following order:" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:3 -#: of -msgid "Use `adaptiveclipping_mod` modifier at the client side." -msgstr "" +#: flwr.server.app.start_server:32 of +#, fuzzy +msgid "CA certificate." +msgstr "Certificats" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:5 -#: of -msgid "" -"In comparison to `DifferentialPrivacyServerSideAdaptiveClipping`, which " -"performs clipping on the server-side, " -"`DifferentialPrivacyClientSideAdaptiveClipping` expects clipping to " -"happen on the client-side, usually by using the built-in " -"`adaptiveclipping_mod`." -msgstr "" +#: flwr.server.app.start_server:33 of +#, fuzzy +msgid "server certificate." 
+msgstr "Certificats" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:10 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:3 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:10 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:3 -#: of -msgid "The strategy to which DP functionalities will be added by this wrapper." +#: flwr.server.app.start_server:34 of +#, fuzzy +msgid "server private key." +msgstr "stratégie.du.serveur" + +#: flwr.server.app.start_server:37 of +msgid "**hist** -- Object containing training and evaluation metrics." msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:12 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:5 -#: of -msgid "The noise multiplier for the Gaussian mechanism for model updates." -msgstr "" +#: flwr.server.app.start_server:42 of +#, fuzzy +msgid "Starting an insecure server:" +msgstr "Démarrer le serveur" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:14 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:7 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:17 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:10 -#: of -msgid "The number of clients that are sampled on each round." -msgstr "" +#: flwr.server.app.start_server:46 of +#, fuzzy +msgid "Starting an SSL-enabled server:" +msgstr "Démarrer le serveur" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:16 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:9 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:2 +#, fuzzy +msgid "strategy" +msgstr "stratégie.du.serveur" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -"The initial value of clipping norm. Defaults to 0.1. Andrew et al. " -"recommends to set to 0.1." +":py:obj:`Bulyan `\\ \\(\\*\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:19 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:12 -#: of -msgid "The desired quantile of updates which should be clipped. Defaults to 0.5." -msgstr "" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.bulyan.Bulyan:1 of +#, fuzzy +msgid "Bulyan strategy." +msgstr "Stratégies intégrées" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:21 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:14 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -"The learning rate for the clipping norm adaptation. Defaults to 0.2. " -"Andrew et al. recommends to set to 0.2." +":py:obj:`DPFedAvgAdaptive `\\ " +"\\(strategy\\, num\\_sampled\\_clients\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:24 -#: of -msgid "" -"The stddev of the noise added to the count of updates currently below the" -" estimate. Andrew et al. 
recommends to set to `expected_num_records/20`" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of +msgid "Wrapper for configuring a Strategy for DP with Adaptive Clipping." msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:30 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:23 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:22 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:15 -#: of -#, fuzzy -msgid "Create a strategy:" -msgstr "stratégie.du.serveur" - -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:34 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -"Wrap the strategy with the " -"`DifferentialPrivacyClientSideAdaptiveClipping` wrapper:" +":py:obj:`DPFedAvgFixed `\\ " +"\\(strategy\\, num\\_sampled\\_clients\\, ...\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:40 -#: of -msgid "On the client, add the `adaptiveclipping_mod` to the client-side mods:" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 of +msgid "Wrapper for configuring a Strategy for DP with Fixed Clipping." msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`aggregate_evaluate " -"`\\" -" \\(server\\_round\\, results\\, ...\\)" +":py:obj:`DifferentialPrivacyClientSideAdaptiveClipping " +"`\\ " +"\\(...\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:1 #: of -msgid "" -":py:obj:`aggregate_fit " -"`\\" -" \\(server\\_round\\, results\\, failures\\)" +msgid "Strategy wrapper for central DP with client-side adaptive clipping." msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_fit:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_fit:1 -#: of -#, fuzzy -msgid "Aggregate training results and update clip norms." -msgstr "Résultats globaux de l'évaluation." 
- -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`configure_evaluate " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`DifferentialPrivacyServerSideAdaptiveClipping " +"`\\ " +"\\(...\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:1 #: of -msgid "" -":py:obj:`configure_fit " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" +msgid "Strategy wrapper for central DP with server-side adaptive clipping." msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`evaluate " -"`\\" -" \\(server\\_round\\, parameters\\)" +":py:obj:`DifferentialPrivacyClientSideFixedClipping " +"`\\ " +"\\(...\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:1 #: of -msgid "" -":py:obj:`initialize_parameters " -"`\\" -" \\(client\\_manager\\)" +msgid "Strategy wrapper for central DP with client-side fixed clipping." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideFixedClipping.rst:2 -#, fuzzy -msgid "DifferentialPrivacyClientSideFixedClipping" -msgstr "Confidentialité différentielle" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`DifferentialPrivacyServerSideFixedClipping " +"`\\ " +"\\(...\\)" +msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:3 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:1 #: of -msgid "Use `fixedclipping_mod` modifier at the client side." +msgid "Strategy wrapper for central DP with server-side fixed clipping." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:5 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -"In comparison to `DifferentialPrivacyServerSideFixedClipping`, which " -"performs clipping on the server-side, " -"`DifferentialPrivacyClientSideFixedClipping` expects clipping to happen " -"on the client-side, usually by using the built-in `fixedclipping_mod`." +":py:obj:`FedAdagrad `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:12 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:5 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedadagrad.FedAdagrad:1 of +#, fuzzy +msgid "FedAdagrad strategy - Adaptive Federated Optimization using Adagrad." +msgstr "" +"`FedAdam` et `FedAdam` correspondent à la dernière version de l'article " +"sur l'optimisation fédérée adaptative." + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -"The noise multiplier for the Gaussian mechanism for model updates. A " -"value of 1.0 or higher is recommended for strong privacy." 
+":py:obj:`FedAdam `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:15 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:8 -#: of -msgid "The value of the clipping norm." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedadam.FedAdam:1 of +msgid "FedAdam - Adaptive Federated Optimization using Adam." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:26 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -"Wrap the strategy with the `DifferentialPrivacyClientSideFixedClipping` " -"wrapper:" +":py:obj:`FedAvg `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:32 -#: of -msgid "On the client, add the `fixedclipping_mod` to the client-side mods:" -msgstr "" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedavg.FedAvg:1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:1 of +#, fuzzy +msgid "Federated Averaging strategy." +msgstr "Stratégie de moyenne fédérée." -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`aggregate_evaluate " -"`\\" -" \\(server\\_round\\, results\\, ...\\)" +":py:obj:`FedAvgAndroid `\\ " +"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`aggregate_fit " -"`\\" -" \\(server\\_round\\, results\\, failures\\)" +":py:obj:`FedAvgM `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_fit:1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedavgm.FedAvgM:1 of #, fuzzy -msgid "Add noise to the aggregated parameters." -msgstr "Puis sérialise le résultat agrégé :" +msgid "Federated Averaging with Momentum strategy." +msgstr "Stratégie de moyenne fédérée." -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`configure_evaluate " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`FedMedian `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_fit " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" -msgstr "" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedmedian.FedMedian:1 of +#, fuzzy +msgid "Configurable FedMedian strategy implementation." 
+msgstr "Configuration de l'évaluation fédérée" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`evaluate " -"`\\" -" \\(server\\_round\\, parameters\\)" +":py:obj:`FedOpt `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedopt.FedOpt:1 of +#, fuzzy +msgid "Federated Optim strategy." +msgstr "Stratégie de moyenne fédérée." + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`initialize_parameters " -"`\\" -" \\(client\\_manager\\)" +":py:obj:`FedProx `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideAdaptiveClipping.rst:2 -msgid "DifferentialPrivacyServerSideAdaptiveClipping" -msgstr "" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedprox.FedProx:1 of +#, fuzzy +msgid "Federated Optimization strategy." +msgstr "Stratégie de moyenne fédérée." -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:17 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -"The standard deviation of the noise added to the count of updates below " -"the estimate. Andrew et al. recommends to set to " -"`expected_num_records/20`" +":py:obj:`FedTrimmedAvg `\\ " +"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:27 -#: of -msgid "" -"Wrap the strategy with the DifferentialPrivacyServerSideAdaptiveClipping " -"wrapper" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:1 of +msgid "Federated Averaging with Trimmed Mean [Dong Yin, et al., 2021]." msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`aggregate_evaluate " -"`\\" -" \\(server\\_round\\, results\\, ...\\)" +":py:obj:`FedXgbBagging `\\ " +"\\(\\[evaluate\\_function\\]\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_fit " -"`\\" -" \\(server\\_round\\, results\\, failures\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging:1 of +msgid "Configurable FedXgbBagging strategy implementation." msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`configure_evaluate " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`FedXgbCyclic `\\ " +"\\(\\*\\*kwargs\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_fit " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic:1 of +msgid "Configurable FedXgbCyclic strategy implementation." 
msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`evaluate " -"`\\" -" \\(server\\_round\\, parameters\\)" +":py:obj:`FedXgbNnAvg `\\ \\(\\*args\\, " +"\\*\\*kwargs\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`initialize_parameters " -"`\\" -" \\(client\\_manager\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:1 of +msgid "Configurable FedXgbNnAvg strategy implementation." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideFixedClipping.rst:2 -#, fuzzy -msgid "DifferentialPrivacyServerSideFixedClipping" -msgstr "Confidentialité différentielle" - -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:19 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -"Wrap the strategy with the DifferentialPrivacyServerSideFixedClipping " -"wrapper" +":py:obj:`FedYogi `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\" -" \\(server\\_round\\, results\\, ...\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedyogi.FedYogi:1 of +msgid "FedYogi [Reddi et al., 2020] strategy." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`aggregate_fit " -"`\\" -" \\(server\\_round\\, results\\, failures\\)" -msgstr "" - -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_fit:1 -#: of -msgid "Compute the updates, clip, and pass them for aggregation." +":py:obj:`FaultTolerantFedAvg " +"`\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_evaluate " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg:1 of +msgid "Configurable fault-tolerant FedAvg strategy implementation." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`configure_fit " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`Krum `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`evaluate " -"`\\" -" \\(server\\_round\\, parameters\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.krum.Krum:1 of +msgid "Krum [Blanchard et al., 2017] strategy." 
msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`initialize_parameters " -"`\\" -" \\(client\\_manager\\)" +":py:obj:`QFedAvg `\\ \\(\\*\\[\\, " +"q\\_param\\, qffl\\_learning\\_rate\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_fit:3 -#: of -msgid "Afterward, add noise to the aggregated parameters." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.qfedavg.QFedAvg:1 of +msgid "Configurable QFedAvg strategy implementation." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FaultTolerantFedAvg.rst:2 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #, fuzzy -msgid "FaultTolerantFedAvg" -msgstr "server.strategy.FaultTolerantFedAvg" - -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" -msgstr "" +msgid ":py:obj:`Strategy `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_fit " -"`\\ " -"\\(server\\_round\\, results\\, failures\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.strategy.Strategy:1 of +msgid "Abstract base class for server strategy implementations." msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_fit:1 -#: flwr.server.strategy.fedadagrad.FedAdagrad.aggregate_fit:1 -#: flwr.server.strategy.fedadam.FedAdam.aggregate_fit:1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_fit:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_fit:1 -#: flwr.server.strategy.fedavgm.FedAvgM.aggregate_fit:1 -#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg.aggregate_fit:1 -#: flwr.server.strategy.fedyogi.FedYogi.aggregate_fit:1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_fit:1 of -msgid "Aggregate fit results using weighted average." 
+#: ../../source/ref-api/flwr.server.strategy.Bulyan.rst:2 +msgid "Bulyan" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: flwr.server.strategy.bulyan.Bulyan:1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg:1 +#: flwr.server.strategy.fedavgm.FedAvgM:1 +#: flwr.server.strategy.fedmedian.FedMedian:1 +#: flwr.server.strategy.fedopt.FedOpt:1 flwr.server.strategy.fedprox.FedProx:1 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging:1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic:1 +#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:1 +#: flwr.server.strategy.krum.Krum:1 flwr.server.strategy.qfedavg.QFedAvg:1 of +msgid "Bases: :py:class:`~flwr.server.strategy.fedavg.FedAvg`" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_fit " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: flwr.server.strategy.bulyan.Bulyan:3 of +msgid "Implementation based on https://arxiv.org/abs/1802.07927." msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +#: flwr.server.strategy.bulyan.Bulyan:5 +#: flwr.server.strategy.fedadagrad.FedAdagrad:5 +#: flwr.server.strategy.fedadam.FedAdam:5 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:5 +#: flwr.server.strategy.fedavgm.FedAvgM:5 flwr.server.strategy.fedopt.FedOpt:5 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:5 +#: flwr.server.strategy.fedyogi.FedYogi:5 flwr.server.strategy.krum.Krum:5 of +msgid "Fraction of clients used during training. Defaults to 1.0." msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +#: flwr.server.strategy.bulyan.Bulyan:7 +#: flwr.server.strategy.fedadagrad.FedAdagrad:7 +#: flwr.server.strategy.fedadam.FedAdam:7 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:7 +#: flwr.server.strategy.fedavgm.FedAvgM:7 flwr.server.strategy.fedopt.FedOpt:7 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:7 +#: flwr.server.strategy.fedyogi.FedYogi:7 flwr.server.strategy.krum.Krum:7 of +msgid "Fraction of clients used during validation. Defaults to 1.0." msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: flwr.server.strategy.bulyan.Bulyan:9 +#: flwr.server.strategy.fedadagrad.FedAdagrad:9 +#: flwr.server.strategy.fedadam.FedAdam:9 flwr.server.strategy.fedavg.FedAvg:13 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:9 +#: flwr.server.strategy.fedavgm.FedAvgM:9 flwr.server.strategy.fedopt.FedOpt:9 +#: flwr.server.strategy.fedprox.FedProx:45 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:9 +#: flwr.server.strategy.fedyogi.FedYogi:9 flwr.server.strategy.krum.Krum:9 of +msgid "Minimum number of clients used during training. Defaults to 2." 
msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: flwr.server.strategy.bulyan.Bulyan:11 +#: flwr.server.strategy.fedadagrad.FedAdagrad:11 +#: flwr.server.strategy.fedadam.FedAdam:11 +#: flwr.server.strategy.fedavg.FedAvg:15 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:11 +#: flwr.server.strategy.fedavgm.FedAvgM:11 +#: flwr.server.strategy.fedopt.FedOpt:11 +#: flwr.server.strategy.fedprox.FedProx:47 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:11 +#: flwr.server.strategy.fedyogi.FedYogi:11 flwr.server.strategy.krum.Krum:11 of +msgid "Minimum number of clients used during validation. Defaults to 2." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedAdagrad.rst:2 -#: ../../source/ref-changelog.md:839 -msgid "FedAdagrad" -msgstr "FedAdagrad" +#: flwr.server.strategy.bulyan.Bulyan:13 +#: flwr.server.strategy.fedadagrad.FedAdagrad:13 +#: flwr.server.strategy.fedadam.FedAdam:13 +#: flwr.server.strategy.fedavg.FedAvg:17 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:13 +#: flwr.server.strategy.fedavgm.FedAvgM:13 +#: flwr.server.strategy.fedopt.FedOpt:13 +#: flwr.server.strategy.fedprox.FedProx:49 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:13 +#: flwr.server.strategy.fedyogi.FedYogi:13 flwr.server.strategy.krum.Krum:13 of +msgid "Minimum number of total clients in the system. Defaults to 2." +msgstr "" -#: flwr.server.strategy.fedadagrad.FedAdagrad:1 -#: flwr.server.strategy.fedadam.FedAdam:1 -#: flwr.server.strategy.fedyogi.FedYogi:1 of -msgid "Bases: :py:class:`~flwr.server.strategy.fedopt.FedOpt`" +#: flwr.server.strategy.bulyan.Bulyan:15 flwr.server.strategy.krum.Krum:15 of +msgid "Number of malicious clients in the system. Defaults to 0." msgstr "" -#: flwr.server.strategy.fedadagrad.FedAdagrad:3 -#: flwr.server.strategy.fedadam.FedAdam:3 flwr.server.strategy.fedopt.FedOpt:3 -#: flwr.server.strategy.fedyogi.FedYogi:3 of -#, fuzzy -msgid "Implementation based on https://arxiv.org/abs/2003.00295v5" +#: flwr.server.strategy.bulyan.Bulyan:17 +#: flwr.server.strategy.fedadagrad.FedAdagrad:15 +#: flwr.server.strategy.fedadam.FedAdam:15 +#: flwr.server.strategy.fedavg.FedAvg:19 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:15 +#: flwr.server.strategy.fedavgm.FedAvgM:15 +#: flwr.server.strategy.fedopt.FedOpt:15 +#: flwr.server.strategy.fedprox.FedProx:51 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:15 +#: flwr.server.strategy.fedyogi.FedYogi:17 +#: flwr.server.strategy.fedyogi.FedYogi:18 +#: flwr.server.strategy.fedyogi.FedYogi:19 flwr.server.strategy.krum.Krum:20 of +msgid "Optional function used for validation. Defaults to None." msgstr "" -"FedYogi - Stratégie d'apprentissage fédéré utilisant Yogi côté serveur. " -"Mise en oeuvre basée sur https://arxiv.org/abs/2003.00295" -#: flwr.server.strategy.fedadagrad.FedAdagrad:21 -#: flwr.server.strategy.fedadagrad.FedAdagrad:23 -#: flwr.server.strategy.fedadam.FedAdam:25 -#: flwr.server.strategy.fedadam.FedAdam:27 -#: flwr.server.strategy.fedavg.FedAvg:29 flwr.server.strategy.fedavg.FedAvg:31 -#: flwr.server.strategy.fedopt.FedOpt:25 flwr.server.strategy.fedopt.FedOpt:27 -#: flwr.server.strategy.fedprox.FedProx:61 -#: flwr.server.strategy.fedprox.FedProx:63 -#: flwr.server.strategy.fedyogi.FedYogi:28 -#: flwr.server.strategy.fedyogi.FedYogi:30 of -msgid "Metrics aggregation function, optional." 
+#: flwr.server.strategy.bulyan.Bulyan:19 +#: flwr.server.strategy.fedadagrad.FedAdagrad:17 +#: flwr.server.strategy.fedadam.FedAdam:17 +#: flwr.server.strategy.fedavg.FedAvg:21 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:17 +#: flwr.server.strategy.fedavgm.FedAvgM:17 +#: flwr.server.strategy.fedopt.FedOpt:17 +#: flwr.server.strategy.fedprox.FedProx:53 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:17 +#: flwr.server.strategy.fedyogi.FedYogi:20 flwr.server.strategy.krum.Krum:22 of +msgid "Function used to configure training. Defaults to None." msgstr "" -#: flwr.server.strategy.fedadagrad.FedAdagrad:29 -#: flwr.server.strategy.fedadam.FedAdam:29 -#: flwr.server.strategy.fedopt.FedOpt:29 of -msgid "Server-side learning rate. Defaults to 1e-1." +#: flwr.server.strategy.bulyan.Bulyan:21 +#: flwr.server.strategy.fedadagrad.FedAdagrad:19 +#: flwr.server.strategy.fedadam.FedAdam:19 +#: flwr.server.strategy.fedavg.FedAvg:23 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:19 +#: flwr.server.strategy.fedavgm.FedAvgM:19 +#: flwr.server.strategy.fedopt.FedOpt:19 +#: flwr.server.strategy.fedprox.FedProx:55 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:19 +#: flwr.server.strategy.fedyogi.FedYogi:22 flwr.server.strategy.krum.Krum:24 of +msgid "Function used to configure validation. Defaults to None." msgstr "" -#: flwr.server.strategy.fedadagrad.FedAdagrad:31 -#: flwr.server.strategy.fedadam.FedAdam:31 -#: flwr.server.strategy.fedopt.FedOpt:31 of -msgid "Client-side learning rate. Defaults to 1e-1." +#: flwr.server.strategy.bulyan.Bulyan:23 +#: flwr.server.strategy.fedadagrad.FedAdagrad:25 +#: flwr.server.strategy.fedadam.FedAdam:21 +#: flwr.server.strategy.fedavg.FedAvg:25 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:21 +#: flwr.server.strategy.fedavgm.FedAvgM:21 +#: flwr.server.strategy.fedopt.FedOpt:21 +#: flwr.server.strategy.fedprox.FedProx:57 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:21 +#: flwr.server.strategy.fedyogi.FedYogi:24 flwr.server.strategy.krum.Krum:26 of +msgid "Whether or not accept rounds containing failures. Defaults to True." msgstr "" -#: flwr.server.strategy.fedadagrad.FedAdagrad:33 -#: flwr.server.strategy.fedadam.FedAdam:37 -#: flwr.server.strategy.fedopt.FedOpt:37 of -msgid "Controls the algorithm's degree of adaptability. Defaults to 1e-9." +#: flwr.server.strategy.bulyan.Bulyan:25 +#: flwr.server.strategy.fedadagrad.FedAdagrad:27 +#: flwr.server.strategy.fedadam.FedAdam:23 +#: flwr.server.strategy.fedavg.FedAvg:27 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:24 +#: flwr.server.strategy.fedavgm.FedAvgM:23 +#: flwr.server.strategy.fedopt.FedOpt:23 +#: flwr.server.strategy.fedprox.FedProx:59 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:23 +#: flwr.server.strategy.fedyogi.FedYogi:26 flwr.server.strategy.krum.Krum:28 of +#, fuzzy +msgid "Initial global model parameters." 
+msgstr "Initialise le modèle global" + +#: flwr.server.strategy.bulyan.Bulyan:27 of +msgid "" +"Byzantine resilient aggregation rule that is used as the first step of " +"the Bulyan (e.g., Krum)" +msgstr "" + +#: flwr.server.strategy.bulyan.Bulyan:29 of +msgid "arguments to the first_aggregation rule" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" msgstr "" +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#, fuzzy +msgid "Aggregate evaluation losses using weighted average." +msgstr "Résultats globaux de l'évaluation." + #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit `\\" -" \\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" +#: flwr.server.strategy.bulyan.Bulyan.aggregate_fit:1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#, fuzzy +msgid "Aggregate fit results using Bulyan." +msgstr "Résultats globaux de l'évaluation." + #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.configure_evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.configure_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.configure_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.configure_evaluate:1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.configure_evaluate:1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.configure_evaluate:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.configure_evaluate:1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.configure_evaluate:1 +#: 
flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.strategy.Strategy.configure_evaluate:1 of +#, fuzzy +msgid "Configure the next round of evaluation." +msgstr "Configuration de l'évaluation côté serveur" + #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit `\\" -" \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.configure_fit:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.configure_fit:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.configure_fit:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.configure_fit:1 +#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive.configure_fit:1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.configure_fit:1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.configure_fit:1 +#: flwr.server.strategy.fedprox.FedProx.configure_fit:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.configure_fit:1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.configure_fit:1 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.strategy.Strategy.configure_fit:1 of +msgid "Configure the next round of training." msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.evaluate:1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.evaluate:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.evaluate:1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg.evaluate:1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +msgid "Evaluate model parameters using an evaluation function." 
+msgstr "" + #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.initialize_parameters:1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.initialize_parameters:1 +#: flwr.server.strategy.fedavgm.FedAvgM.initialize_parameters:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#, fuzzy +msgid "Initialize global model parameters." +msgstr "Initialise le modèle global" + +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`num_evaluation_clients " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.num_evaluation_clients:1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.num_evaluation_clients:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.num_evaluation_clients:1 of +msgid "Use a fraction of available clients for evaluation." +msgstr "" + #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients `\\" +" \\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedAdam.rst:2 -#, fuzzy -msgid "FedAdam" -msgstr "FedAdagrad" +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.num_fit_clients:1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.num_fit_clients:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.num_fit_clients:1 of +msgid "Return the sample size and the required number of available clients." +msgstr "" -#: flwr.server.strategy.fedadam.FedAdam:33 -#: flwr.server.strategy.fedyogi.FedYogi:36 of -msgid "Momentum parameter. Defaults to 0.9." +#: ../../source/ref-api/flwr.server.strategy.DPFedAvgAdaptive.rst:2 +msgid "DPFedAvgAdaptive" +msgstr "DPFedAvgAdaptive" + +#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of +msgid "Bases: :py:class:`~flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed`" msgstr "" -#: flwr.server.strategy.fedadam.FedAdam:35 -#: flwr.server.strategy.fedyogi.FedYogi:38 of -msgid "Second moment parameter. 
Defaults to 0.99." +#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:3 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:3 of +msgid "This class is deprecated and will be removed in a future release." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\," -" results\\, ...\\)" +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of +#, fuzzy +msgid "Aggregate evaluation losses using the given strategy." +msgstr "Résultats globaux de l'évaluation." + +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive.aggregate_fit:1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of +msgid "Aggregate training results as in DPFedAvgFixed and update clip norms." +msgstr "" + +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\," -" parameters\\, ...\\)" +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:1 of +msgid "Configure the next round of evaluation using the specified strategy." 
+msgstr "" + +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.evaluate:1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.evaluate:1 of +msgid "Evaluate model parameters using an evaluation function from the strategy." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.initialize_parameters:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.initialize_parameters:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.initialize_parameters:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.initialize_parameters:1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.initialize_parameters:1 of +msgid "Initialize global model parameters using given strategy." 
msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedAvg.rst:2 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:3 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:6 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:3 +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:3 +#: flwr.server.strategy.strategy.Strategy.configure_evaluate:3 +#: flwr.server.strategy.strategy.Strategy.configure_fit:3 +#: flwr.server.strategy.strategy.Strategy.evaluate:6 of #, fuzzy -msgid "FedAvg" -msgstr "DP-FedAvg" +msgid "The current round of federated learning." +msgstr "Qu'est-ce que l'apprentissage fédéré ?" -#: flwr.server.strategy.fedavg.FedAvg:3 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:3 of -msgid "Implementation based on https://arxiv.org/abs/1602.05629" +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:7 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:10 +#: flwr.server.strategy.strategy.Strategy.configure_evaluate:7 +#: flwr.server.strategy.strategy.Strategy.configure_fit:7 +#: flwr.server.strategy.strategy.Strategy.initialize_parameters:3 of +msgid "The client manager which holds all currently connected clients." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg:5 flwr.server.strategy.fedprox.FedProx:37 -#: of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:10 +#: flwr.server.strategy.strategy.Strategy.configure_evaluate:10 of msgid "" -"Fraction of clients used during training. In case `min_fit_clients` is " -"larger than `fraction_fit * available_clients`, `min_fit_clients` will " -"still be sampled. Defaults to 1.0." +"**evaluate_configuration** -- A list of tuples. Each tuple in the list " +"identifies a `ClientProxy` and the `EvaluateIns` for this particular " +"`ClientProxy`. If a particular `ClientProxy` is not included in this " +"list, it means that this `ClientProxy` will not participate in the next " +"round of federated evaluation." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg:9 flwr.server.strategy.fedprox.FedProx:41 -#: of -msgid "" -"Fraction of clients used during validation. In case " -"`min_evaluate_clients` is larger than `fraction_evaluate * " -"available_clients`, `min_evaluate_clients` will still be sampled. " -"Defaults to 1.0." -msgstr "" +#: ../../source/ref-api/flwr.server.strategy.DPFedAvgFixed.rst:2 +msgid "DPFedAvgFixed" +msgstr "DPFedAvgFixed" -#: flwr.server.strategy.fedavg.FedAvg:33 of -msgid "Enable (True) or disable (False) in-place aggregation of model updates." 
+#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 +#: flwr.server.strategy.fedavg.FedAvg:1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:1 of +msgid "Bases: :py:class:`~flwr.server.strategy.strategy.Strategy`" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\, " -"results\\, ...\\)" +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_fit:1 of +msgid "Aggregate training results using unweighted aggregation." +msgstr "" + +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\, " -"parameters\\, ...\\)" +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:1 of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +"Configure the next round of training incorporating Differential Privacy " +"(DP)." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" +msgstr "" + +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of +msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:3 of msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +"Configuration of the next training round includes information related to " +"DP, such as clip norm and noise stddev." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:13 +#: flwr.server.strategy.strategy.Strategy.configure_fit:10 of msgid "" -":py:obj:`num_fit_clients `\\" -" \\(num\\_available\\_clients\\)" +"**fit_configuration** -- A list of tuples. 
Each tuple in the list " +"identifies a `ClientProxy` and the `FitIns` for this particular " +"`ClientProxy`. If a particular `ClientProxy` is not included in this " +"list, it means that this `ClientProxy` will not participate in the next " +"round of federated learning." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedAvgAndroid.rst:2 -#, fuzzy -msgid "FedAvgAndroid" -msgstr "DPFedAvgAdaptive" +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideAdaptiveClipping.rst:2 +msgid "DifferentialPrivacyClientSideAdaptiveClipping" +msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:3 #: of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +msgid "Use `adaptiveclipping_mod` modifier at the client side." msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:5 #: of msgid "" -":py:obj:`aggregate_fit " -"`\\ " -"\\(server\\_round\\, results\\, failures\\)" +"In comparison to `DifferentialPrivacyServerSideAdaptiveClipping`, which " +"performs clipping on the server-side, " +"`DifferentialPrivacyClientSideAdaptiveClipping` expects clipping to " +"happen on the client-side, usually by using the built-in " +"`adaptiveclipping_mod`." msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:10 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:3 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:10 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:3 #: of -msgid "" -":py:obj:`bytes_to_ndarray " -"`\\ \\(tensor\\)" +msgid "The strategy to which DP functionalities will be added by this wrapper." msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.bytes_to_ndarray:1 of -#, fuzzy -msgid "Deserialize NumPy array from bytes." -msgstr "Désérialise le tableau numérique NumPy à partir d'octets." +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:12 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:5 +#: of +msgid "The noise multiplier for the Gaussian mechanism for model updates." +msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:14 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:7 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:17 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:10 #: of -msgid "" -":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +msgid "The number of clients that are sampled on each round." 
msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:16 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:9 #: of msgid "" -":py:obj:`configure_fit " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"The initial value of clipping norm. Defaults to 0.1. Andrew et al. " +"recommends to set to 0.1." msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:19 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:12 #: of -msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +msgid "The desired quantile of updates which should be clipped. Defaults to 0.5." msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:21 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:14 #: of msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +"The learning rate for the clipping norm adaptation. Defaults to 0.2. " +"Andrew et al. recommends to set to 0.2." msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:24 #: of msgid "" -":py:obj:`ndarray_to_bytes " -"`\\ \\(ndarray\\)" +"The stddev of the noise added to the count of updates currently below the" +" estimate. Andrew et al. recommends to set to `expected_num_records/20`" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.ndarray_to_bytes:1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:30 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:23 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:22 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:15 +#: of #, fuzzy -msgid "Serialize NumPy array to bytes." -msgstr "Sérialise le tableau numérique NumPy en octets." 
+msgid "Create a strategy:" +msgstr "stratégie.du.serveur" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:34 #: of msgid "" -":py:obj:`ndarrays_to_parameters " -"`\\ " -"\\(ndarrays\\)" +"Wrap the strategy with the " +"`DifferentialPrivacyClientSideAdaptiveClipping` wrapper:" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:40 #: of -msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +msgid "On the client, add the `adaptiveclipping_mod` to the client-side mods:" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 #: of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 #: of msgid "" -":py:obj:`parameters_to_ndarrays " -"`\\ " -"\\(parameters\\)" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.parameters_to_ndarrays:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_fit:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_fit:1 #: of #, fuzzy -msgid "Convert parameters object to NumPy weights." -msgstr "Convertit l'objet des paramètres en ndarrays NumPy." - -#: ../../source/ref-api/flwr.server.strategy.FedAvgM.rst:2 -#, fuzzy -msgid "FedAvgM" -msgstr "DP-FedAvg" - -#: flwr.server.strategy.fedavgm.FedAvgM:3 of -msgid "Implementation based on https://arxiv.org/abs/1909.06335" -msgstr "" +msgid "Aggregate training results and update clip norms." +msgstr "Résultats globaux de l'évaluation." -#: flwr.server.strategy.fedavgm.FedAvgM:25 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -"Server-side learning rate used in server-side optimization. Defaults to " -"1.0." -msgstr "" - -#: flwr.server.strategy.fedavgm.FedAvgM:28 of -msgid "Server-side momentum factor used for FedAvgM. Defaults to 0.0." 
+":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\," -" results\\, ...\\)" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\," -" parameters\\, ...\\)" +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideFixedClipping.rst:2 +#, fuzzy +msgid "DifferentialPrivacyClientSideFixedClipping" +msgstr "Confidentialité différentielle" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:3 +#: of +msgid "Use `fixedclipping_mod` modifier at the client side." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:5 +#: of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +"In comparison to `DifferentialPrivacyServerSideFixedClipping`, which " +"performs clipping on the server-side, " +"`DifferentialPrivacyClientSideFixedClipping` expects clipping to happen " +"on the client-side, usually by using the built-in `fixedclipping_mod`." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:12 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:5 +#: of msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +"The noise multiplier for the Gaussian mechanism for model updates. A " +"value of 1.0 or higher is recommended for strong privacy." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:15 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:8 +#: of +msgid "The value of the clipping norm." 
msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:26 +#: of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +"Wrap the strategy with the `DifferentialPrivacyClientSideFixedClipping` " +"wrapper:" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedMedian.rst:2 -msgid "FedMedian" +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:32 +#: of +msgid "On the client, add the `fixedclipping_mod` to the client-side mods:" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedmedian.FedMedian.aggregate_fit:1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_fit:1 +#: of #, fuzzy -msgid "Aggregate fit results using median." -msgstr "Résultats globaux de l'évaluation." +msgid "Add noise to the aggregated parameters." 
+msgstr "Puis sérialise le résultat agrégé :" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +"`\\" +" \\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideAdaptiveClipping.rst:2 +msgid "DifferentialPrivacyServerSideAdaptiveClipping" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:17 +#: of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.FedOpt.rst:2 -msgid "FedOpt" -msgstr "" - -#: flwr.server.strategy.fedopt.FedOpt:33 of -msgid "Momentum parameter. Defaults to 0.0." +"The standard deviation of the noise added to the count of updates below " +"the estimate. Andrew et al. recommends to set to " +"`expected_num_records/20`" msgstr "" -#: flwr.server.strategy.fedopt.FedOpt:35 of -msgid "Second moment parameter. Defaults to 0.0." 
+#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:27 +#: of +msgid "" +"Wrap the strategy with the DifferentialPrivacyServerSideAdaptiveClipping " +"wrapper" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\, " -"results\\, ...\\)" +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\, " -"parameters\\, ...\\)" +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +"`\\" +" \\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideFixedClipping.rst:2 +#, fuzzy +msgid "DifferentialPrivacyServerSideFixedClipping" +msgstr "Confidentialité différentielle" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:19 +#: of msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +"Wrap the strategy with the DifferentialPrivacyServerSideFixedClipping " +"wrapper" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`num_fit_clients `\\" -" \\(num\\_available\\_clients\\)" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedProx.rst:2 -msgid "FedProx" +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:3 of -msgid "Implementation based on 
https://arxiv.org/abs/1812.06127" +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_fit:1 +#: of +msgid "Compute the updates, clip, and pass them for aggregation." msgstr "" -#: flwr.server.strategy.fedprox.FedProx:5 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" -"The strategy in itself will not be different than FedAvg, the client " -"needs to be adjusted. A proximal term needs to be added to the loss " -"function during the training:" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:9 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" -"\\\\frac{\\\\mu}{2} || w - w^t ||^2\n" -"\n" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:12 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" -"Where $w^t$ are the global parameters and $w$ are the local weights the " -"function will be optimized with." -msgstr "" - -#: flwr.server.strategy.fedprox.FedProx:15 of -msgid "In PyTorch, for example, the loss would go from:" -msgstr "" - -#: flwr.server.strategy.fedprox.FedProx:21 of -msgid "To:" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:30 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" -"With `global_params` being a copy of the parameters before the training " -"takes place." +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:65 of -msgid "" -"The weight of the proximal term used in the optimization. 0.0 makes this " -"strategy equivalent to FedAvg, and the higher the coefficient, the more " -"regularization will be used (that is, the client parameters will need to " -"be closer to the server parameters during training)." +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_fit:3 +#: of +msgid "Afterward, add noise to the aggregated parameters." 
msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: ../../source/ref-api/flwr.server.strategy.FaultTolerantFedAvg.rst:2 +#, fuzzy +msgid "FaultTolerantFedAvg" +msgstr "server.strategy.FaultTolerantFedAvg" + +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\," -" results\\, ...\\)" +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\," -" parameters\\, ...\\)" -msgstr "" - -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" -msgstr "" - -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" -msgstr "" +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_fit:1 +#: flwr.server.strategy.fedadagrad.FedAdagrad.aggregate_fit:1 +#: flwr.server.strategy.fedadam.FedAdam.aggregate_fit:1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_fit:1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_fit:1 +#: flwr.server.strategy.fedavgm.FedAvgM.aggregate_fit:1 +#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg.aggregate_fit:1 +#: flwr.server.strategy.fedyogi.FedYogi.aggregate_fit:1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_fit:1 of +msgid "Aggregate fit results using weighted average." 
+msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_fit " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" +msgstr "" + +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`num_fit_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx.configure_fit:3 of -msgid "Sends the proximal factor mu to the clients" +#: ../../source/ref-api/flwr.server.strategy.FedAdagrad.rst:2 +#: ../../source/ref-changelog.md:905 +msgid "FedAdagrad" +msgstr "FedAdagrad" + +#: flwr.server.strategy.fedadagrad.FedAdagrad:1 +#: flwr.server.strategy.fedadam.FedAdam:1 +#: flwr.server.strategy.fedyogi.FedYogi:1 of +msgid "Bases: :py:class:`~flwr.server.strategy.fedopt.FedOpt`" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedTrimmedAvg.rst:2 -msgid "FedTrimmedAvg" +#: flwr.server.strategy.fedadagrad.FedAdagrad:3 +#: flwr.server.strategy.fedadam.FedAdam:3 flwr.server.strategy.fedopt.FedOpt:3 +#: flwr.server.strategy.fedyogi.FedYogi:3 of +#, fuzzy +msgid "Implementation based on https://arxiv.org/abs/2003.00295v5" msgstr "" +"FedYogi - Stratégie d'apprentissage fédéré utilisant Yogi côté serveur. " +"Mise en oeuvre basée sur https://arxiv.org/abs/2003.00295" -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:3 of -msgid "Implemented based on: https://arxiv.org/abs/1803.01498" +#: flwr.server.strategy.fedadagrad.FedAdagrad:21 +#: flwr.server.strategy.fedadagrad.FedAdagrad:23 +#: flwr.server.strategy.fedadam.FedAdam:25 +#: flwr.server.strategy.fedadam.FedAdam:27 +#: flwr.server.strategy.fedavg.FedAvg:29 flwr.server.strategy.fedavg.FedAvg:31 +#: flwr.server.strategy.fedopt.FedOpt:25 flwr.server.strategy.fedopt.FedOpt:27 +#: flwr.server.strategy.fedprox.FedProx:61 +#: flwr.server.strategy.fedprox.FedProx:63 +#: flwr.server.strategy.fedyogi.FedYogi:28 +#: flwr.server.strategy.fedyogi.FedYogi:30 of +msgid "Metrics aggregation function, optional." msgstr "" -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:25 of -msgid "Fraction to cut off of both tails of the distribution. Defaults to 0.2." +#: flwr.server.strategy.fedadagrad.FedAdagrad:29 +#: flwr.server.strategy.fedadam.FedAdam:29 +#: flwr.server.strategy.fedopt.FedOpt:29 of +msgid "Server-side learning rate. Defaults to 1e-1." 
+msgstr "" + +#: flwr.server.strategy.fedadagrad.FedAdagrad:31 +#: flwr.server.strategy.fedadam.FedAdam:31 +#: flwr.server.strategy.fedopt.FedOpt:31 of +msgid "Client-side learning rate. Defaults to 1e-1." +msgstr "" + +#: flwr.server.strategy.fedadagrad.FedAdagrad:33 +#: flwr.server.strategy.fedadam.FedAdam:37 +#: flwr.server.strategy.fedopt.FedOpt:37 of +msgid "Controls the algorithm's degree of adaptability. Defaults to 1e-9." msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ " +"`\\ " "\\(server\\_round\\, results\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit " -"`\\ " -"\\(server\\_round\\, results\\, failures\\)" -msgstr "" - -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg.aggregate_fit:1 of -msgid "Aggregate fit results using trimmed average." +":py:obj:`aggregate_fit `\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit `\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_fit_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedXgbBagging.rst:2 -msgid "FedXgbBagging" +#: ../../source/ref-api/flwr.server.strategy.FedAdam.rst:2 +#, fuzzy +msgid "FedAdam" +msgstr "FedAdagrad" + +#: flwr.server.strategy.fedadam.FedAdam:33 +#: flwr.server.strategy.fedyogi.FedYogi:36 of +msgid "Momentum parameter. Defaults to 0.9." msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedadam.FedAdam:35 +#: flwr.server.strategy.fedyogi.FedYogi:38 of +msgid "Second moment parameter. Defaults to 0.99." +msgstr "" + +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of -#, fuzzy -msgid "Aggregate evaluation metrics using average." -msgstr "Résultats globaux de l'évaluation." 
- -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit " -"`\\ " +":py:obj:`aggregate_fit `\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_fit:1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_fit:1 of -msgid "Aggregate fit results using bagging." -msgstr "" - -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit " -"`\\ " +":py:obj:`configure_fit `\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_fit_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedXgbCyclic.rst:2 -msgid "FedXgbCyclic" -msgstr "" +#: ../../source/ref-api/flwr.server.strategy.FedAvg.rst:2 +#, fuzzy +msgid "FedAvg" +msgstr "DP-FedAvg" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +#: flwr.server.strategy.fedavg.FedAvg:3 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:3 of +msgid "Implementation based on https://arxiv.org/abs/1602.05629" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg:5 flwr.server.strategy.fedprox.FedProx:37 #: of msgid "" -":py:obj:`aggregate_fit " -"`\\ \\(server\\_round\\," -" results\\, failures\\)" +"Fraction of clients used during training. In case `min_fit_clients` is " +"larger than `fraction_fit * available_clients`, `min_fit_clients` will " +"still be sampled. Defaults to 1.0." msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg:9 flwr.server.strategy.fedprox.FedProx:41 #: of msgid "" -":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"Fraction of clients used during validation. 
In case " +"`min_evaluate_clients` is larger than `fraction_evaluate * " +"available_clients`, `min_evaluate_clients` will still be sampled. " +"Defaults to 1.0." msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_fit " -"`\\ \\(server\\_round\\," -" parameters\\, ...\\)" +#: flwr.server.strategy.fedavg.FedAvg:33 of +msgid "Enable (True) or disable (False) in-place aggregation of model updates." msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" +msgstr "" + +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" +msgstr "" + +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients `\\" +" \\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedXgbNnAvg.rst:2 +#: ../../source/ref-api/flwr.server.strategy.FedAvgAndroid.rst:2 #, fuzzy -msgid "FedXgbNnAvg" -msgstr "DP-FedAvg" - -#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:5 of -msgid "" -"This strategy is deprecated, but a copy of it is available in Flower " -"Baselines: " -"https://github.com/adap/flower/tree/main/baselines/hfedxgboost." 
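The FedAvg parameters described above (sampling fractions for training and validation, the minimum client counts that take precedence over them, and the in-place aggregation switch) map onto the constructor roughly as follows; values are illustrative:

    from flwr.server.strategy import FedAvg

    strategy = FedAvg(
        fraction_fit=0.5,          # sample 50% of available clients for training
        fraction_evaluate=0.25,    # sample 25% of available clients for validation
        min_fit_clients=10,        # but never fewer than 10 for training
        min_evaluate_clients=5,    # and never fewer than 5 for validation
        min_available_clients=20,  # wait until at least 20 clients are connected
        inplace=True,              # enable in-place aggregation of model updates
    )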
-msgstr "" +msgid "FedAvgAndroid" +msgstr "DPFedAvgAdaptive" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ " +"`\\ " "\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`aggregate_fit " -"`\\ \\(server\\_round\\, " -"results\\, failures\\)" +"`\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`bytes_to_ndarray " +"`\\ \\(tensor\\)" +msgstr "" + +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.bytes_to_ndarray:1 of +#, fuzzy +msgid "Deserialize NumPy array from bytes." +msgstr "Désérialise le tableau numérique NumPy à partir d'octets." + +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`configure_evaluate " -"`\\ " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`configure_fit " -"`\\ \\(server\\_round\\, " -"parameters\\, ...\\)" +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`ndarray_to_bytes " +"`\\ \\(ndarray\\)" +msgstr "" + +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.ndarray_to_bytes:1 of +#, fuzzy +msgid "Serialize NumPy array to bytes." +msgstr "Sérialise le tableau numérique NumPy en octets." + +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`ndarrays_to_parameters " +"`\\ " +"\\(ndarrays\\)" +msgstr "" + +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`num_fit_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedYogi.rst:2 -msgid "FedYogi" +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`parameters_to_ndarrays " +"`\\ " +"\\(parameters\\)" msgstr "" -#: flwr.server.strategy.fedyogi.FedYogi:32 of -msgid "Server-side learning rate. 
Defaults to 1e-2." +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.parameters_to_ndarrays:1 +#: of +#, fuzzy +msgid "Convert parameters object to NumPy weights." +msgstr "Convertit l'objet des paramètres en ndarrays NumPy." + +#: ../../source/ref-api/flwr.server.strategy.FedAvgM.rst:2 +#, fuzzy +msgid "FedAvgM" +msgstr "DP-FedAvg" + +#: flwr.server.strategy.fedavgm.FedAvgM:3 of +msgid "Implementation based on https://arxiv.org/abs/1909.06335" msgstr "" -#: flwr.server.strategy.fedyogi.FedYogi:34 of -msgid "Client-side learning rate. Defaults to 0.0316." +#: flwr.server.strategy.fedavgm.FedAvgM:25 of +msgid "" +"Server-side learning rate used in server-side optimization. Defaults to " +"1.0." msgstr "" -#: flwr.server.strategy.fedyogi.FedYogi:40 of -msgid "Controls the algorithm's degree of adaptability. Defaults to 1e-3." +#: flwr.server.strategy.fedavgm.FedAvgM:28 of +msgid "Server-side momentum factor used for FedAvgM. Defaults to 0.0." msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\," +"`\\ \\(server\\_round\\," " results\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit `\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\," +"`\\ \\(server\\_round\\," " parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit `\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_fit_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.Krum.rst:2 -msgid "Krum" -msgstr "" - -#: flwr.server.strategy.krum.Krum:3 of -msgid "Implementation based on https://arxiv.org/abs/1703.02757" -msgstr "" - -#: flwr.server.strategy.krum.Krum:17 of -msgid "" -"Number of clients to keep before averaging (MultiKrum). Defaults to 0, in" -" that case classical Krum is applied." 
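The Krum entry above notes that `num_clients_to_keep` selects between classical Krum (0) and MultiKrum (greater than 0). A sketch, assuming the constructor also exposes the number of tolerated malicious clients under the name `num_malicious_clients`:

    from flwr.server.strategy import Krum

    # Classical Krum: a single best-scored update is selected each round
    krum = Krum(num_malicious_clients=2, num_clients_to_keep=0)

    # MultiKrum: keep the 5 best-scored updates and average them
    multikrum = Krum(num_malicious_clients=2, num_clients_to_keep=5)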
+#: ../../source/ref-api/flwr.server.strategy.FedMedian.rst:2 +msgid "FedMedian" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\, " -"results\\, ...\\)" +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit `\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.krum.Krum.aggregate_fit:1 of +#: flwr.server.strategy.fedmedian.FedMedian.aggregate_fit:1 of #, fuzzy -msgid "Aggregate fit results using Krum." +msgid "Aggregate fit results using median." msgstr "Résultats globaux de l'évaluation." #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\, " -"parameters\\, ...\\)" +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit `\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`num_fit_clients `\\ " +":py:obj:`num_fit_clients " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.QFedAvg.rst:2 -#, fuzzy -msgid "QFedAvg" -msgstr "DP-FedAvg" +#: ../../source/ref-api/flwr.server.strategy.FedOpt.rst:2 +msgid "FedOpt" +msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedopt.FedOpt:33 of +msgid "Momentum parameter. Defaults to 0.0." +msgstr "" + +#: flwr.server.strategy.fedopt.FedOpt:35 of +msgid "Second moment parameter. Defaults to 0.0." 
+msgstr "" + +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\," -" results\\, ...\\)" +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit `\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\," -" parameters\\, ...\\)" +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit `\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients `\\" +" \\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.Strategy.rst:2 -#, fuzzy -msgid "Strategy" -msgstr "stratégie.du.serveur" +#: ../../source/ref-api/flwr.server.strategy.FedProx.rst:2 +msgid "FedProx" +msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedprox.FedProx:3 of +msgid "Implementation based on https://arxiv.org/abs/1812.06127" +msgstr "" + +#: flwr.server.strategy.fedprox.FedProx:5 of msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +"The strategy in itself will not be different than FedAvg, the client " +"needs to be adjusted. A proximal term needs to be added to the loss " +"function during the training:" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1 -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of -#, fuzzy -msgid "Aggregate evaluation results." -msgstr "Résultats globaux de l'évaluation." +#: flwr.server.strategy.fedprox.FedProx:9 of +msgid "" +"\\\\frac{\\\\mu}{2} || w - w^t ||^2\n" +"\n" +msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedprox.FedProx:12 of msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +"Where $w^t$ are the global parameters and $w$ are the local weights the " +"function will be optimized with." 
msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:1 of -#, fuzzy -msgid "Aggregate training results." -msgstr "Résultats globaux de l'évaluation." +#: flwr.server.strategy.fedprox.FedProx:15 of +msgid "In PyTorch, for example, the loss would go from:" +msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedprox.FedProx:21 of +msgid "To:" +msgstr "" + +#: flwr.server.strategy.fedprox.FedProx:30 of +msgid "" +"With `global_params` being a copy of the parameters before the training " +"takes place." +msgstr "" + +#: flwr.server.strategy.fedprox.FedProx:65 of +msgid "" +"The weight of the proximal term used in the optimization. 0.0 makes this " +"strategy equivalent to FedAvg, and the higher the coefficient, the more " +"regularization will be used (that is, the client parameters will need to " +"be closer to the server parameters during training)." +msgstr "" + +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" +msgstr "" + +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit `\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: flwr.server.strategy.strategy.Strategy.evaluate:1 of -#, fuzzy -msgid "Evaluate the current model parameters." -msgstr "``get_parameters`` : renvoie les paramètres du modèle local actuel" - -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: flwr.server.strategy.strategy.Strategy.initialize_parameters:1 of -#, fuzzy -msgid "Initialize the (global) model parameters." -msgstr "Initialise le modèle global" +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" +msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:5 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Successful updates from the previously selected and configured clients. " -"Each pair of `(ClientProxy, FitRes` constitutes a successful update from " -"one of the previously selected clients. Not that not all previously " -"selected clients are necessarily included in this list: a client might " -"drop out and not submit a result. 
For each client that did not submit an " -"update, there should be an `Exception` in `failures`." +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:13 -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:13 of -msgid "Exceptions that occurred while the server was waiting for client updates." +#: flwr.server.strategy.fedprox.FedProx.configure_fit:3 of +msgid "Sends the proximal factor mu to the clients" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:16 of -msgid "" -"**aggregation_result** -- The aggregated evaluation result. Aggregation " -"typically uses some variant of a weighted average." +#: ../../source/ref-api/flwr.server.strategy.FedTrimmedAvg.rst:2 +msgid "FedTrimmedAvg" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:5 of -msgid "" -"Successful updates from the previously selected and configured clients. " -"Each pair of `(ClientProxy, FitRes)` constitutes a successful update from" -" one of the previously selected clients. Not that not all previously " -"selected clients are necessarily included in this list: a client might " -"drop out and not submit a result. For each client that did not submit an " -"update, there should be an `Exception` in `failures`." +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:3 of +msgid "Implemented based on: https://arxiv.org/abs/1803.01498" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:17 of -msgid "" -"**parameters** -- If parameters are returned, then the server will treat " -"these as the new global model parameters (i.e., it will replace the " -"previous parameters with the ones returned from this method). If `None` " -"is returned (e.g., because there were only failures and no viable " -"results) then the server will no update the previous model parameters, " -"the updates received in this round are discarded, and the global model " -"parameters remain the same." +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:25 of +msgid "Fraction to cut off of both tails of the distribution. Defaults to 0.2." msgstr "" -#: flwr.server.strategy.strategy.Strategy.evaluate:3 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"This function can be used to perform centralized (i.e., server-side) " -"evaluation of model parameters." +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.evaluate:11 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"**evaluation_result** -- The evaluation result, usually a Tuple " -"containing loss and a dictionary containing task-specific metrics (e.g., " -"accuracy)." +":py:obj:`aggregate_fit " +"`\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.initialize_parameters:6 of -msgid "" -"**parameters** -- If parameters are returned, then the server will treat " -"these as the initial global model parameters." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg.aggregate_fit:1 of +msgid "Aggregate fit results using trimmed average." 
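The `aggregate_fit` contract spelled out above (successful `(ClientProxy, FitRes)` pairs in `results`, exceptions in `failures`, and a `None` return meaning the previous global parameters are kept) can be made concrete with a small wrapper; a sketch that delegates the actual weighted averaging to FedAvg:

    from logging import WARNING
    from typing import Dict, List, Optional, Tuple, Union

    from flwr.common import FitRes, Parameters, Scalar
    from flwr.common.logger import log
    from flwr.server.client_proxy import ClientProxy
    from flwr.server.strategy import FedAvg


    class LoggingFedAvg(FedAvg):
        """Hypothetical strategy that logs dropped clients before aggregating."""

        def aggregate_fit(
            self,
            server_round: int,
            results: List[Tuple[ClientProxy, FitRes]],
            failures: List[Union[Tuple[ClientProxy, FitRes], BaseException]],
        ) -> Tuple[Optional[Parameters], Dict[str, Scalar]]:
            if failures:
                # Clients that dropped out or raised an exception end up here
                log(WARNING, "Round %s: %s clients failed", server_round, len(failures))
            if not results:
                # Returning None keeps the previous global model parameters unchanged
                return None, {}
            return super().aggregate_fit(server_round, results, failures)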
msgstr "" -#: ../../source/ref-api/flwr.server.workflow.rst:2 -#, fuzzy -msgid "workflow" -msgstr "Flux de travail" +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" +msgstr "" -#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`DefaultWorkflow `\\ " -"\\(\\[fit\\_workflow\\, ...\\]\\)" +":py:obj:`configure_fit " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: ../../source/ref-api/flwr.server.workflow.rst:24::1 -#: flwr.server.workflow.default_workflows.DefaultWorkflow:1 of -msgid "Default workflow in Flower." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`SecAggPlusWorkflow `\\ " -"\\(num\\_shares\\, ...\\[\\, ...\\]\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: ../../source/ref-api/flwr.server.workflow.rst:24::1 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:1 -#: of -msgid "The workflow for the SecAgg+ protocol." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`SecAggWorkflow `\\ " -"\\(reconstruction\\_threshold\\, \\*\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.workflow.rst:24::1 -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:1 of -msgid "The workflow for the SecAgg protocol." +#: ../../source/ref-api/flwr.server.strategy.FedXgbBagging.rst:2 +msgid "FedXgbBagging" msgstr "" -#: ../../source/ref-api/flwr.server.workflow.DefaultWorkflow.rst:2 -#, fuzzy -msgid "DefaultWorkflow" -msgstr "Flux de travail" - -#: ../../source/ref-api/flwr.server.workflow.SecAggPlusWorkflow.rst:2 -#, fuzzy -msgid "SecAggPlusWorkflow" -msgstr "Flux de travail" - -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:3 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 #: of msgid "" -"The SecAgg+ protocol ensures the secure summation of integer vectors " -"owned by multiple parties, without accessing any individual integer " -"vector. This workflow allows the server to compute the weighted average " -"of model parameters across all clients, ensuring individual contributions" -" remain private. This is achieved by clients sending both, a weighting " -"factor and a weighted version of the locally updated parameters, both of " -"which are masked for privacy. Specifically, each client uploads \"[w, w *" -" params]\" with masks, where weighting factor 'w' is the number of " -"examples ('num_examples') and 'params' represents the model parameters " -"('parameters') from the client's `FitRes`. The server then aggregates " -"these contributions to compute the weighted average of model parameters." 
+":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:14 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 #: of -msgid "" -"The protocol involves four main stages: - 'setup': Send SecAgg+ " -"configuration to clients and collect their public keys. - 'share keys': " -"Broadcast public keys among clients and collect encrypted secret" -msgstr "" +#, fuzzy +msgid "Aggregate evaluation metrics using average." +msgstr "Résultats globaux de l'évaluation." -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:17 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:17 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 #: of -msgid "key shares." +msgid "" +":py:obj:`aggregate_fit " +"`\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:18 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:18 -#: of -msgid "" -"'collect masked vectors': Forward encrypted secret key shares to target " -"clients and collect masked model parameters." +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_fit:1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_fit:1 of +msgid "Aggregate fit results using bagging." msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:20 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:20 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 #: of msgid "" -"'unmask': Collect secret key shares to decrypt and aggregate the model " -"parameters." +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:22 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:22 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 #: of msgid "" -"Only the aggregated model parameters are exposed and passed to " -"`Strategy.aggregate_fit`, ensuring individual data privacy." +":py:obj:`configure_fit " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:25 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 #: of msgid "" -"The number of shares into which each client's private key is split under " -"the SecAgg+ protocol. If specified as a float, it represents the " -"proportion of all selected clients, and the number of shares will be set " -"dynamically in the run time. A private key can be reconstructed from " -"these shares, allowing for the secure aggregation of model updates. Each " -"client sends one share to each of its neighbors while retaining one." 
+":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:25 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:32 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 #: of msgid "" -"The minimum number of shares required to reconstruct a client's private " -"key, or, if specified as a float, it represents the proportion of the " -"total number of shares needed for reconstruction. This threshold ensures " -"privacy by allowing for the recovery of contributions from dropped " -"clients during aggregation, without compromising individual client data." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:31 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:38 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 #: of msgid "" -"The maximum value of the weight that can be assigned to any single " -"client's update during the weighted average calculation on the server " -"side, e.g., in the FedAvg algorithm." +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:35 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:42 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 #: of msgid "" -"The range within which model parameters are clipped before quantization. " -"This parameter ensures each model parameter is bounded within " -"[-clipping_range, clipping_range], facilitating quantization." +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:39 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:46 -#: of -msgid "" -"The size of the range into which floating-point model parameters are " -"quantized, mapping each parameter to an integer in [0, " -"quantization_range-1]. This facilitates cryptographic operations on the " -"model updates." +#: ../../source/ref-api/flwr.server.strategy.FedXgbCyclic.rst:2 +msgid "FedXgbCyclic" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:43 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:50 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 #: of msgid "" -"The range of values from which random mask entries are uniformly sampled " -"([0, modulus_range-1]). `modulus_range` must be less than 4294967296. " -"Please use 2**n values for `modulus_range` to prevent overflow issues." +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:47 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:54 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 #: of msgid "" -"The timeout duration in seconds. If specified, the workflow will wait for" -" replies for this duration each time. If `None`, there is no time limit " -"and the workflow will wait until replies for all messages are received." 
+":py:obj:`aggregate_fit " +"`\\ \\(server\\_round\\," +" results\\, failures\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:61 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 #: of msgid "" -"Generally, higher `num_shares` means more robust to dropouts while " -"increasing the computational costs; higher `reconstruction_threshold` " -"means better privacy guarantees but less tolerance to dropouts." +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:58 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:64 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 #: of -msgid "Too large `max_weight` may compromise the precision of the quantization." +msgid "" +":py:obj:`configure_fit " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:59 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:65 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 #: of -msgid "`modulus_range` must be 2**n and larger than `quantization_range`." +msgid "" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:66 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 #: of msgid "" -"When `num_shares` is a float, it is interpreted as the proportion of all " -"selected clients, and hence the number of shares will be determined in " -"the runtime. This allows for dynamic adjustment based on the total number" -" of participating clients." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:69 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 #: of msgid "" -"Similarly, when `reconstruction_threshold` is a float, it is interpreted " -"as the proportion of the number of shares needed for the reconstruction " -"of a private key. This feature enables flexibility in setting the " -"security threshold relative to the number of distributed shares." +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:73 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 #: of msgid "" -"`num_shares`, `reconstruction_threshold`, and the quantization parameters" -" (`clipping_range`, `quantization_range`, `modulus_range`) play critical " -"roles in balancing privacy, robustness, and efficiency within the SecAgg+" -" protocol." +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.FedXgbNnAvg.rst:2 +#, fuzzy +msgid "FedXgbNnAvg" +msgstr "DP-FedAvg" + +#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:5 of msgid "" -":py:obj:`collect_masked_vectors_stage " -"`\\" -" \\(driver\\, ...\\)" +"This strategy is deprecated, but a copy of it is available in Flower " +"Baselines: " +"https://github.com/adap/flower/tree/main/baselines/hfedxgboost." 
msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of -msgid "Execute the 'collect masked vectors' stage." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`setup_stage " -"`\\ \\(driver\\, " -"context\\, state\\)" +":py:obj:`aggregate_fit " +"`\\ \\(server\\_round\\, " +"results\\, failures\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.setup_stage:1 -#: of -msgid "Execute the 'setup' stage." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`share_keys_stage " -"`\\ " -"\\(driver\\, context\\, state\\)" +":py:obj:`configure_fit " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.share_keys_stage:1 -#: of -msgid "Execute the 'share keys' stage." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`unmask_stage " -"`\\ \\(driver\\, " -"context\\, state\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.unmask_stage:1 -#: of -msgid "Execute the 'unmask' stage." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.workflow.SecAggWorkflow.rst:2 -#, fuzzy -msgid "SecAggWorkflow" -msgstr "Flux de travail" - -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:1 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Bases: " -":py:class:`~flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow`" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:3 of -msgid "" -"The SecAgg protocol ensures the secure summation of integer vectors owned" -" by multiple parties, without accessing any individual integer vector. 
" -"This workflow allows the server to compute the weighted average of model " -"parameters across all clients, ensuring individual contributions remain " -"private. This is achieved by clients sending both, a weighting factor and" -" a weighted version of the locally updated parameters, both of which are " -"masked for privacy. Specifically, each client uploads \"[w, w * params]\"" -" with masks, where weighting factor 'w' is the number of examples " -"('num_examples') and 'params' represents the model parameters " -"('parameters') from the client's `FitRes`. The server then aggregates " -"these contributions to compute the weighted average of model parameters." +#: ../../source/ref-api/flwr.server.strategy.FedYogi.rst:2 +msgid "FedYogi" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:14 of -msgid "" -"The protocol involves four main stages: - 'setup': Send SecAgg " -"configuration to clients and collect their public keys. - 'share keys': " -"Broadcast public keys among clients and collect encrypted secret" +#: flwr.server.strategy.fedyogi.FedYogi:32 of +msgid "Server-side learning rate. Defaults to 1e-2." msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:54 of -msgid "" -"Each client's private key is split into N shares under the SecAgg " -"protocol, where N is the number of selected clients." +#: flwr.server.strategy.fedyogi.FedYogi:34 of +msgid "Client-side learning rate. Defaults to 0.0316." msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:56 of -msgid "" -"Generally, higher `reconstruction_threshold` means better privacy " -"guarantees but less tolerance to dropouts." +#: flwr.server.strategy.fedyogi.FedYogi:40 of +msgid "Controls the algorithm's degree of adaptability. Defaults to 1e-3." msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:60 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"When `reconstruction_threshold` is a float, it is interpreted as the " -"proportion of the number of all selected clients needed for the " -"reconstruction of a private key. This feature enables flexibility in " -"setting the security threshold relative to the number of selected " -"clients." +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:64 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"`reconstruction_threshold`, and the quantization parameters " -"(`clipping_range`, `quantization_range`, `modulus_range`) play critical " -"roles in balancing privacy, robustness, and efficiency within the SecAgg " -"protocol." 
+":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`collect_masked_vectors_stage " -"`\\ " -"\\(driver\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`setup_stage `\\" -" \\(driver\\, context\\, state\\)" +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`share_keys_stage " -"`\\ \\(driver\\, " -"context\\, state\\)" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`unmask_stage " -"`\\ \\(driver\\, " -"context\\, state\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:2 -#, fuzzy -msgid "simulation" -msgstr "Simulation de moniteur" +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" +msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:19::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`start_simulation `\\ \\(\\*\\," -" client\\_fn\\[\\, ...\\]\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:19::1 -#: flwr.simulation.app.start_simulation:1 of -#, fuzzy -msgid "Start a Ray-based Flower simulation server." -msgstr "Simulation de moniteur" +#: ../../source/ref-api/flwr.server.strategy.Krum.rst:2 +msgid "Krum" +msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:19::1 -msgid "" -":py:obj:`run_simulation_from_cli " -"`\\ \\(\\)" +#: flwr.server.strategy.krum.Krum:3 of +msgid "Implementation based on https://arxiv.org/abs/1703.02757" msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:19::1 -#: flwr.simulation.run_simulation.run_simulation_from_cli:1 of -msgid "Run Simulation Engine from the CLI." +#: flwr.server.strategy.krum.Krum:17 of +msgid "" +"Number of clients to keep before averaging (MultiKrum). Defaults to 0, in" +" that case classical Krum is applied." msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:19::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`run_simulation `\\ " -"\\(server\\_app\\, client\\_app\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:19::1 -#: flwr.simulation.run_simulation.run_simulation:1 of -msgid "Run a Flower App using the Simulation Engine." 
+#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.run_simulation.rst:2 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.krum.Krum.aggregate_fit:1 of #, fuzzy -msgid "run\\_simulation" -msgstr "Simulation de moniteur" +msgid "Aggregate fit results using Krum." +msgstr "Résultats globaux de l'évaluation." -#: flwr.simulation.run_simulation.run_simulation:3 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The `ServerApp` to be executed. It will send messages to different " -"`ClientApp` instances running on different (virtual) SuperNodes." +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" -#: flwr.simulation.run_simulation.run_simulation:6 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The `ClientApp` to be executed by each of the SuperNodes. It will receive" -" messages sent by the `ServerApp`." +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.simulation.run_simulation.run_simulation:9 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Number of nodes that run a ClientApp. They can be sampled by a Driver in " -"the ServerApp and receive a Message describing what the ClientApp should " -"perform." -msgstr "" - -#: flwr.simulation.run_simulation.run_simulation:13 of -msgid "A simulation backend that runs `ClientApp`s." +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.simulation.run_simulation.run_simulation:15 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"'A dictionary, e.g {\"\": , \"\": } to " -"configure a backend. Values supported in are those included by " -"`flwr.common.typing.ConfigsRecordValues`." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: flwr.simulation.run_simulation.run_simulation:19 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"A boolean to indicate whether to enable GPU growth on the main thread. " -"This is desirable if you make use of a TensorFlow model on your " -"`ServerApp` while having your `ClientApp` running on the same GPU. " -"Without enabling this, you might encounter an out-of-memory error because" -" TensorFlow, by default, allocates all GPU memory. Read more about how " -"`tf.config.experimental.set_memory_growth()` works in the TensorFlow " -"documentation: https://www.tensorflow.org/api/stable." +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.simulation.run_simulation.run_simulation:26 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"When diabled, only INFO, WARNING and ERROR log messages will be shown. If" -" enabled, DEBUG-level logs will be displayed." 
+":py:obj:`num_fit_clients `\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.run_simulation_from_cli.rst:2 -#, fuzzy -msgid "run\\_simulation\\_from\\_cli" -msgstr "Simulation de moniteur" - -#: ../../source/ref-api/flwr.simulation.start_simulation.rst:2 +#: ../../source/ref-api/flwr.server.strategy.QFedAvg.rst:2 #, fuzzy -msgid "start\\_simulation" -msgstr "démarrer_simulation" +msgid "QFedAvg" +msgstr "DP-FedAvg" -#: flwr.simulation.app.start_simulation:3 of +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of msgid "" -"A function creating client instances. The function must take a single " -"`str` argument called `cid`. It should return a single client instance of" -" type Client. Note that the created client instances are ephemeral and " -"will often be destroyed after a single method invocation. Since client " -"instances are not long-lived, they should not attempt to carry state over" -" method invocations. Any state required by the instance (model, dataset, " -"hyperparameters, ...) should be (re-)created in either the call to " -"`client_fn` or the call to any of the client methods (e.g., load " -"evaluation data in the `evaluate` method itself)." +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:13 of +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of msgid "" -"The total number of clients in this simulation. This must be set if " -"`clients_ids` is not set and vice-versa." +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.simulation.app.start_simulation:16 of +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of msgid "" -"List `client_id`s for each client. This is only required if `num_clients`" -" is not set. Setting both `num_clients` and `clients_ids` with " -"`len(clients_ids)` not equal to `num_clients` generates an error." +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:20 of +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of msgid "" -"CPU and GPU resources for a single client. Supported keys are `num_cpus` " -"and `num_gpus`. To understand the GPU utilization caused by `num_gpus`, " -"as well as using custom resources, please consult the Ray documentation." +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:25 of +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of msgid "" -"An implementation of the abstract base class `flwr.server.Server`. If no " -"instance is provided, then `start_server` will create one." +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.simulation.app.start_simulation:31 of +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of msgid "" -"An implementation of the abstract base class `flwr.server.Strategy`. If " -"no strategy is provided, then `start_server` will use " -"`flwr.server.strategy.FedAvg`." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: flwr.simulation.app.start_simulation:35 of +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of msgid "" -"An implementation of the abstract base class `flwr.server.ClientManager`." 
-" If no implementation is provided, then `start_simulation` will use " -"`flwr.server.client_manager.SimpleClientManager`." +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.simulation.app.start_simulation:39 of +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of msgid "" -"Optional dictionary containing arguments for the call to `ray.init`. If " -"ray_init_args is None (the default), Ray will be initialized with the " -"following default args: { \"ignore_reinit_error\": True, " -"\"include_dashboard\": False } An empty dictionary can be used " -"(ray_init_args={}) to prevent any arguments from being passed to " -"ray.init." +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.simulation.app.start_simulation:39 of +#: ../../source/ref-api/flwr.server.strategy.Strategy.rst:2 +#, fuzzy +msgid "Strategy" +msgstr "stratégie.du.serveur" + +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"Optional dictionary containing arguments for the call to `ray.init`. If " -"ray_init_args is None (the default), Ray will be initialized with the " -"following default args:" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:43 of -msgid "{ \"ignore_reinit_error\": True, \"include_dashboard\": False }" -msgstr "" +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of +#, fuzzy +msgid "Aggregate evaluation results." +msgstr "Résultats globaux de l'évaluation." -#: flwr.simulation.app.start_simulation:45 of +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"An empty dictionary can be used (ray_init_args={}) to prevent any " -"arguments from being passed to ray.init." +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.simulation.app.start_simulation:48 of -msgid "" -"Set to True to prevent `ray.shutdown()` in case " -"`ray.is_initialized()=True`." -msgstr "" +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:1 of +#, fuzzy +msgid "Aggregate training results." +msgstr "Résultats globaux de l'évaluation." -#: flwr.simulation.app.start_simulation:50 of +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"Optionally specify the type of actor to use. The actor object, which " -"persists throughout the simulation, will be the process in charge of " -"executing a ClientApp wrapping input argument `client_fn`." +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:54 of +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"If you want to create your own Actor classes, you might need to pass some" -" input argument. You can use this dictionary for such purpose." +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:57 of +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"(default: \"DEFAULT\") Optional string (\"DEFAULT\" or \"SPREAD\") for " -"the VCE to choose in which node the actor is placed. 
If you are an " -"advanced user needed more control you can use lower-level scheduling " -"strategies to pin actors to specific compute nodes (e.g. via " -"NodeAffinitySchedulingStrategy). Please note this is an advanced feature." -" For all details, please refer to the Ray documentation: " -"https://docs.ray.io/en/latest/ray-core/scheduling/index.html" -msgstr "" - -#: flwr.simulation.app.start_simulation:66 of -msgid "**hist** -- Object containing metrics from training." +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: ../../source/ref-changelog.md:1 -msgid "Changelog" -msgstr "Changelog" - -#: ../../source/ref-changelog.md:3 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.strategy.Strategy.evaluate:1 of #, fuzzy -msgid "Unreleased" -msgstr "Inédit" - -#: ../../source/ref-changelog.md:5 ../../source/ref-changelog.md:17 -#: ../../source/ref-changelog.md:110 ../../source/ref-changelog.md:210 -#: ../../source/ref-changelog.md:294 ../../source/ref-changelog.md:358 -#: ../../source/ref-changelog.md:416 ../../source/ref-changelog.md:485 -#: ../../source/ref-changelog.md:614 ../../source/ref-changelog.md:656 -#: ../../source/ref-changelog.md:723 ../../source/ref-changelog.md:789 -#: ../../source/ref-changelog.md:834 ../../source/ref-changelog.md:873 -#: ../../source/ref-changelog.md:906 ../../source/ref-changelog.md:956 -msgid "What's new?" -msgstr "Quoi de neuf ?" +msgid "Evaluate the current model parameters." +msgstr "``get_parameters`` : renvoie les paramètres du modèle local actuel" -#: ../../source/ref-changelog.md:7 ../../source/ref-changelog.md:80 -#: ../../source/ref-changelog.md:192 ../../source/ref-changelog.md:282 -#: ../../source/ref-changelog.md:346 ../../source/ref-changelog.md:404 -#: ../../source/ref-changelog.md:473 ../../source/ref-changelog.md:535 -#: ../../source/ref-changelog.md:554 ../../source/ref-changelog.md:710 -#: ../../source/ref-changelog.md:781 ../../source/ref-changelog.md:818 -#: ../../source/ref-changelog.md:861 -msgid "Incompatible changes" -msgstr "Changements incompatibles" +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" +msgstr "" -#: ../../source/ref-changelog.md:9 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.strategy.Strategy.initialize_parameters:1 of #, fuzzy -msgid "v1.7.0 (2024-02-05)" -msgstr "v1.3.0 (2023-02-06)" - -#: ../../source/ref-changelog.md:11 ../../source/ref-changelog.md:104 -#: ../../source/ref-changelog.md:204 ../../source/ref-changelog.md:288 -#: ../../source/ref-changelog.md:352 ../../source/ref-changelog.md:410 -#: ../../source/ref-changelog.md:479 ../../source/ref-changelog.md:548 -msgid "Thanks to our contributors" -msgstr "Merci à nos contributeurs" +msgid "Initialize the (global) model parameters." +msgstr "Initialise le modèle global" -#: ../../source/ref-changelog.md:13 ../../source/ref-changelog.md:106 -#: ../../source/ref-changelog.md:206 ../../source/ref-changelog.md:290 -#: ../../source/ref-changelog.md:354 ../../source/ref-changelog.md:412 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:5 of msgid "" -"We would like to give our special thanks to all the contributors who made" -" the new version of Flower possible (in `git shortlog` order):" +"Successful updates from the previously selected and configured clients. 
" +"Each pair of `(ClientProxy, FitRes` constitutes a successful update from " +"one of the previously selected clients. Not that not all previously " +"selected clients are necessarily included in this list: a client might " +"drop out and not submit a result. For each client that did not submit an " +"update, there should be an `Exception` in `failures`." msgstr "" -"Nous tenons à remercier tout particulièrement tous les contributeurs qui " -"ont rendu possible la nouvelle version de Flower (dans l'ordre `git " -"shortlog`) :" -#: ../../source/ref-changelog.md:15 -msgid "" -"`Aasheesh Singh`, `Adam Narozniak`, `Aml Hassan Esmil`, `Charles " -"Beauville`, `Daniel J. Beutel`, `Daniel Nata Nugraha`, `Edoardo " -"Gabrielli`, `Gustavo Bertoli`, `HelinLin`, `Heng Pan`, `Javier`, `M S " -"Chaitanya Kumar`, `Mohammad Naseri`, `Nikos Vlachakis`, `Pritam Neog`, " -"`Robert Kuska`, `Robert Steiner`, `Taner Topal`, `Yahia Salaheldin " -"Shaaban`, `Yan Gao`, `Yasar Abbas` " +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:13 +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:13 of +msgid "Exceptions that occurred while the server was waiting for client updates." msgstr "" -#: ../../source/ref-changelog.md:19 -#, fuzzy +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:16 of msgid "" -"**Introduce stateful clients (experimental)** " -"([#2770](https://github.com/adap/flower/pull/2770), " -"[#2686](https://github.com/adap/flower/pull/2686), " -"[#2696](https://github.com/adap/flower/pull/2696), " -"[#2643](https://github.com/adap/flower/pull/2643), " -"[#2769](https://github.com/adap/flower/pull/2769))" +"**aggregation_result** -- The aggregated evaluation result. Aggregation " +"typically uses some variant of a weighted average." msgstr "" -"**([#842](https://github.com/adap/flower/pull/842), " -"[#844](https://github.com/adap/flower/pull/844), " -"[#845](https://github.com/adap/flower/pull/845), " -"[#847](https://github.com/adap/flower/pull/847), " -"[#993](https://github.com/adap/flower/pull/993), " -"[#994](https://github.com/adap/flower/pull/994))" -#: ../../source/ref-changelog.md:21 +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:5 of msgid "" -"Subclasses of `Client` and `NumPyClient` can now store local state that " -"remains on the client. Let's start with the highlight first: this new " -"feature is compatible with both simulated clients (via " -"`start_simulation`) and networked clients (via `start_client`). It's also" -" the first preview of new abstractions like `Context` and `RecordSet`. " -"Clients can access state of type `RecordSet` via `state: RecordSet = " -"self.context.state`. Changes to this `RecordSet` are preserved across " -"different rounds of execution to enable stateful computations in a " -"unified way across simulation and deployment." +"Successful updates from the previously selected and configured clients. " +"Each pair of `(ClientProxy, FitRes)` constitutes a successful update from" +" one of the previously selected clients. Not that not all previously " +"selected clients are necessarily included in this list: a client might " +"drop out and not submit a result. For each client that did not submit an " +"update, there should be an `Exception` in `failures`." 
msgstr "" -#: ../../source/ref-changelog.md:23 -#, fuzzy +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:17 of msgid "" -"**Improve performance** " -"([#2293](https://github.com/adap/flower/pull/2293))" +"**parameters** -- If parameters are returned, then the server will treat " +"these as the new global model parameters (i.e., it will replace the " +"previous parameters with the ones returned from this method). If `None` " +"is returned (e.g., because there were only failures and no viable " +"results) then the server will no update the previous model parameters, " +"the updates received in this round are discarded, and the global model " +"parameters remain the same." msgstr "" -"**Supprimer les stratégies expérimentales** " -"([#1280](https://github.com/adap/flower/pull/1280))" -#: ../../source/ref-changelog.md:25 +#: flwr.server.strategy.strategy.Strategy.evaluate:3 of msgid "" -"Flower is faster than ever. All `FedAvg`-derived strategies now use in-" -"place aggregation to reduce memory consumption. The Flower client " -"serialization/deserialization has been rewritten from the ground up, " -"which results in significant speedups, especially when the client-side " -"training time is short." +"This function can be used to perform centralized (i.e., server-side) " +"evaluation of model parameters." msgstr "" -#: ../../source/ref-changelog.md:27 -#, fuzzy +#: flwr.server.strategy.strategy.Strategy.evaluate:11 of msgid "" -"**Support Federated Learning with Apple MLX and Flower** " -"([#2693](https://github.com/adap/flower/pull/2693))" +"**evaluation_result** -- The evaluation result, usually a Tuple " +"containing loss and a dictionary containing task-specific metrics (e.g., " +"accuracy)." msgstr "" -"**Ajouter un nouvel exemple d'apprentissage fédéré utilisant fastai et " -"Flower** ([#1598](https://github.com/adap/flower/pull/1598))" -#: ../../source/ref-changelog.md:29 +#: flwr.server.strategy.strategy.Strategy.initialize_parameters:6 of msgid "" -"Flower has official support for federated learning using [Apple " -"MLX](https://ml-explore.github.io/mlx) via the new `quickstart-mlx` code " -"example." +"**parameters** -- If parameters are returned, then the server will treat " +"these as the initial global model parameters." msgstr "" -#: ../../source/ref-changelog.md:31 +#: ../../source/ref-api/flwr.server.workflow.rst:2 #, fuzzy +msgid "workflow" +msgstr "Flux de travail" + +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 msgid "" -"**Introduce new XGBoost cyclic strategy** " -"([#2666](https://github.com/adap/flower/pull/2666), " -"[#2668](https://github.com/adap/flower/pull/2668))" +":py:obj:`DefaultWorkflow `\\ " +"\\(\\[fit\\_workflow\\, ...\\]\\)" msgstr "" -"**Introduction du SDK iOS (aperçu)** " -"([#1621](https://github.com/adap/flower/pull/1621), " -"[#1764](https://github.com/adap/flower/pull/1764))" -#: ../../source/ref-changelog.md:33 -msgid "" -"A new strategy called `FedXgbCyclic` supports a client-by-client style of" -" training (often called cyclic). The `xgboost-comprehensive` code example" -" shows how to use it in a full project. In addition to that, `xgboost-" -"comprehensive` now also supports simulation mode. With this, Flower " -"offers best-in-class XGBoost support." +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.workflow.default_workflows.DefaultWorkflow:1 of +msgid "Default workflow in Flower." 
msgstr "" -#: ../../source/ref-changelog.md:35 -#, fuzzy +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 msgid "" -"**Support Python 3.11** " -"([#2394](https://github.com/adap/flower/pull/2394))" -msgstr "" -"**Support Python 3.10** " -"([#1320](https://github.com/adap/flower/pull/1320))" +":py:obj:`SecAggPlusWorkflow `\\ " +"\\(num\\_shares\\, ...\\[\\, ...\\]\\)" +msgstr "" -#: ../../source/ref-changelog.md:37 -msgid "" -"Framework tests now run on Python 3.8, 3.9, 3.10, and 3.11. This will " -"ensure better support for users using more recent Python versions." +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:1 +#: of +msgid "The workflow for the SecAgg+ protocol." msgstr "" -#: ../../source/ref-changelog.md:39 -#, fuzzy +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 msgid "" -"**Update gRPC and ProtoBuf dependencies** " -"([#2814](https://github.com/adap/flower/pull/2814))" +":py:obj:`SecAggWorkflow `\\ " +"\\(reconstruction\\_threshold\\, \\*\\)" msgstr "" -"**Ajouter une nouvelle stratégie `FedProx`** " -"([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:41 -msgid "" -"The `grpcio` and `protobuf` dependencies were updated to their latest " -"versions for improved security and performance." +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:1 of +msgid "The workflow for the SecAgg protocol." msgstr "" -#: ../../source/ref-changelog.md:43 +#: ../../source/ref-api/flwr.server.workflow.DefaultWorkflow.rst:2 +#, fuzzy +msgid "DefaultWorkflow" +msgstr "Flux de travail" + +#: ../../source/ref-api/flwr.server.workflow.SecAggPlusWorkflow.rst:2 #, fuzzy +msgid "SecAggPlusWorkflow" +msgstr "Flux de travail" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:3 +#: of msgid "" -"**Introduce Docker image for Flower server** " -"([#2700](https://github.com/adap/flower/pull/2700), " -"[#2688](https://github.com/adap/flower/pull/2688), " -"[#2705](https://github.com/adap/flower/pull/2705), " -"[#2695](https://github.com/adap/flower/pull/2695), " -"[#2747](https://github.com/adap/flower/pull/2747), " -"[#2746](https://github.com/adap/flower/pull/2746), " -"[#2680](https://github.com/adap/flower/pull/2680), " -"[#2682](https://github.com/adap/flower/pull/2682), " -"[#2701](https://github.com/adap/flower/pull/2701))" +"The SecAgg+ protocol ensures the secure summation of integer vectors " +"owned by multiple parties, without accessing any individual integer " +"vector. This workflow allows the server to compute the weighted average " +"of model parameters across all clients, ensuring individual contributions" +" remain private. This is achieved by clients sending both, a weighting " +"factor and a weighted version of the locally updated parameters, both of " +"which are masked for privacy. Specifically, each client uploads \"[w, w *" +" params]\" with masks, where weighting factor 'w' is the number of " +"examples ('num_examples') and 'params' represents the model parameters " +"('parameters') from the client's `FitRes`. The server then aggregates " +"these contributions to compute the weighted average of model parameters." 
msgstr "" -"**([#842](https://github.com/adap/flower/pull/842), " -"[#844](https://github.com/adap/flower/pull/844), " -"[#845](https://github.com/adap/flower/pull/845), " -"[#847](https://github.com/adap/flower/pull/847), " -"[#993](https://github.com/adap/flower/pull/993), " -"[#994](https://github.com/adap/flower/pull/994))" -#: ../../source/ref-changelog.md:45 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:14 +#: of msgid "" -"The Flower server can now be run using an official Docker image. A new " -"how-to guide explains [how to run Flower using " -"Docker](https://flower.ai/docs/framework/how-to-run-flower-using-" -"docker.html). An official Flower client Docker image will follow." +"The protocol involves four main stages: - 'setup': Send SecAgg+ " +"configuration to clients and collect their public keys. - 'share keys': " +"Broadcast public keys among clients and collect encrypted secret" msgstr "" -#: ../../source/ref-changelog.md:47 -#, fuzzy -msgid "" -"**Introduce** `flower-via-docker-compose` **example** " -"([#2626](https://github.com/adap/flower/pull/2626))" +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:17 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:17 +#: of +msgid "key shares." msgstr "" -"**Introduire une nouvelle ligne de base pour les fleurs : FedAvg " -"FEMNIST** ([#1655](https://github.com/adap/flower/pull/1655))" -#: ../../source/ref-changelog.md:49 -#, fuzzy +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:18 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:18 +#: of msgid "" -"**Introduce** `quickstart-sklearn-tabular` **example** " -"([#2719](https://github.com/adap/flower/pull/2719))" +"'collect masked vectors': Forward encrypted secret key shares to target " +"clients and collect masked model parameters." msgstr "" -"**Ajouter une nouvelle stratégie `FedProx`** " -"([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:51 -#, fuzzy +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:20 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:20 +#: of msgid "" -"**Introduce** `custom-metrics` **example** " -"([#1958](https://github.com/adap/flower/pull/1958))" +"'unmask': Collect secret key shares to decrypt and aggregate the model " +"parameters." msgstr "" -"**Ajouter une nouvelle stratégie `FedProx`** " -"([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:53 -#, fuzzy +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:22 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:22 +#: of msgid "" -"**Update code examples to use Flower Datasets** " -"([#2450](https://github.com/adap/flower/pull/2450), " -"[#2456](https://github.com/adap/flower/pull/2456), " -"[#2318](https://github.com/adap/flower/pull/2318), " -"[#2712](https://github.com/adap/flower/pull/2712))" +"Only the aggregated model parameters are exposed and passed to " +"`Strategy.aggregate_fit`, ensuring individual data privacy." 
msgstr "" -"Mettre à jour les outils de développement " -"([#1231](https://github.com/adap/flower/pull/1231), " -"[#1276](https://github.com/adap/flower/pull/1276), " -"[#1301](https://github.com/adap/flower/pull/1301), " -"[#1310](https://github.com/adap/flower/pull/1310))" -#: ../../source/ref-changelog.md:55 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:25 +#: of msgid "" -"Several code examples were updated to use [Flower " -"Datasets](https://flower.ai/docs/datasets/)." +"The number of shares into which each client's private key is split under " +"the SecAgg+ protocol. If specified as a float, it represents the " +"proportion of all selected clients, and the number of shares will be set " +"dynamically in the run time. A private key can be reconstructed from " +"these shares, allowing for the secure aggregation of model updates. Each " +"client sends one share to each of its neighbors while retaining one." msgstr "" -#: ../../source/ref-changelog.md:57 -#, fuzzy +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:25 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:32 +#: of msgid "" -"**General updates to Flower Examples** " -"([#2381](https://github.com/adap/flower/pull/2381), " -"[#2805](https://github.com/adap/flower/pull/2805), " -"[#2782](https://github.com/adap/flower/pull/2782), " -"[#2806](https://github.com/adap/flower/pull/2806), " -"[#2829](https://github.com/adap/flower/pull/2829), " -"[#2825](https://github.com/adap/flower/pull/2825), " -"[#2816](https://github.com/adap/flower/pull/2816), " -"[#2726](https://github.com/adap/flower/pull/2726), " -"[#2659](https://github.com/adap/flower/pull/2659), " -"[#2655](https://github.com/adap/flower/pull/2655))" +"The minimum number of shares required to reconstruct a client's private " +"key, or, if specified as a float, it represents the proportion of the " +"total number of shares needed for reconstruction. This threshold ensures " +"privacy by allowing for the recovery of contributions from dropped " +"clients during aggregation, without compromising individual client data." msgstr "" -"**Améliorer l'API (expérimentale) du pilote** " -"([#1663](https://github.com/adap/flower/pull/1663), " -"[#1666](https://github.com/adap/flower/pull/1666), " -"[#1667](https://github.com/adap/flower/pull/1667), " -"[#1664](https://github.com/adap/flower/pull/1664), " -"[#1675](https://github.com/adap/flower/pull/1675), " -"[#1676](https://github.com/adap/flower/pull/1676), " -"[#1693](https://github.com/adap/flower/pull/1693), " -"[#1662](https://github.com/adap/flower/pull/1662), " -"[#1794](https://github.com/adap/flower/pull/1794))" -#: ../../source/ref-changelog.md:59 -msgid "Many Flower code examples received substantial updates." +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:31 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:38 +#: of +msgid "" +"The maximum value of the weight that can be assigned to any single " +"client's update during the weighted average calculation on the server " +"side, e.g., in the FedAvg algorithm." 
msgstr "" -#: ../../source/ref-changelog.md:61 ../../source/ref-changelog.md:154 -#, fuzzy -msgid "**Update Flower Baselines**" -msgstr "Demande pour une nouvelle Flower Baseline" +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:35 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:42 +#: of +msgid "" +"The range within which model parameters are clipped before quantization. " +"This parameter ensures each model parameter is bounded within " +"[-clipping_range, clipping_range], facilitating quantization." +msgstr "" -#: ../../source/ref-changelog.md:63 -#, fuzzy +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:39 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:46 +#: of msgid "" -"HFedXGBoost ([#2226](https://github.com/adap/flower/pull/2226), " -"[#2771](https://github.com/adap/flower/pull/2771))" +"The size of the range into which floating-point model parameters are " +"quantized, mapping each parameter to an integer in [0, " +"quantization_range-1]. This facilitates cryptographic operations on the " +"model updates." msgstr "" -"**Nouvel exemple de code JAX** " -"([#906](https://github.com/adap/flower/pull/906), " -"[#1143](https://github.com/adap/flower/pull/1143))" -#: ../../source/ref-changelog.md:64 -#, fuzzy -msgid "FedVSSL ([#2412](https://github.com/adap/flower/pull/2412))" +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:43 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:50 +#: of +msgid "" +"The range of values from which random mask entries are uniformly sampled " +"([0, modulus_range-1]). `modulus_range` must be less than 4294967296. " +"Please use 2**n values for `modulus_range` to prevent overflow issues." msgstr "" -"Amélioration de la documentation sur le serveur gRPC " -"([#841](https://github.com/adap/flower/pull/841))" -#: ../../source/ref-changelog.md:65 -#, fuzzy -msgid "FedNova ([#2179](https://github.com/adap/flower/pull/2179))" +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:47 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:54 +#: of +msgid "" +"The timeout duration in seconds. If specified, the workflow will wait for" +" replies for this duration each time. If `None`, there is no time limit " +"and the workflow will wait until replies for all messages are received." msgstr "" -"**Ajouter une nouvelle stratégie `FedProx`** " -"([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:66 -#, fuzzy -msgid "HeteroFL ([#2439](https://github.com/adap/flower/pull/2439))" -msgstr "Nouvelle référence API ([#554](https://github.com/adap/flower/pull/554))" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:61 +#: of +msgid "" +"Generally, higher `num_shares` means more robust to dropouts while " +"increasing the computational costs; higher `reconstruction_threshold` " +"means better privacy guarantees but less tolerance to dropouts." 
+msgstr "" -#: ../../source/ref-changelog.md:67 -#, fuzzy -msgid "FedAvgM ([#2246](https://github.com/adap/flower/pull/2246))" -msgstr "Nouvelle référence API ([#554](https://github.com/adap/flower/pull/554))" +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:58 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:64 +#: of +msgid "Too large `max_weight` may compromise the precision of the quantization." +msgstr "" -#: ../../source/ref-changelog.md:68 -#, fuzzy -msgid "FedPara ([#2722](https://github.com/adap/flower/pull/2722))" +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:59 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:65 +#: of +msgid "`modulus_range` must be 2**n and larger than `quantization_range`." msgstr "" -"**Renommé stratégie q-FedAvg** " -"([#802](https://github.com/adap/flower/pull/802))" -#: ../../source/ref-changelog.md:70 -#, fuzzy -msgid "" -"**Improve documentation** " -"([#2674](https://github.com/adap/flower/pull/2674), " -"[#2480](https://github.com/adap/flower/pull/2480), " -"[#2826](https://github.com/adap/flower/pull/2826), " -"[#2727](https://github.com/adap/flower/pull/2727), " -"[#2761](https://github.com/adap/flower/pull/2761), " -"[#2900](https://github.com/adap/flower/pull/2900))" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:66 +#: of +msgid "" +"When `num_shares` is a float, it is interpreted as the proportion of all " +"selected clients, and hence the number of shares will be determined in " +"the runtime. This allows for dynamic adjustment based on the total number" +" of participating clients." msgstr "" -"**Mise à jour de la documentation** " -"([#1629](https://github.com/adap/flower/pull/1629), " -"[#1628](https://github.com/adap/flower/pull/1628), " -"[#1620](https://github.com/adap/flower/pull/1620), " -"[#1618](https://github.com/adap/flower/pull/1618), " -"[#1617](https://github.com/adap/flower/pull/1617), " -"[#1613](https://github.com/adap/flower/pull/1613), " -"[#1614](https://github.com/adap/flower/pull/1614))" -#: ../../source/ref-changelog.md:72 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:69 +#: of msgid "" -"**Improved testing and development infrastructure** " -"([#2797](https://github.com/adap/flower/pull/2797), " -"[#2676](https://github.com/adap/flower/pull/2676), " -"[#2644](https://github.com/adap/flower/pull/2644), " -"[#2656](https://github.com/adap/flower/pull/2656), " -"[#2848](https://github.com/adap/flower/pull/2848), " -"[#2675](https://github.com/adap/flower/pull/2675), " -"[#2735](https://github.com/adap/flower/pull/2735), " -"[#2767](https://github.com/adap/flower/pull/2767), " -"[#2732](https://github.com/adap/flower/pull/2732), " -"[#2744](https://github.com/adap/flower/pull/2744), " -"[#2681](https://github.com/adap/flower/pull/2681), " -"[#2699](https://github.com/adap/flower/pull/2699), " -"[#2745](https://github.com/adap/flower/pull/2745), " -"[#2734](https://github.com/adap/flower/pull/2734), " -"[#2731](https://github.com/adap/flower/pull/2731), " -"[#2652](https://github.com/adap/flower/pull/2652), " -"[#2720](https://github.com/adap/flower/pull/2720), " -"[#2721](https://github.com/adap/flower/pull/2721), " -"[#2717](https://github.com/adap/flower/pull/2717), " -"[#2864](https://github.com/adap/flower/pull/2864), " -"[#2694](https://github.com/adap/flower/pull/2694), " -"[#2709](https://github.com/adap/flower/pull/2709), " 
-"[#2658](https://github.com/adap/flower/pull/2658), " -"[#2796](https://github.com/adap/flower/pull/2796), " -"[#2692](https://github.com/adap/flower/pull/2692), " -"[#2657](https://github.com/adap/flower/pull/2657), " -"[#2813](https://github.com/adap/flower/pull/2813), " -"[#2661](https://github.com/adap/flower/pull/2661), " -"[#2398](https://github.com/adap/flower/pull/2398))" +"Similarly, when `reconstruction_threshold` is a float, it is interpreted " +"as the proportion of the number of shares needed for the reconstruction " +"of a private key. This feature enables flexibility in setting the " +"security threshold relative to the number of distributed shares." msgstr "" -#: ../../source/ref-changelog.md:74 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:73 +#: of msgid "" -"The Flower testing and development infrastructure has received " -"substantial updates. This makes Flower 1.7 the most tested release ever." +"`num_shares`, `reconstruction_threshold`, and the quantization parameters" +" (`clipping_range`, `quantization_range`, `modulus_range`) play critical " +"roles in balancing privacy, robustness, and efficiency within the SecAgg+" +" protocol." msgstr "" -#: ../../source/ref-changelog.md:76 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"**Update dependencies** " -"([#2753](https://github.com/adap/flower/pull/2753), " -"[#2651](https://github.com/adap/flower/pull/2651), " -"[#2739](https://github.com/adap/flower/pull/2739), " -"[#2837](https://github.com/adap/flower/pull/2837), " -"[#2788](https://github.com/adap/flower/pull/2788), " -"[#2811](https://github.com/adap/flower/pull/2811), " -"[#2774](https://github.com/adap/flower/pull/2774), " -"[#2790](https://github.com/adap/flower/pull/2790), " -"[#2751](https://github.com/adap/flower/pull/2751), " -"[#2850](https://github.com/adap/flower/pull/2850), " -"[#2812](https://github.com/adap/flower/pull/2812), " -"[#2872](https://github.com/adap/flower/pull/2872), " -"[#2736](https://github.com/adap/flower/pull/2736), " -"[#2756](https://github.com/adap/flower/pull/2756), " -"[#2857](https://github.com/adap/flower/pull/2857), " -"[#2757](https://github.com/adap/flower/pull/2757), " -"[#2810](https://github.com/adap/flower/pull/2810), " -"[#2740](https://github.com/adap/flower/pull/2740), " -"[#2789](https://github.com/adap/flower/pull/2789))" +":py:obj:`collect_masked_vectors_stage " +"`\\" +" \\(driver\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:78 -msgid "" -"**General improvements** " -"([#2803](https://github.com/adap/flower/pull/2803), " -"[#2847](https://github.com/adap/flower/pull/2847), " -"[#2877](https://github.com/adap/flower/pull/2877), " -"[#2690](https://github.com/adap/flower/pull/2690), " -"[#2889](https://github.com/adap/flower/pull/2889), " -"[#2874](https://github.com/adap/flower/pull/2874), " -"[#2819](https://github.com/adap/flower/pull/2819), " -"[#2689](https://github.com/adap/flower/pull/2689), " -"[#2457](https://github.com/adap/flower/pull/2457), " -"[#2870](https://github.com/adap/flower/pull/2870), " -"[#2669](https://github.com/adap/flower/pull/2669), " -"[#2876](https://github.com/adap/flower/pull/2876), " -"[#2885](https://github.com/adap/flower/pull/2885), " -"[#2858](https://github.com/adap/flower/pull/2858), " -"[#2867](https://github.com/adap/flower/pull/2867), " -"[#2351](https://github.com/adap/flower/pull/2351), " 
-"[#2886](https://github.com/adap/flower/pull/2886), " -"[#2860](https://github.com/adap/flower/pull/2860), " -"[#2828](https://github.com/adap/flower/pull/2828), " -"[#2869](https://github.com/adap/flower/pull/2869), " -"[#2875](https://github.com/adap/flower/pull/2875), " -"[#2733](https://github.com/adap/flower/pull/2733), " -"[#2488](https://github.com/adap/flower/pull/2488), " -"[#2646](https://github.com/adap/flower/pull/2646), " -"[#2879](https://github.com/adap/flower/pull/2879), " -"[#2821](https://github.com/adap/flower/pull/2821), " -"[#2855](https://github.com/adap/flower/pull/2855), " -"[#2800](https://github.com/adap/flower/pull/2800), " -"[#2807](https://github.com/adap/flower/pull/2807), " -"[#2801](https://github.com/adap/flower/pull/2801), " -"[#2804](https://github.com/adap/flower/pull/2804), " -"[#2851](https://github.com/adap/flower/pull/2851), " -"[#2787](https://github.com/adap/flower/pull/2787), " -"[#2852](https://github.com/adap/flower/pull/2852), " -"[#2672](https://github.com/adap/flower/pull/2672), " -"[#2759](https://github.com/adap/flower/pull/2759))" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of +msgid "Execute the 'collect masked vectors' stage." msgstr "" -#: ../../source/ref-changelog.md:82 -#, fuzzy +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"**Deprecate** `start_numpy_client` " -"([#2563](https://github.com/adap/flower/pull/2563), " -"[#2718](https://github.com/adap/flower/pull/2718))" +":py:obj:`setup_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" -"**Nouvelles stratégies intégrées** " -"([#828](https://github.com/adap/flower/pull/828) " -"[#822](https://github.com/adap/flower/pull/822))" -#: ../../source/ref-changelog.md:84 -msgid "" -"Until now, clients of type `NumPyClient` needed to be started via " -"`start_numpy_client`. In our efforts to consolidate framework APIs, we " -"have introduced changes, and now all client types should start via " -"`start_client`. To continue using `NumPyClient` clients, you simply need " -"to first call the `.to_client()` method and then pass returned `Client` " -"object to `start_client`. The examples and the documentation have been " -"updated accordingly." +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.setup_stage:1 +#: of +msgid "Execute the 'setup' stage." msgstr "" -#: ../../source/ref-changelog.md:86 -#, fuzzy +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"**Deprecate legacy DP wrappers** " -"([#2749](https://github.com/adap/flower/pull/2749))" +":py:obj:`share_keys_stage " +"`\\ " +"\\(driver\\, context\\, state\\)" msgstr "" -"**Supprimez KerasClient** " -"([#857](https://github.com/adap/flower/pull/857))" -#: ../../source/ref-changelog.md:88 -msgid "" -"Legacy DP wrapper classes are deprecated, but still functional. This is " -"in preparation for an all-new pluggable version of differential privacy " -"support in Flower." 
+#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.share_keys_stage:1 +#: of +msgid "Execute the 'share keys' stage." msgstr "" -#: ../../source/ref-changelog.md:90 -#, fuzzy +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"**Make optional arg** `--callable` **in** `flower-client` **a required " -"positional arg** ([#2673](https://github.com/adap/flower/pull/2673))" +":py:obj:`unmask_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" -"**Log** `Client` **exceptions dans le moteur de client virtuel** " -"([#1493](https://github.com/adap/flower/pull/1493))" -#: ../../source/ref-changelog.md:92 -#, fuzzy -msgid "" -"**Rename** `certificates` **to** `root_certificates` **in** `Driver` " -"([#2890](https://github.com/adap/flower/pull/2890))" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.unmask_stage:1 +#: of +msgid "Execute the 'unmask' stage." msgstr "" -"**Rename** `rnd` **to** `server_round` " -"([#1321](https://github.com/adap/flower/pull/1321))" -#: ../../source/ref-changelog.md:94 +#: ../../source/ref-api/flwr.server.workflow.SecAggWorkflow.rst:2 #, fuzzy +msgid "SecAggWorkflow" +msgstr "Flux de travail" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:1 of msgid "" -"**Drop experimental** `Task` **fields** " -"([#2866](https://github.com/adap/flower/pull/2866), " -"[#2865](https://github.com/adap/flower/pull/2865))" +"Bases: " +":py:class:`~flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow`" msgstr "" -"**Rename** `Weights` **to** `NDArrays` " -"([#1258](https://github.com/adap/flower/pull/1258), " -"[#1259](https://github.com/adap/flower/pull/1259))" -#: ../../source/ref-changelog.md:96 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:3 of msgid "" -"Experimental fields `sa`, `legacy_server_message` and " -"`legacy_client_message` were removed from `Task` message. The removed " -"fields are superseded by the new `RecordSet` abstraction." +"The SecAgg protocol ensures the secure summation of integer vectors owned" +" by multiple parties, without accessing any individual integer vector. " +"This workflow allows the server to compute the weighted average of model " +"parameters across all clients, ensuring individual contributions remain " +"private. This is achieved by clients sending both, a weighting factor and" +" a weighted version of the locally updated parameters, both of which are " +"masked for privacy. Specifically, each client uploads \"[w, w * params]\"" +" with masks, where weighting factor 'w' is the number of examples " +"('num_examples') and 'params' represents the model parameters " +"('parameters') from the client's `FitRes`. The server then aggregates " +"these contributions to compute the weighted average of model parameters." msgstr "" -#: ../../source/ref-changelog.md:98 -#, fuzzy +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:14 of msgid "" -"**Retire MXNet examples** " -"([#2724](https://github.com/adap/flower/pull/2724))" +"The protocol involves four main stages: - 'setup': Send SecAgg " +"configuration to clients and collect their public keys. 
- 'share keys': " +"Broadcast public keys among clients and collect encrypted secret" msgstr "" -"**Nouvel exemple de code scikit-learn** " -"([#748](https://github.com/adap/flower/pull/748))" -#: ../../source/ref-changelog.md:100 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:54 of msgid "" -"The development of the MXNet fremework has ended and the project is now " -"[archived on GitHub](https://github.com/apache/mxnet). Existing MXNet " -"examples won't receive updates." +"Each client's private key is split into N shares under the SecAgg " +"protocol, where N is the number of selected clients." msgstr "" -#: ../../source/ref-changelog.md:102 -#, fuzzy -msgid "v1.6.0 (2023-11-28)" -msgstr "v1.4.0 (2023-04-21)" - -#: ../../source/ref-changelog.md:108 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:56 of msgid "" -"`Aashish Kolluri`, `Adam Narozniak`, `Alessio Mora`, `Barathwaja S`, " -"`Charles Beauville`, `Daniel J. Beutel`, `Daniel Nata Nugraha`, `Gabriel " -"Mota`, `Heng Pan`, `Ivan Agarský`, `JS.KIM`, `Javier`, `Marius Schlegel`," -" `Navin Chandra`, `Nic Lane`, `Peterpan828`, `Qinbin Li`, `Shaz-hash`, " -"`Steve Laskaridis`, `Taner Topal`, `William Lindskog`, `Yan Gao`, " -"`cnxdeveloper`, `k3nfalt` " +"Generally, higher `reconstruction_threshold` means better privacy " +"guarantees but less tolerance to dropouts." msgstr "" -#: ../../source/ref-changelog.md:112 -#, fuzzy +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:60 of msgid "" -"**Add experimental support for Python 3.12** " -"([#2565](https://github.com/adap/flower/pull/2565))" -msgstr "" -"**Ajouter la prise en charge expérimentale de Python 3.10 et Python " -"3.11** ([#1135](https://github.com/adap/flower/pull/1135))" +"When `reconstruction_threshold` is a float, it is interpreted as the " +"proportion of the number of all selected clients needed for the " +"reconstruction of a private key. This feature enables flexibility in " +"setting the security threshold relative to the number of selected " +"clients." +msgstr "" -#: ../../source/ref-changelog.md:114 -#, fuzzy +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:64 of msgid "" -"**Add new XGBoost examples** " -"([#2612](https://github.com/adap/flower/pull/2612), " -"[#2554](https://github.com/adap/flower/pull/2554), " -"[#2617](https://github.com/adap/flower/pull/2617), " -"[#2618](https://github.com/adap/flower/pull/2618), " -"[#2619](https://github.com/adap/flower/pull/2619), " -"[#2567](https://github.com/adap/flower/pull/2567))" +"`reconstruction_threshold`, and the quantization parameters " +"(`clipping_range`, `quantization_range`, `modulus_range`) play critical " +"roles in balancing privacy, robustness, and efficiency within the SecAgg " +"protocol." msgstr "" -"**([#1520](https://github.com/adap/flower/pull/1520), " -"[#1525](https://github.com/adap/flower/pull/1525), " -"[#1545](https://github.com/adap/flower/pull/1545), " -"[#1546](https://github.com/adap/flower/pull/1546), " -"[#1550](https://github.com/adap/flower/pull/1550), " -"[#1551](https://github.com/adap/flower/pull/1551), " -"[#1567](https://github.com/adap/flower/pull/1567))" -#: ../../source/ref-changelog.md:116 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"We have added a new `xgboost-quickstart` example alongside a new " -"`xgboost-comprehensive` example that goes more in-depth." 
+":py:obj:`collect_masked_vectors_stage " +"`\\ " +"\\(driver\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:118 -#, fuzzy +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"**Add Vertical FL example** " -"([#2598](https://github.com/adap/flower/pull/2598))" +":py:obj:`setup_stage `\\" +" \\(driver\\, context\\, state\\)" msgstr "" -"**Nouvel exemple de code CoreML pour iOS** " -"([#1289](https://github.com/adap/flower/pull/1289))" -#: ../../source/ref-changelog.md:120 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"We had many questions about Vertical Federated Learning using Flower, so " -"we decided to add an simple example for it on the [Titanic " -"dataset](https://www.kaggle.com/competitions/titanic/data) alongside a " -"tutorial (in the README)." +":py:obj:`share_keys_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" -#: ../../source/ref-changelog.md:122 -#, fuzzy +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"**Support custom** `ClientManager` **in** `start_driver()` " -"([#2292](https://github.com/adap/flower/pull/2292))" +":py:obj:`unmask_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" -"Ajout de la prise en charge d'un `ClientManager` personnalisé comme " -"paramètre de `start_simulation` " -"([#1171](https://github.com/adap/flower/pull/1171))" -#: ../../source/ref-changelog.md:124 +#: ../../source/ref-api/flwr.simulation.rst:2 #, fuzzy +msgid "simulation" +msgstr "Simulation de moniteur" + +#: ../../source/ref-api/flwr.simulation.rst:18::1 msgid "" -"**Update REST API to support create and delete nodes** " -"([#2283](https://github.com/adap/flower/pull/2283))" +":py:obj:`start_simulation `\\ \\(\\*\\," +" client\\_fn\\[\\, ...\\]\\)" msgstr "" -"**Nouvelle stratégie expérimentale TensorBoard** " -"([#789](https://github.com/adap/flower/pull/789))" -#: ../../source/ref-changelog.md:126 +#: ../../source/ref-api/flwr.simulation.rst:18::1 +#: flwr.simulation.app.start_simulation:1 of #, fuzzy +msgid "Start a Ray-based Flower simulation server." +msgstr "Simulation de moniteur" + +#: ../../source/ref-api/flwr.simulation.rst:18::1 msgid "" -"**Update the Android SDK** " -"([#2187](https://github.com/adap/flower/pull/2187))" +":py:obj:`run_simulation `\\ " +"\\(server\\_app\\, client\\_app\\, ...\\)" msgstr "" -"**Introduire une nouvelle ligne de base pour les fleurs : FedAvg " -"FEMNIST** ([#1655](https://github.com/adap/flower/pull/1655))" -#: ../../source/ref-changelog.md:128 -msgid "Add gRPC request-response capability to the Android SDK." +#: ../../source/ref-api/flwr.simulation.rst:18::1 +#: flwr.simulation.run_simulation.run_simulation:1 of +msgid "Run a Flower App using the Simulation Engine." msgstr "" -#: ../../source/ref-changelog.md:130 +#: ../../source/ref-api/flwr.simulation.run_simulation.rst:2 #, fuzzy +msgid "run\\_simulation" +msgstr "Simulation de moniteur" + +#: flwr.simulation.run_simulation.run_simulation:3 of msgid "" -"**Update the C++ SDK** " -"([#2537](https://github.com/adap/flower/pull/2537), " -"[#2528](https://github.com/adap/flower/pull/2528), " -"[#2523](https://github.com/adap/flower/pull/2523), " -"[#2522](https://github.com/adap/flower/pull/2522))" +"The `ServerApp` to be executed. 
It will send messages to different " +"`ClientApp` instances running on different (virtual) SuperNodes." msgstr "" -"Mettre à jour les outils de développement " -"([#1231](https://github.com/adap/flower/pull/1231), " -"[#1276](https://github.com/adap/flower/pull/1276), " -"[#1301](https://github.com/adap/flower/pull/1301), " -"[#1310](https://github.com/adap/flower/pull/1310))" -#: ../../source/ref-changelog.md:132 -msgid "Add gRPC request-response capability to the C++ SDK." +#: flwr.simulation.run_simulation.run_simulation:6 of +msgid "" +"The `ClientApp` to be executed by each of the SuperNodes. It will receive" +" messages sent by the `ServerApp`." msgstr "" -#: ../../source/ref-changelog.md:134 -#, fuzzy +#: flwr.simulation.run_simulation.run_simulation:9 of msgid "" -"**Make HTTPS the new default** " -"([#2591](https://github.com/adap/flower/pull/2591), " -"[#2636](https://github.com/adap/flower/pull/2636))" +"Number of nodes that run a ClientApp. They can be sampled by a Driver in " +"the ServerApp and receive a Message describing what the ClientApp should " +"perform." msgstr "" -"**Exemple de code mis à jour** " -"([#1344](https://github.com/adap/flower/pull/1344), " -"[#1347](https://github.com/adap/flower/pull/1347))" -#: ../../source/ref-changelog.md:136 -msgid "" -"Flower is moving to HTTPS by default. The new `flower-server` requires " -"passing `--certificates`, but users can enable `--insecure` to use HTTP " -"for prototyping. The same applies to `flower-client`, which can either " -"use user-provided credentials or gRPC-bundled certificates to connect to " -"an HTTPS-enabled server or requires opt-out via passing `--insecure` to " -"enable insecure HTTP connections." +#: flwr.simulation.run_simulation.run_simulation:13 of +msgid "A simulation backend that runs `ClientApp`s." msgstr "" -#: ../../source/ref-changelog.md:138 +#: flwr.simulation.run_simulation.run_simulation:15 of msgid "" -"For backward compatibility, `start_client()` and `start_numpy_client()` " -"will still start in insecure mode by default. In a future release, " -"insecure connections will require user opt-in by passing `insecure=True`." +"'A dictionary, e.g {\"\": , \"\": } to " +"configure a backend. Values supported in are those included by " +"`flwr.common.typing.ConfigsRecordValues`." msgstr "" -#: ../../source/ref-changelog.md:140 -#, fuzzy +#: flwr.simulation.run_simulation.run_simulation:19 of msgid "" -"**Unify client API** ([#2303](https://github.com/adap/flower/pull/2303), " -"[#2390](https://github.com/adap/flower/pull/2390), " -"[#2493](https://github.com/adap/flower/pull/2493))" +"A boolean to indicate whether to enable GPU growth on the main thread. " +"This is desirable if you make use of a TensorFlow model on your " +"`ServerApp` while having your `ClientApp` running on the same GPU. " +"Without enabling this, you might encounter an out-of-memory error because" +" TensorFlow, by default, allocates all GPU memory. Read more about how " +"`tf.config.experimental.set_memory_growth()` works in the TensorFlow " +"documentation: https://www.tensorflow.org/api/stable." msgstr "" -"**Mettre à jour les exemples de code** " -"([#1291](https://github.com/adap/flower/pull/1291), " -"[#1286](https://github.com/adap/flower/pull/1286), " -"[#1282](https://github.com/adap/flower/pull/1282))" -#: ../../source/ref-changelog.md:142 +#: flwr.simulation.run_simulation.run_simulation:26 of msgid "" -"Using the `client_fn`, Flower clients can interchangeably run as " -"standalone processes (i.e. 
via `start_client`) or in simulation (i.e. via"
-" `start_simulation`) without requiring changes to how the client class is"
-" defined and instantiated. The `to_client()` function is introduced to "
-"convert a `NumPyClient` to a `Client`."
+#: flwr.simulation.run_simulation.run_simulation:26 of
+msgid ""
+"When disabled, only INFO, WARNING and ERROR log messages will be shown. If"
+" enabled, DEBUG-level logs will be displayed."
 msgstr ""

-#: ../../source/ref-changelog.md:144
+#: ../../source/ref-api/flwr.simulation.start_simulation.rst:2
 #, fuzzy
+msgid "start\\_simulation"
+msgstr "démarrer_simulation"
+
+#: flwr.simulation.app.start_simulation:3 of
 msgid ""
-"**Add new** `Bulyan` **strategy** "
-"([#1817](https://github.com/adap/flower/pull/1817), "
-"[#1891](https://github.com/adap/flower/pull/1891))"
+"A function creating client instances. The function must take a single "
+"`str` argument called `cid`. It should return a single client instance of"
+" type Client. Note that the created client instances are ephemeral and "
+"will often be destroyed after a single method invocation. Since client "
+"instances are not long-lived, they should not attempt to carry state over"
+" method invocations. Any state required by the instance (model, dataset, "
+"hyperparameters, ...) should be (re-)created in either the call to "
+"`client_fn` or the call to any of the client methods (e.g., load "
+"evaluation data in the `evaluate` method itself)."
 msgstr ""
-"**Nouvelles stratégies intégrées** "
-"([#828](https://github.com/adap/flower/pull/828) "
-"[#822](https://github.com/adap/flower/pull/822))"

-#: ../../source/ref-changelog.md:146
-#, fuzzy
+#: flwr.simulation.app.start_simulation:13 of
 msgid ""
-"The new `Bulyan` strategy implements Bulyan by [El Mhamdi et al., "
-"2018](https://arxiv.org/abs/1802.07927)"
+"The total number of clients in this simulation. This must be set if "
+"`clients_ids` is not set and vice-versa."
 msgstr ""
-"La nouvelle stratégie `FedMedian` met en œuvre Federated Median "
-"(FedMedian) par [Yin et al., 2018] "
-"(https://arxiv.org/pdf/1803.01498v1.pdf)."

-#: ../../source/ref-changelog.md:148
-#, fuzzy
+#: flwr.simulation.app.start_simulation:16 of
 msgid ""
-"**Add new** `XGB Bagging` **strategy** "
-"([#2611](https://github.com/adap/flower/pull/2611))"
+"List `client_id`s for each client. This is only required if `num_clients`"
+" is not set. Setting both `num_clients` and `clients_ids` with "
+"`len(clients_ids)` not equal to `num_clients` generates an error."
 msgstr ""
-"**Ajouter une nouvelle stratégie `FedProx`** "
-"([#1619](https://github.com/adap/flower/pull/1619))"

-#: ../../source/ref-changelog.md:150 ../../source/ref-changelog.md:152
-#, fuzzy
+#: flwr.simulation.app.start_simulation:20 of
 msgid ""
-"**Introduce `WorkloadState`** "
-"([#2564](https://github.com/adap/flower/pull/2564), "
-"[#2632](https://github.com/adap/flower/pull/2632))"
+"CPU and GPU resources for a single client. Supported keys are `num_cpus` "
+"and `num_gpus`. To understand the GPU utilization caused by `num_gpus`, "
+"as well as using custom resources, please consult the Ray documentation."
msgstr "" -"**Nouvelles stratégies intégrées** " -"([#828](https://github.com/adap/flower/pull/828) " -"[#822](https://github.com/adap/flower/pull/822))" -#: ../../source/ref-changelog.md:156 -#, fuzzy +#: flwr.simulation.app.start_simulation:25 of msgid "" -"FedProx ([#2210](https://github.com/adap/flower/pull/2210), " -"[#2286](https://github.com/adap/flower/pull/2286), " -"[#2509](https://github.com/adap/flower/pull/2509))" +"An implementation of the abstract base class `flwr.server.Server`. If no " +"instance is provided, then `start_server` will create one." msgstr "" -"**Mettre à jour les exemples de code** " -"([#1291](https://github.com/adap/flower/pull/1291), " -"[#1286](https://github.com/adap/flower/pull/1286), " -"[#1282](https://github.com/adap/flower/pull/1282))" -#: ../../source/ref-changelog.md:158 -#, fuzzy +#: flwr.simulation.app.start_simulation:31 of msgid "" -"Baselines Docs ([#2290](https://github.com/adap/flower/pull/2290), " -"[#2400](https://github.com/adap/flower/pull/2400))" +"An implementation of the abstract base class `flwr.server.Strategy`. If " +"no strategy is provided, then `start_server` will use " +"`flwr.server.strategy.FedAvg`." msgstr "" -"**Nouvel exemple de code JAX** " -"([#906](https://github.com/adap/flower/pull/906), " -"[#1143](https://github.com/adap/flower/pull/1143))" -#: ../../source/ref-changelog.md:160 -#, fuzzy +#: flwr.simulation.app.start_simulation:35 of msgid "" -"FedMLB ([#2340](https://github.com/adap/flower/pull/2340), " -"[#2507](https://github.com/adap/flower/pull/2507))" +"An implementation of the abstract base class `flwr.server.ClientManager`." +" If no implementation is provided, then `start_simulation` will use " +"`flwr.server.client_manager.SimpleClientManager`." msgstr "" -"**Exemple de code mis à jour** " -"([#1344](https://github.com/adap/flower/pull/1344), " -"[#1347](https://github.com/adap/flower/pull/1347))" -#: ../../source/ref-changelog.md:162 -#, fuzzy +#: flwr.simulation.app.start_simulation:39 of msgid "" -"TAMUNA ([#2254](https://github.com/adap/flower/pull/2254), " -"[#2508](https://github.com/adap/flower/pull/2508))" +"Optional dictionary containing arguments for the call to `ray.init`. If " +"ray_init_args is None (the default), Ray will be initialized with the " +"following default args: { \"ignore_reinit_error\": True, " +"\"include_dashboard\": False } An empty dictionary can be used " +"(ray_init_args={}) to prevent any arguments from being passed to " +"ray.init." msgstr "" -"**Nouvelles stratégies intégrées** " -"([#828](https://github.com/adap/flower/pull/828) " -"[#822](https://github.com/adap/flower/pull/822))" -#: ../../source/ref-changelog.md:164 -#, fuzzy -msgid "FedMeta [#2438](https://github.com/adap/flower/pull/2438)" -msgstr "Nouvelle référence API ([#554](https://github.com/adap/flower/pull/554))" +#: flwr.simulation.app.start_simulation:39 of +msgid "" +"Optional dictionary containing arguments for the call to `ray.init`. 
If "
"ray_init_args is None (the default), Ray will be initialized with the "
"following default args:"
msgstr ""

#: flwr.simulation.app.start_simulation:43 of
msgid "{ \"ignore_reinit_error\": True, \"include_dashboard\": False }"
msgstr ""

#: flwr.simulation.app.start_simulation:45 of
msgid ""
"An empty dictionary can be used (ray_init_args={}) to prevent any "
"arguments from being passed to ray.init."
msgstr ""

#: flwr.simulation.app.start_simulation:48 of
msgid ""
"Set to True to prevent `ray.shutdown()` in case "
"`ray.is_initialized()=True`."
msgstr ""

#: flwr.simulation.app.start_simulation:50 of
msgid ""
"Optionally specify the type of actor to use. The actor object, which "
"persists throughout the simulation, will be the process in charge of "
"executing a ClientApp wrapping input argument `client_fn`."
msgstr ""

#: flwr.simulation.app.start_simulation:54 of
msgid ""
"If you want to create your own Actor classes, you might need to pass some"
" input arguments. You can use this dictionary for that purpose."
msgstr ""

#: flwr.simulation.app.start_simulation:57 of
msgid ""
"(default: \"DEFAULT\") Optional string (\"DEFAULT\" or \"SPREAD\") for "
"the VCE to choose in which node the actor is placed. If you are an "
"advanced user who needs more control, you can use lower-level scheduling "
"strategies to pin actors to specific compute nodes (e.g. via "
"NodeAffinitySchedulingStrategy). Please note this is an advanced feature."
" For all details, please refer to the Ray documentation: "
"https://docs.ray.io/en/latest/ray-core/scheduling/index.html"
msgstr ""

#: flwr.simulation.app.start_simulation:66 of
msgid "**hist** -- Object containing metrics from training."
+msgstr "" + +#: ../../source/ref-changelog.md:1 +msgid "Changelog" +msgstr "Changelog" -#: ../../source/ref-changelog.md:176 +#: ../../source/ref-changelog.md:3 #, fuzzy -msgid "niid-Bench [#2428](https://github.com/adap/flower/pull/2428)" -msgstr "Nouvelle référence API ([#554](https://github.com/adap/flower/pull/554))" +msgid "Unreleased" +msgstr "Inédit" -#: ../../source/ref-changelog.md:178 +#: ../../source/ref-changelog.md:5 ../../source/ref-changelog.md:19 +#: ../../source/ref-changelog.md:83 ../../source/ref-changelog.md:176 +#: ../../source/ref-changelog.md:276 ../../source/ref-changelog.md:360 +#: ../../source/ref-changelog.md:424 ../../source/ref-changelog.md:482 +#: ../../source/ref-changelog.md:551 ../../source/ref-changelog.md:680 +#: ../../source/ref-changelog.md:722 ../../source/ref-changelog.md:789 +#: ../../source/ref-changelog.md:855 ../../source/ref-changelog.md:900 +#: ../../source/ref-changelog.md:939 ../../source/ref-changelog.md:972 +#: ../../source/ref-changelog.md:1022 +msgid "What's new?" +msgstr "Quoi de neuf ?" + +#: ../../source/ref-changelog.md:7 ../../source/ref-changelog.md:71 +#: ../../source/ref-changelog.md:146 ../../source/ref-changelog.md:258 +#: ../../source/ref-changelog.md:348 ../../source/ref-changelog.md:412 +#: ../../source/ref-changelog.md:470 ../../source/ref-changelog.md:539 +#: ../../source/ref-changelog.md:601 ../../source/ref-changelog.md:620 +#: ../../source/ref-changelog.md:776 ../../source/ref-changelog.md:847 +#: ../../source/ref-changelog.md:884 ../../source/ref-changelog.md:927 +msgid "Incompatible changes" +msgstr "Changements incompatibles" + +#: ../../source/ref-changelog.md:9 ../../source/ref-changelog.md:73 +#: ../../source/ref-changelog.md:350 ../../source/ref-changelog.md:414 +#: ../../source/ref-changelog.md:472 ../../source/ref-changelog.md:541 +#: ../../source/ref-changelog.md:603 +msgid "None" +msgstr "Aucun" + +#: ../../source/ref-changelog.md:11 #, fuzzy +msgid "v1.8.0 (2024-04-03)" +msgstr "v1.3.0 (2023-02-06)" + +#: ../../source/ref-changelog.md:13 ../../source/ref-changelog.md:77 +#: ../../source/ref-changelog.md:170 ../../source/ref-changelog.md:270 +#: ../../source/ref-changelog.md:354 ../../source/ref-changelog.md:418 +#: ../../source/ref-changelog.md:476 ../../source/ref-changelog.md:545 +#: ../../source/ref-changelog.md:614 +msgid "Thanks to our contributors" +msgstr "Merci à nos contributeurs" + +#: ../../source/ref-changelog.md:15 ../../source/ref-changelog.md:79 +#: ../../source/ref-changelog.md:172 ../../source/ref-changelog.md:272 +#: ../../source/ref-changelog.md:356 ../../source/ref-changelog.md:420 +#: ../../source/ref-changelog.md:478 msgid "" -"FedBN ([#2608](https://github.com/adap/flower/pull/2608), " -"[#2615](https://github.com/adap/flower/pull/2615))" +"We would like to give our special thanks to all the contributors who made" +" the new version of Flower possible (in `git shortlog` order):" msgstr "" -"**Nouvelles stratégies intégrées** " -"([#828](https://github.com/adap/flower/pull/828) " -"[#822](https://github.com/adap/flower/pull/822))" +"Nous tenons à remercier tout particulièrement tous les contributeurs qui " +"ont rendu possible la nouvelle version de Flower (dans l'ordre `git " +"shortlog`) :" -#: ../../source/ref-changelog.md:180 -#, fuzzy +#: ../../source/ref-changelog.md:17 msgid "" -"**General updates to Flower Examples** " -"([#2384](https://github.com/adap/flower/pull/2384), " -"[#2425](https://github.com/adap/flower/pull/2425), " -"[#2526](https://github.com/adap/flower/pull/2526), 
" -"[#2302](https://github.com/adap/flower/pull/2302), " -"[#2545](https://github.com/adap/flower/pull/2545))" +"`Adam Narozniak`, `Charles Beauville`, `Daniel J. Beutel`, `Daniel Nata " +"Nugraha`, `Danny`, `Gustavo Bertoli`, `Heng Pan`, `Ikko Eltociear " +"Ashimine`, `Jack Cook`, `Javier`, `Raj Parekh`, `Robert Steiner`, " +"`Sebastian van der Voort`, `Taner Topal`, `Yan Gao`, `mohammadnaseri`, " +"`tabdar-khan` " msgstr "" -"Mettre à jour les outils de développement " -"([#1231](https://github.com/adap/flower/pull/1231), " -"[#1276](https://github.com/adap/flower/pull/1276), " -"[#1301](https://github.com/adap/flower/pull/1301), " -"[#1310](https://github.com/adap/flower/pull/1310))" -#: ../../source/ref-changelog.md:182 +#: ../../source/ref-changelog.md:21 +msgid "" +"**Introduce Flower Next high-level API (stable)** " +"([#3002](https://github.com/adap/flower/pull/3002), " +"[#2934](https://github.com/adap/flower/pull/2934), " +"[#2958](https://github.com/adap/flower/pull/2958), " +"[#3173](https://github.com/adap/flower/pull/3173), " +"[#3174](https://github.com/adap/flower/pull/3174), " +"[#2923](https://github.com/adap/flower/pull/2923), " +"[#2691](https://github.com/adap/flower/pull/2691), " +"[#3079](https://github.com/adap/flower/pull/3079), " +"[#2961](https://github.com/adap/flower/pull/2961), " +"[#2924](https://github.com/adap/flower/pull/2924), " +"[#3166](https://github.com/adap/flower/pull/3166), " +"[#3031](https://github.com/adap/flower/pull/3031), " +"[#3057](https://github.com/adap/flower/pull/3057), " +"[#3000](https://github.com/adap/flower/pull/3000), " +"[#3113](https://github.com/adap/flower/pull/3113), " +"[#2957](https://github.com/adap/flower/pull/2957), " +"[#3183](https://github.com/adap/flower/pull/3183), " +"[#3180](https://github.com/adap/flower/pull/3180), " +"[#3035](https://github.com/adap/flower/pull/3035), " +"[#3189](https://github.com/adap/flower/pull/3189), " +"[#3185](https://github.com/adap/flower/pull/3185), " +"[#3190](https://github.com/adap/flower/pull/3190), " +"[#3191](https://github.com/adap/flower/pull/3191), " +"[#3195](https://github.com/adap/flower/pull/3195), " +"[#3197](https://github.com/adap/flower/pull/3197))" +msgstr "" + +#: ../../source/ref-changelog.md:23 +msgid "" +"The Flower Next high-level API is stable! Flower Next is the future of " +"Flower - all new features (like Flower Mods) will be built on top of it. " +"You can start to migrate your existing projects to Flower Next by using " +"`ServerApp` and `ClientApp` (check out `quickstart-pytorch` or " +"`quickstart-tensorflow`, a detailed migration guide will follow shortly)." +" Flower Next allows you to run multiple projects concurrently (we call " +"this multi-run) and execute the same project in either simulation " +"environments or deployment environments without having to change a single" +" line of code. The best part? It's fully compatible with existing Flower " +"projects that use `Strategy`, `NumPyClient` & co." 
+msgstr "" + +#: ../../source/ref-changelog.md:25 #, fuzzy msgid "" -"**General updates to Flower Baselines** " -"([#2301](https://github.com/adap/flower/pull/2301), " -"[#2305](https://github.com/adap/flower/pull/2305), " -"[#2307](https://github.com/adap/flower/pull/2307), " -"[#2327](https://github.com/adap/flower/pull/2327), " -"[#2435](https://github.com/adap/flower/pull/2435), " -"[#2462](https://github.com/adap/flower/pull/2462), " -"[#2463](https://github.com/adap/flower/pull/2463), " -"[#2461](https://github.com/adap/flower/pull/2461), " -"[#2469](https://github.com/adap/flower/pull/2469), " -"[#2466](https://github.com/adap/flower/pull/2466), " -"[#2471](https://github.com/adap/flower/pull/2471), " -"[#2472](https://github.com/adap/flower/pull/2472), " -"[#2470](https://github.com/adap/flower/pull/2470))" +"**Introduce Flower Next low-level API (preview)** " +"([#3062](https://github.com/adap/flower/pull/3062), " +"[#3034](https://github.com/adap/flower/pull/3034), " +"[#3069](https://github.com/adap/flower/pull/3069))" msgstr "" -"**Améliorations générales** " -"([#1491](https://github.com/adap/flower/pull/1491), " -"[#1504](https://github.com/adap/flower/pull/1504), " -"[#1506](https://github.com/adap/flower/pull/1506), " -"[#1514](https://github.com/adap/flower/pull/1514), " -"[#1522](https://github.com/adap/flower/pull/1522), " -"[#1523](https://github.com/adap/flower/pull/1523), " -"[#1526](https://github.com/adap/flower/pull/1526), " -"[#1528](https://github.com/adap/flower/pull/1528), " -"[#1547](https://github.com/adap/flower/pull/1547), " -"[#1549](https://github.com/adap/flower/pull/1549), " -"[#1560](https://github.com/adap/flower/pull/1560), " -"[#1564](https://github.com/adap/flower/pull/1564), " -"[#1566](https://github.com/adap/flower/pull/1566))" +"**Mettre à jour les exemples de code** " +"([#1291](https://github.com/adap/flower/pull/1291), " +"[#1286](https://github.com/adap/flower/pull/1286), " +"[#1282](https://github.com/adap/flower/pull/1282))" -#: ../../source/ref-changelog.md:184 +#: ../../source/ref-changelog.md:27 +msgid "" +"In addition to the Flower Next *high-level* API that uses `Strategy`, " +"`NumPyClient` & co, Flower 1.8 also comes with a preview version of the " +"new Flower Next *low-level* API. The low-level API allows for granular " +"control of every aspect of the learning process by sending/receiving " +"individual messages to/from client nodes. The new `ServerApp` supports " +"registering a custom `main` function that allows writing custom training " +"loops for methods like async FL, cyclic training, or federated analytics." +" The new `ClientApp` supports registering `train`, `evaluate` and `query`" +" functions that can access the raw message received from the `ServerApp`." +" New abstractions like `RecordSet`, `Message` and `Context` further " +"enable sending multiple models, multiple sets of config values and " +"metrics, stateful computations on the client node and implementations of " +"custom SMPC protocols, to name just a few." 
+msgstr ""
+
+#: ../../source/ref-changelog.md:29
 #, fuzzy
 msgid ""
-"**Introduce Flower Mods (preview)** "
-"([#3054](https://github.com/adap/flower/pull/3054), "
-"[#2911](https://github.com/adap/flower/pull/2911), "
-"[#3083](https://github.com/adap/flower/pull/3083))"
msgstr ""
"**Introduire la télémétrie optionnelle** "
"([#1533](https://github.com/adap/flower/pull/1533), "
"[#1544](https://github.com/adap/flower/pull/1544), "
"[#1584](https://github.com/adap/flower/pull/1584))"

#: ../../source/ref-changelog.md:31
msgid ""
"Flower Modifiers (we call them Mods) can intercept messages and analyze, "
"edit or handle them directly. Mods can be used to develop pluggable "
"modules that work across different projects. Flower 1.8 already includes "
"mods to log the size of a message, the number of parameters sent over the"
" network, differential privacy with fixed clipping and adaptive clipping,"
" local differential privacy and secure aggregation protocols SecAgg and "
"SecAgg+. The Flower Mods API is released as a preview, but researchers "
"can already use it to experiment with arbitrary SMPC protocols."
msgstr ""

#: ../../source/ref-changelog.md:33
#, fuzzy
msgid ""
"**Fine-tune LLMs with LLM FlowerTune** "
"([#3029](https://github.com/adap/flower/pull/3029), "
"[#3089](https://github.com/adap/flower/pull/3089), "
"[#3092](https://github.com/adap/flower/pull/3092), "
"[#3100](https://github.com/adap/flower/pull/3100), "
"[#3114](https://github.com/adap/flower/pull/3114), "
"[#3162](https://github.com/adap/flower/pull/3162), "
"[#3172](https://github.com/adap/flower/pull/3172))"
msgstr ""
"**Tutoriel amélioré** ([#1468](https://github.com/adap/flower/pull/1468),"
" [#1470](https://github.com/adap/flower/pull/1470), "
@@ -15016,447 +15339,611 @@ msgstr ""
"[#1474](https://github.com/adap/flower/pull/1474), "
"[#1475](https://github.com/adap/flower/pull/1475))"

-#: ../../source/ref-changelog.md:188
+#: ../../source/ref-changelog.md:35
 msgid ""
-"**General improvements** "
-"([#2309](https://github.com/adap/flower/pull/2309), "
-"[#2310](https://github.com/adap/flower/pull/2310), "
-"[#2313](https://github.com/adap/flower/pull/2313), "
-"[#2316](https://github.com/adap/flower/pull/2316), "
-"[#2317](https://github.com/adap/flower/pull/2317), "
-"[#2349](https://github.com/adap/flower/pull/2349), "
-"[#2360](https://github.com/adap/flower/pull/2360), "
-"[#2402](https://github.com/adap/flower/pull/2402), "
-"[#2446](https://github.com/adap/flower/pull/2446), " 
-"[#2561](https://github.com/adap/flower/pull/2561), " -"[#2273](https://github.com/adap/flower/pull/2273), " -"[#2267](https://github.com/adap/flower/pull/2267), " -"[#2274](https://github.com/adap/flower/pull/2274), " -"[#2275](https://github.com/adap/flower/pull/2275), " -"[#2432](https://github.com/adap/flower/pull/2432), " -"[#2251](https://github.com/adap/flower/pull/2251), " -"[#2321](https://github.com/adap/flower/pull/2321), " -"[#1936](https://github.com/adap/flower/pull/1936), " -"[#2408](https://github.com/adap/flower/pull/2408), " -"[#2413](https://github.com/adap/flower/pull/2413), " -"[#2401](https://github.com/adap/flower/pull/2401), " -"[#2531](https://github.com/adap/flower/pull/2531), " -"[#2534](https://github.com/adap/flower/pull/2534), " -"[#2535](https://github.com/adap/flower/pull/2535), " -"[#2521](https://github.com/adap/flower/pull/2521), " -"[#2553](https://github.com/adap/flower/pull/2553), " -"[#2596](https://github.com/adap/flower/pull/2596))" +"We are introducing LLM FlowerTune, an introductory example that " +"demonstrates federated LLM fine-tuning of pre-trained Llama2 models on " +"the Alpaca-GPT4 dataset. The example is built to be easily adapted to use" +" different models and/or datasets. Read our blog post [LLM FlowerTune: " +"Federated LLM Fine-tuning with Flower](https://flower.ai/blog/2024-03-14" +"-llm-flowertune-federated-llm-finetuning-with-flower/) for more details." msgstr "" -#: ../../source/ref-changelog.md:190 ../../source/ref-changelog.md:280 -#: ../../source/ref-changelog.md:344 ../../source/ref-changelog.md:398 -#: ../../source/ref-changelog.md:465 -msgid "Flower received many improvements under the hood, too many to list here." +#: ../../source/ref-changelog.md:37 +#, fuzzy +msgid "" +"**Introduce built-in Differential Privacy (preview)** " +"([#2798](https://github.com/adap/flower/pull/2798), " +"[#2959](https://github.com/adap/flower/pull/2959), " +"[#3038](https://github.com/adap/flower/pull/3038), " +"[#3147](https://github.com/adap/flower/pull/3147), " +"[#2909](https://github.com/adap/flower/pull/2909), " +"[#2893](https://github.com/adap/flower/pull/2893), " +"[#2892](https://github.com/adap/flower/pull/2892), " +"[#3039](https://github.com/adap/flower/pull/3039), " +"[#3074](https://github.com/adap/flower/pull/3074))" msgstr "" -"Flower a reçu de nombreuses améliorations sous le capot, trop nombreuses " -"pour être énumérées ici." +"**([#842](https://github.com/adap/flower/pull/842), " +"[#844](https://github.com/adap/flower/pull/844), " +"[#845](https://github.com/adap/flower/pull/845), " +"[#847](https://github.com/adap/flower/pull/847), " +"[#993](https://github.com/adap/flower/pull/993), " +"[#994](https://github.com/adap/flower/pull/994))" -#: ../../source/ref-changelog.md:194 +#: ../../source/ref-changelog.md:39 +msgid "" +"Built-in Differential Privacy is here! Flower supports both central and " +"local differential privacy (DP). Central DP can be configured with either" +" fixed or adaptive clipping. The clipping can happen either on the " +"server-side or the client-side. Local DP does both clipping and noising " +"on the client-side. A new documentation page [explains Differential " +"Privacy approaches](https://flower.ai/docs/framework/explanation-" +"differential-privacy.html) and a new how-to guide describes [how to use " +"the new Differential Privacy components](https://flower.ai/docs/framework" +"/how-to-use-differential-privacy.html) in Flower." 
+msgstr ""
+
+#: ../../source/ref-changelog.md:41
 #, fuzzy
 msgid ""
-"**Remove support for Python 3.7** "
-"([#2280](https://github.com/adap/flower/pull/2280), "
-"[#2299](https://github.com/adap/flower/pull/2299), "
-"[#2304](https://github.com/adap/flower/pull/2304), "
-"[#2306](https://github.com/adap/flower/pull/2306), "
-"[#2355](https://github.com/adap/flower/pull/2355), "
-"[#2356](https://github.com/adap/flower/pull/2356))"
+"**Introduce built-in Secure Aggregation (preview)** "
+"([#3120](https://github.com/adap/flower/pull/3120), "
+"[#3110](https://github.com/adap/flower/pull/3110), "
+"[#3108](https://github.com/adap/flower/pull/3108))"
msgstr ""
-"**Nouvel exemple de code MLCube** "
-"([#779](https://github.com/adap/flower/pull/779), "
-"[#1034](https://github.com/adap/flower/pull/1034), "
-"[#1065](https://github.com/adap/flower/pull/1065), "
-"[#1090](https://github.com/adap/flower/pull/1090))"
+"**Introduire la télémétrie optionnelle** "
+"([#1533](https://github.com/adap/flower/pull/1533), "
+"[#1544](https://github.com/adap/flower/pull/1544), "
+"[#1584](https://github.com/adap/flower/pull/1584))"

-#: ../../source/ref-changelog.md:196
+#: ../../source/ref-changelog.md:43
 msgid ""
-"Python 3.7 support was deprecated in Flower 1.5, and this release removes"
-" support. Flower now requires Python 3.8."
+"Built-in Secure Aggregation is here! Flower now supports different secure"
+" aggregation protocols out-of-the-box. The best part? You can add secure "
+"aggregation to your Flower projects with only a few lines of code. In "
+"this initial release, we include support for SecAgg and SecAgg+, but more"
+" protocols will be implemented shortly. We'll also add detailed docs that"
+" explain secure aggregation and how to use it in Flower. You can already "
+"check out the new code example that shows how to use Flower to easily "
+"combine Federated Learning, Differential Privacy and Secure Aggregation "
+"in the same project."
 msgstr ""

-#: ../../source/ref-changelog.md:198
+#: ../../source/ref-changelog.md:45
 #, fuzzy
 msgid ""
-"**Remove experimental argument** `rest` **from** `start_client` "
-"([#2324](https://github.com/adap/flower/pull/2324))"
+"**Introduce** `flwr` **CLI (preview)** "
+"([#2942](https://github.com/adap/flower/pull/2942), "
+"[#3055](https://github.com/adap/flower/pull/3055), "
+"[#3111](https://github.com/adap/flower/pull/3111), "
+"[#3130](https://github.com/adap/flower/pull/3130), "
+"[#3136](https://github.com/adap/flower/pull/3136), "
+"[#3094](https://github.com/adap/flower/pull/3094), "
+"[#3059](https://github.com/adap/flower/pull/3059), "
+"[#3049](https://github.com/adap/flower/pull/3049), "
+"[#3142](https://github.com/adap/flower/pull/3142))"
msgstr ""
-"**Supprimer les stratégies expérimentales** "
-"([#1280](https://github.com/adap/flower/pull/1280))"
+"**Mise à jour de la documentation** "
+"([#1629](https://github.com/adap/flower/pull/1629), "
+"[#1628](https://github.com/adap/flower/pull/1628), "
+"[#1620](https://github.com/adap/flower/pull/1620), "
+"[#1618](https://github.com/adap/flower/pull/1618), "
+"[#1617](https://github.com/adap/flower/pull/1617), "
+"[#1613](https://github.com/adap/flower/pull/1613), "
+"[#1614](https://github.com/adap/flower/pull/1614))"

-#: ../../source/ref-changelog.md:200
+#: ../../source/ref-changelog.md:47
 msgid ""
-"The (still experimental) argument `rest` was removed from `start_client` "
-"and `start_numpy_client`. Use `transport=\"rest\"` to opt into the "
-"experimental REST API instead." 
+"A new `flwr` CLI command allows creating new Flower projects (`flwr new`)" +" and then running them using the Simulation Engine (`flwr run`)." msgstr "" -#: ../../source/ref-changelog.md:202 +#: ../../source/ref-changelog.md:49 #, fuzzy -msgid "v1.5.0 (2023-08-31)" -msgstr "v1.4.0 (2023-04-21)" - -#: ../../source/ref-changelog.md:208 msgid "" -"`Adam Narozniak`, `Anass Anhari`, `Charles Beauville`, `Dana-Farber`, " -"`Daniel J. Beutel`, `Daniel Nata Nugraha`, `Edoardo Gabrielli`, `Gustavo " -"Bertoli`, `Heng Pan`, `Javier`, `Mahdi`, `Steven Hé (Sīchàng)`, `Taner " -"Topal`, `achiverram28`, `danielnugraha`, `eunchung`, `ruthgal` " +"**Introduce Flower Next Simulation Engine** " +"([#3024](https://github.com/adap/flower/pull/3024), " +"[#3061](https://github.com/adap/flower/pull/3061), " +"[#2997](https://github.com/adap/flower/pull/2997), " +"[#2783](https://github.com/adap/flower/pull/2783), " +"[#3184](https://github.com/adap/flower/pull/3184), " +"[#3075](https://github.com/adap/flower/pull/3075), " +"[#3047](https://github.com/adap/flower/pull/3047), " +"[#2998](https://github.com/adap/flower/pull/2998), " +"[#3009](https://github.com/adap/flower/pull/3009), " +"[#3008](https://github.com/adap/flower/pull/3008))" msgstr "" +"**Introduire l'API REST (expérimentale)** " +"([#1594](https://github.com/adap/flower/pull/1594), " +"[#1690](https://github.com/adap/flower/pull/1690), " +"[#1695](https://github.com/adap/flower/pull/1695), " +"[#1712](https://github.com/adap/flower/pull/1712), " +"[#1802](https://github.com/adap/flower/pull/1802), " +"[#1770](https://github.com/adap/flower/pull/1770), " +"[#1733](https://github.com/adap/flower/pull/1733))" -#: ../../source/ref-changelog.md:212 -#, fuzzy +#: ../../source/ref-changelog.md:51 msgid "" -"**Introduce new simulation engine** " -"([#1969](https://github.com/adap/flower/pull/1969), " -"[#2221](https://github.com/adap/flower/pull/2221), " -"[#2248](https://github.com/adap/flower/pull/2248))" +"The Flower Simulation Engine can now run Flower Next projects. For " +"notebook environments, there's also a new `run_simulation` function that " +"can run `ServerApp` and `ClientApp`." msgstr "" -"**Introduire la télémétrie optionnelle** " -"([#1533](https://github.com/adap/flower/pull/1533), " -"[#1544](https://github.com/adap/flower/pull/1544), " -"[#1584](https://github.com/adap/flower/pull/1584))" -#: ../../source/ref-changelog.md:214 +#: ../../source/ref-changelog.md:53 +#, fuzzy msgid "" -"The new simulation engine has been rewritten from the ground up, yet it " -"remains fully backwards compatible. It offers much improved stability and" -" memory handling, especially when working with GPUs. Simulations " -"transparently adapt to different settings to scale simulation in CPU-" -"only, CPU+GPU, multi-GPU, or multi-node multi-GPU environments." 
+"**Handle SuperNode connection errors** " +"([#2969](https://github.com/adap/flower/pull/2969))" msgstr "" +"**Ajouter une nouvelle stratégie `FedProx`** " +"([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:216 +#: ../../source/ref-changelog.md:55 msgid "" -"Comprehensive documentation includes a new [how-to run " -"simulations](https://flower.ai/docs/framework/how-to-run-" -"simulations.html) guide, new [simulation-" -"pytorch](https://flower.ai/docs/examples/simulation-pytorch.html) and " -"[simulation-tensorflow](https://flower.ai/docs/examples/simulation-" -"tensorflow.html) notebooks, and a new [YouTube tutorial " -"series](https://www.youtube.com/watch?v=cRebUIGB5RU&list=PLNG4feLHqCWlnj8a_E1A_n5zr2-8pafTB)." +"A SuperNode will now try to reconnect indefinitely to the SuperLink in " +"case of connection errors. The arguments `--max-retries` and `--max-wait-" +"time` can now be passed to the `flower-client-app` command. `--max-" +"retries` will define the number of tentatives the client should make " +"before it gives up trying to reconnect to the SuperLink, and, `--max-" +"wait-time` defines the time before the SuperNode gives up trying to " +"reconnect to the SuperLink." msgstr "" -#: ../../source/ref-changelog.md:218 +#: ../../source/ref-changelog.md:57 +#, fuzzy msgid "" -"**Restructure Flower Docs** " -"([#1824](https://github.com/adap/flower/pull/1824), " -"[#1865](https://github.com/adap/flower/pull/1865), " -"[#1884](https://github.com/adap/flower/pull/1884), " -"[#1887](https://github.com/adap/flower/pull/1887), " -"[#1919](https://github.com/adap/flower/pull/1919), " -"[#1922](https://github.com/adap/flower/pull/1922), " -"[#1920](https://github.com/adap/flower/pull/1920), " -"[#1923](https://github.com/adap/flower/pull/1923), " -"[#1924](https://github.com/adap/flower/pull/1924), " -"[#1962](https://github.com/adap/flower/pull/1962), " -"[#2006](https://github.com/adap/flower/pull/2006), " -"[#2133](https://github.com/adap/flower/pull/2133), " -"[#2203](https://github.com/adap/flower/pull/2203), " -"[#2215](https://github.com/adap/flower/pull/2215), " -"[#2122](https://github.com/adap/flower/pull/2122), " -"[#2223](https://github.com/adap/flower/pull/2223), " -"[#2219](https://github.com/adap/flower/pull/2219), " -"[#2232](https://github.com/adap/flower/pull/2232), " -"[#2233](https://github.com/adap/flower/pull/2233), " -"[#2234](https://github.com/adap/flower/pull/2234), " -"[#2235](https://github.com/adap/flower/pull/2235), " -"[#2237](https://github.com/adap/flower/pull/2237), " -"[#2238](https://github.com/adap/flower/pull/2238), " -"[#2242](https://github.com/adap/flower/pull/2242), " -"[#2231](https://github.com/adap/flower/pull/2231), " -"[#2243](https://github.com/adap/flower/pull/2243), " -"[#2227](https://github.com/adap/flower/pull/2227))" +"**General updates to Flower Baselines** " +"([#2904](https://github.com/adap/flower/pull/2904), " +"[#2482](https://github.com/adap/flower/pull/2482), " +"[#2985](https://github.com/adap/flower/pull/2985), " +"[#2968](https://github.com/adap/flower/pull/2968))" msgstr "" +"**Introduire une nouvelle fleur Référence : FedProx MNIST** " +"([#1513](https://github.com/adap/flower/pull/1513), " +"[#1680](https://github.com/adap/flower/pull/1680), " +"[#1681](https://github.com/adap/flower/pull/1681), " +"[#1679](https://github.com/adap/flower/pull/1679))" -#: ../../source/ref-changelog.md:220 +#: ../../source/ref-changelog.md:59 msgid "" -"Much effort went into a completely restructured Flower 
docs experience. " -"The documentation on [flower.ai/docs](https://flower.ai/docs) is now " -"divided into Flower Framework, Flower Baselines, Flower Android SDK, " -"Flower iOS SDK, and code example projects." +"There's a new [FedStar](https://flower.ai/docs/baselines/fedstar.html) " +"baseline. Several other baselined have been updated as well." msgstr "" -#: ../../source/ref-changelog.md:222 -#, fuzzy +#: ../../source/ref-changelog.md:61 msgid "" -"**Introduce Flower Swift SDK** " -"([#1858](https://github.com/adap/flower/pull/1858), " -"[#1897](https://github.com/adap/flower/pull/1897))" +"**Improve documentation and translations** " +"([#3050](https://github.com/adap/flower/pull/3050), " +"[#3044](https://github.com/adap/flower/pull/3044), " +"[#3043](https://github.com/adap/flower/pull/3043), " +"[#2986](https://github.com/adap/flower/pull/2986), " +"[#3041](https://github.com/adap/flower/pull/3041), " +"[#3046](https://github.com/adap/flower/pull/3046), " +"[#3042](https://github.com/adap/flower/pull/3042), " +"[#2978](https://github.com/adap/flower/pull/2978), " +"[#2952](https://github.com/adap/flower/pull/2952), " +"[#3167](https://github.com/adap/flower/pull/3167), " +"[#2953](https://github.com/adap/flower/pull/2953), " +"[#3045](https://github.com/adap/flower/pull/3045), " +"[#2654](https://github.com/adap/flower/pull/2654), " +"[#3082](https://github.com/adap/flower/pull/3082), " +"[#2990](https://github.com/adap/flower/pull/2990), " +"[#2989](https://github.com/adap/flower/pull/2989))" msgstr "" -"**Introduction du SDK iOS (aperçu)** " -"([#1621](https://github.com/adap/flower/pull/1621), " -"[#1764](https://github.com/adap/flower/pull/1764))" -#: ../../source/ref-changelog.md:224 +#: ../../source/ref-changelog.md:63 msgid "" -"This is the first preview release of the Flower Swift SDK. Flower support" -" on iOS is improving, and alongside the Swift SDK and code example, there" -" is now also an iOS quickstart tutorial." +"As usual, we merged many smaller and larger improvements to the " +"documentation. A special thank you goes to [Sebastian van der " +"Voort](https://github.com/svdvoort) for landing a big documentation PR!" 
msgstr "" -#: ../../source/ref-changelog.md:226 +#: ../../source/ref-changelog.md:65 #, fuzzy msgid "" -"**Introduce Flower Android SDK** " -"([#2131](https://github.com/adap/flower/pull/2131))" +"**General updates to Flower Examples** " +"([3134](https://github.com/adap/flower/pull/3134), " +"[2996](https://github.com/adap/flower/pull/2996), " +"[2930](https://github.com/adap/flower/pull/2930), " +"[2967](https://github.com/adap/flower/pull/2967), " +"[2467](https://github.com/adap/flower/pull/2467), " +"[2910](https://github.com/adap/flower/pull/2910), " +"[#2918](https://github.com/adap/flower/pull/2918), " +"[#2773](https://github.com/adap/flower/pull/2773), " +"[#3063](https://github.com/adap/flower/pull/3063), " +"[#3116](https://github.com/adap/flower/pull/3116), " +"[#3117](https://github.com/adap/flower/pull/3117))" msgstr "" -"**Introduire une nouvelle ligne de base pour les fleurs : FedAvg " -"FEMNIST** ([#1655](https://github.com/adap/flower/pull/1655))" +"**Documentation mise à jour** " +"([#1494](https://github.com/adap/flower/pull/1494), " +"[#1496](https://github.com/adap/flower/pull/1496), " +"[#1500](https://github.com/adap/flower/pull/1500), " +"[#1503](https://github.com/adap/flower/pull/1503), " +"[#1505](https://github.com/adap/flower/pull/1505), " +"[#1524](https://github.com/adap/flower/pull/1524), " +"[#1518](https://github.com/adap/flower/pull/1518), " +"[#1519](https://github.com/adap/flower/pull/1519), " +"[#1515](https://github.com/adap/flower/pull/1515))" -#: ../../source/ref-changelog.md:228 +#: ../../source/ref-changelog.md:67 msgid "" -"This is the first preview release of the Flower Kotlin SDK. Flower " -"support on Android is improving, and alongside the Kotlin SDK and code " -"example, there is now also an Android quickstart tutorial." +"Two new examples show federated training of a Vision Transformer (ViT) " +"and federated learning in a medical context using the popular MONAI " +"library. `quickstart-pytorch` and `quickstart-tensorflow` demonstrate the" +" new Flower Next `ServerApp` and `ClientApp`. Many other examples " +"received considerable updates as well." 
msgstr "" -#: ../../source/ref-changelog.md:230 -#, fuzzy +#: ../../source/ref-changelog.md:69 msgid "" -"**Introduce new end-to-end testing infrastructure** " -"([#1842](https://github.com/adap/flower/pull/1842), " -"[#2071](https://github.com/adap/flower/pull/2071), " -"[#2072](https://github.com/adap/flower/pull/2072), " -"[#2068](https://github.com/adap/flower/pull/2068), " -"[#2067](https://github.com/adap/flower/pull/2067), " -"[#2069](https://github.com/adap/flower/pull/2069), " -"[#2073](https://github.com/adap/flower/pull/2073), " -"[#2070](https://github.com/adap/flower/pull/2070), " -"[#2074](https://github.com/adap/flower/pull/2074), " -"[#2082](https://github.com/adap/flower/pull/2082), " -"[#2084](https://github.com/adap/flower/pull/2084), " -"[#2093](https://github.com/adap/flower/pull/2093), " -"[#2109](https://github.com/adap/flower/pull/2109), " -"[#2095](https://github.com/adap/flower/pull/2095), " -"[#2140](https://github.com/adap/flower/pull/2140), " -"[#2137](https://github.com/adap/flower/pull/2137), " -"[#2165](https://github.com/adap/flower/pull/2165))" -msgstr "" -"**Améliorer l'API (expérimentale) du pilote** " -"([#1663](https://github.com/adap/flower/pull/1663), " -"[#1666](https://github.com/adap/flower/pull/1666), " -"[#1667](https://github.com/adap/flower/pull/1667), " -"[#1664](https://github.com/adap/flower/pull/1664), " -"[#1675](https://github.com/adap/flower/pull/1675), " -"[#1676](https://github.com/adap/flower/pull/1676), " -"[#1693](https://github.com/adap/flower/pull/1693), " -"[#1662](https://github.com/adap/flower/pull/1662), " -"[#1794](https://github.com/adap/flower/pull/1794))" +"**General improvements** " +"([#3171](https://github.com/adap/flower/pull/3171), " +"[3099](https://github.com/adap/flower/pull/3099), " +"[3003](https://github.com/adap/flower/pull/3003), " +"[3145](https://github.com/adap/flower/pull/3145), " +"[3017](https://github.com/adap/flower/pull/3017), " +"[3085](https://github.com/adap/flower/pull/3085), " +"[3012](https://github.com/adap/flower/pull/3012), " +"[3119](https://github.com/adap/flower/pull/3119), " +"[2991](https://github.com/adap/flower/pull/2991), " +"[2970](https://github.com/adap/flower/pull/2970), " +"[2980](https://github.com/adap/flower/pull/2980), " +"[3086](https://github.com/adap/flower/pull/3086), " +"[2932](https://github.com/adap/flower/pull/2932), " +"[2928](https://github.com/adap/flower/pull/2928), " +"[2941](https://github.com/adap/flower/pull/2941), " +"[2933](https://github.com/adap/flower/pull/2933), " +"[3181](https://github.com/adap/flower/pull/3181), " +"[2973](https://github.com/adap/flower/pull/2973), " +"[2992](https://github.com/adap/flower/pull/2992), " +"[2915](https://github.com/adap/flower/pull/2915), " +"[3040](https://github.com/adap/flower/pull/3040), " +"[3022](https://github.com/adap/flower/pull/3022), " +"[3032](https://github.com/adap/flower/pull/3032), " +"[2902](https://github.com/adap/flower/pull/2902), " +"[2931](https://github.com/adap/flower/pull/2931), " +"[3005](https://github.com/adap/flower/pull/3005), " +"[3132](https://github.com/adap/flower/pull/3132), " +"[3115](https://github.com/adap/flower/pull/3115), " +"[2944](https://github.com/adap/flower/pull/2944), " +"[3064](https://github.com/adap/flower/pull/3064), " +"[3106](https://github.com/adap/flower/pull/3106), " +"[2974](https://github.com/adap/flower/pull/2974), " +"[3178](https://github.com/adap/flower/pull/3178), " +"[2993](https://github.com/adap/flower/pull/2993), " 
+"[3186](https://github.com/adap/flower/pull/3186), " +"[3091](https://github.com/adap/flower/pull/3091), " +"[3125](https://github.com/adap/flower/pull/3125), " +"[3093](https://github.com/adap/flower/pull/3093), " +"[3013](https://github.com/adap/flower/pull/3013), " +"[3033](https://github.com/adap/flower/pull/3033), " +"[3133](https://github.com/adap/flower/pull/3133), " +"[3068](https://github.com/adap/flower/pull/3068), " +"[2916](https://github.com/adap/flower/pull/2916), " +"[2975](https://github.com/adap/flower/pull/2975), " +"[2984](https://github.com/adap/flower/pull/2984), " +"[2846](https://github.com/adap/flower/pull/2846), " +"[3077](https://github.com/adap/flower/pull/3077), " +"[3143](https://github.com/adap/flower/pull/3143), " +"[2921](https://github.com/adap/flower/pull/2921), " +"[3101](https://github.com/adap/flower/pull/3101), " +"[2927](https://github.com/adap/flower/pull/2927), " +"[2995](https://github.com/adap/flower/pull/2995), " +"[2972](https://github.com/adap/flower/pull/2972), " +"[2912](https://github.com/adap/flower/pull/2912), " +"[3065](https://github.com/adap/flower/pull/3065), " +"[3028](https://github.com/adap/flower/pull/3028), " +"[2922](https://github.com/adap/flower/pull/2922), " +"[2982](https://github.com/adap/flower/pull/2982), " +"[2914](https://github.com/adap/flower/pull/2914), " +"[3179](https://github.com/adap/flower/pull/3179), " +"[3080](https://github.com/adap/flower/pull/3080), " +"[2994](https://github.com/adap/flower/pull/2994), " +"[3187](https://github.com/adap/flower/pull/3187), " +"[2926](https://github.com/adap/flower/pull/2926), " +"[3018](https://github.com/adap/flower/pull/3018), " +"[3144](https://github.com/adap/flower/pull/3144), " +"[3011](https://github.com/adap/flower/pull/3011), " +"[#3152](https://github.com/adap/flower/pull/3152), " +"[#2836](https://github.com/adap/flower/pull/2836), " +"[#2929](https://github.com/adap/flower/pull/2929), " +"[#2943](https://github.com/adap/flower/pull/2943), " +"[#2955](https://github.com/adap/flower/pull/2955), " +"[#2954](https://github.com/adap/flower/pull/2954))" +msgstr "" + +#: ../../source/ref-changelog.md:75 +#, fuzzy +msgid "v1.7.0 (2024-02-05)" +msgstr "v1.3.0 (2023-02-06)" -#: ../../source/ref-changelog.md:232 +#: ../../source/ref-changelog.md:81 msgid "" -"A new testing infrastructure ensures that new changes stay compatible " -"with existing framework integrations or strategies." +"`Aasheesh Singh`, `Adam Narozniak`, `Aml Hassan Esmil`, `Charles " +"Beauville`, `Daniel J. 
Beutel`, `Daniel Nata Nugraha`, `Edoardo " +"Gabrielli`, `Gustavo Bertoli`, `HelinLin`, `Heng Pan`, `Javier`, `M S " +"Chaitanya Kumar`, `Mohammad Naseri`, `Nikos Vlachakis`, `Pritam Neog`, " +"`Robert Kuska`, `Robert Steiner`, `Taner Topal`, `Yahia Salaheldin " +"Shaaban`, `Yan Gao`, `Yasar Abbas` " msgstr "" -#: ../../source/ref-changelog.md:234 +#: ../../source/ref-changelog.md:85 #, fuzzy -msgid "**Deprecate Python 3.7**" -msgstr "**Créer le PR**" +msgid "" +"**Introduce stateful clients (experimental)** " +"([#2770](https://github.com/adap/flower/pull/2770), " +"[#2686](https://github.com/adap/flower/pull/2686), " +"[#2696](https://github.com/adap/flower/pull/2696), " +"[#2643](https://github.com/adap/flower/pull/2643), " +"[#2769](https://github.com/adap/flower/pull/2769))" +msgstr "" +"**([#842](https://github.com/adap/flower/pull/842), " +"[#844](https://github.com/adap/flower/pull/844), " +"[#845](https://github.com/adap/flower/pull/845), " +"[#847](https://github.com/adap/flower/pull/847), " +"[#993](https://github.com/adap/flower/pull/993), " +"[#994](https://github.com/adap/flower/pull/994))" -#: ../../source/ref-changelog.md:236 +#: ../../source/ref-changelog.md:87 msgid "" -"Since Python 3.7 reached its end of life (EOL) on 2023-06-27, support for" -" Python 3.7 is now deprecated and will be removed in an upcoming release." +"Subclasses of `Client` and `NumPyClient` can now store local state that " +"remains on the client. Let's start with the highlight first: this new " +"feature is compatible with both simulated clients (via " +"`start_simulation`) and networked clients (via `start_client`). It's also" +" the first preview of new abstractions like `Context` and `RecordSet`. " +"Clients can access state of type `RecordSet` via `state: RecordSet = " +"self.context.state`. Changes to this `RecordSet` are preserved across " +"different rounds of execution to enable stateful computations in a " +"unified way across simulation and deployment." msgstr "" -#: ../../source/ref-changelog.md:238 +#: ../../source/ref-changelog.md:89 #, fuzzy msgid "" -"**Add new** `FedTrimmedAvg` **strategy** " -"([#1769](https://github.com/adap/flower/pull/1769), " -"[#1853](https://github.com/adap/flower/pull/1853))" +"**Improve performance** " +"([#2293](https://github.com/adap/flower/pull/2293))" msgstr "" -"**Ajouter un nouvel exemple de Federated Analytics avec Pandas** " -"([#1469](https://github.com/adap/flower/pull/1469), " -"[#1535](https://github.com/adap/flower/pull/1535))" +"**Supprimer les stratégies expérimentales** " +"([#1280](https://github.com/adap/flower/pull/1280))" -#: ../../source/ref-changelog.md:240 -#, fuzzy +#: ../../source/ref-changelog.md:91 msgid "" -"The new `FedTrimmedAvg` strategy implements Trimmed Mean by [Dong Yin, " -"2018](https://arxiv.org/abs/1803.01498)." +"Flower is faster than ever. All `FedAvg`-derived strategies now use in-" +"place aggregation to reduce memory consumption. The Flower client " +"serialization/deserialization has been rewritten from the ground up, " +"which results in significant speedups, especially when the client-side " +"training time is short." msgstr "" -"La nouvelle stratégie `FedMedian` met en œuvre Federated Median " -"(FedMedian) par [Yin et al., 2018] " -"(https://arxiv.org/pdf/1803.01498v1.pdf)." 
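The stateful-clients entry above states that a client can read its persistent state via `state: RecordSet = self.context.state`, and that changes to it survive across rounds in both simulation and deployment. A minimal sketch of that idea follows; the `StatefulClient` class name, the no-op training step, and the omission of any record-writing calls are assumptions made here, since the exact `RecordSet` records API varies between Flower versions.

```python
# Minimal sketch (assumptions flagged above): a NumPyClient that reads the
# per-client state documented in the stateful-clients entry. Writing values
# into the RecordSet is intentionally left out because that part of the API
# is version-specific.
import flwr as fl


class StatefulClient(fl.client.NumPyClient):
    def fit(self, parameters, config):
        state = self.context.state  # RecordSet preserved across rounds
        print(f"State object available to this client: {type(state).__name__}")
        # ... local training would go here, updating entries in `state` ...
        return parameters, 0, {}
```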
-#: ../../source/ref-changelog.md:242 +#: ../../source/ref-changelog.md:93 #, fuzzy msgid "" -"**Introduce start_driver** " -"([#1697](https://github.com/adap/flower/pull/1697))" +"**Support Federated Learning with Apple MLX and Flower** " +"([#2693](https://github.com/adap/flower/pull/2693))" msgstr "" -"**Ajouter une nouvelle stratégie `FedProx`** " -"([#1619](https://github.com/adap/flower/pull/1619))" +"**Ajouter un nouvel exemple d'apprentissage fédéré utilisant fastai et " +"Flower** ([#1598](https://github.com/adap/flower/pull/1598))" -#: ../../source/ref-changelog.md:244 +#: ../../source/ref-changelog.md:95 msgid "" -"In addition to `start_server` and using the raw Driver API, there is a " -"new `start_driver` function that allows for running `start_server` " -"scripts as a Flower driver with only a single-line code change. Check out" -" the `mt-pytorch` code example to see a working example using " -"`start_driver`." +"Flower has official support for federated learning using [Apple " +"MLX](https://ml-explore.github.io/mlx) via the new `quickstart-mlx` code " +"example." msgstr "" -#: ../../source/ref-changelog.md:246 +#: ../../source/ref-changelog.md:97 #, fuzzy msgid "" -"**Add parameter aggregation to** `mt-pytorch` **code example** " -"([#1785](https://github.com/adap/flower/pull/1785))" +"**Introduce new XGBoost cyclic strategy** " +"([#2666](https://github.com/adap/flower/pull/2666), " +"[#2668](https://github.com/adap/flower/pull/2668))" msgstr "" -"**Nouvel exemple de code PyTorch avancé** " -"([#1007](https://github.com/adap/flower/pull/1007))" +"**Introduction du SDK iOS (aperçu)** " +"([#1621](https://github.com/adap/flower/pull/1621), " +"[#1764](https://github.com/adap/flower/pull/1764))" -#: ../../source/ref-changelog.md:248 +#: ../../source/ref-changelog.md:99 msgid "" -"The `mt-pytorch` example shows how to aggregate parameters when writing a" -" driver script. The included `driver.py` and `server.py` have been " -"aligned to demonstrate both the low-level way and the high-level way of " -"building server-side logic." +"A new strategy called `FedXgbCyclic` supports a client-by-client style of" +" training (often called cyclic). The `xgboost-comprehensive` code example" +" shows how to use it in a full project. In addition to that, `xgboost-" +"comprehensive` now also supports simulation mode. With this, Flower " +"offers best-in-class XGBoost support." msgstr "" -#: ../../source/ref-changelog.md:250 +#: ../../source/ref-changelog.md:101 #, fuzzy msgid "" -"**Migrate experimental REST API to Starlette** " -"([2171](https://github.com/adap/flower/pull/2171))" +"**Support Python 3.11** " +"([#2394](https://github.com/adap/flower/pull/2394))" msgstr "" -"**Nouvelle stratégie expérimentale TensorBoard** " -"([#789](https://github.com/adap/flower/pull/789))" +"**Support Python 3.10** " +"([#1320](https://github.com/adap/flower/pull/1320))" -#: ../../source/ref-changelog.md:252 +#: ../../source/ref-changelog.md:103 msgid "" -"The (experimental) REST API used to be implemented in " -"[FastAPI](https://fastapi.tiangolo.com/), but it has now been migrated to" -" use [Starlette](https://www.starlette.io/) directly." +"Framework tests now run on Python 3.8, 3.9, 3.10, and 3.11. This will " +"ensure better support for users using more recent Python versions." 
msgstr "" -#: ../../source/ref-changelog.md:254 +#: ../../source/ref-changelog.md:105 #, fuzzy msgid "" -"Please note: The REST request-response API is still experimental and will" -" likely change significantly over time." +"**Update gRPC and ProtoBuf dependencies** " +"([#2814](https://github.com/adap/flower/pull/2814))" msgstr "" -"Remarque : l'API REST est encore expérimentale et est susceptible de " -"changer de manière significative au fil du temps." +"**Ajouter une nouvelle stratégie `FedProx`** " +"([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:256 -#, fuzzy +#: ../../source/ref-changelog.md:107 msgid "" -"**Introduce experimental gRPC request-response API** " -"([#1867](https://github.com/adap/flower/pull/1867), " -"[#1901](https://github.com/adap/flower/pull/1901))" +"The `grpcio` and `protobuf` dependencies were updated to their latest " +"versions for improved security and performance." msgstr "" -"**Introduire les enveloppes de confidentialité différentielle (aperçu)** " -"([#1357](https://github.com/adap/flower/pull/1357), " -"[#1460](https://github.com/adap/flower/pull/1460))" -#: ../../source/ref-changelog.md:258 +#: ../../source/ref-changelog.md:109 +#, fuzzy msgid "" -"In addition to the existing gRPC API (based on bidirectional streaming) " -"and the experimental REST API, there is now a new gRPC API that uses a " -"request-response model to communicate with client nodes." +"**Introduce Docker image for Flower server** " +"([#2700](https://github.com/adap/flower/pull/2700), " +"[#2688](https://github.com/adap/flower/pull/2688), " +"[#2705](https://github.com/adap/flower/pull/2705), " +"[#2695](https://github.com/adap/flower/pull/2695), " +"[#2747](https://github.com/adap/flower/pull/2747), " +"[#2746](https://github.com/adap/flower/pull/2746), " +"[#2680](https://github.com/adap/flower/pull/2680), " +"[#2682](https://github.com/adap/flower/pull/2682), " +"[#2701](https://github.com/adap/flower/pull/2701))" msgstr "" +"**([#842](https://github.com/adap/flower/pull/842), " +"[#844](https://github.com/adap/flower/pull/844), " +"[#845](https://github.com/adap/flower/pull/845), " +"[#847](https://github.com/adap/flower/pull/847), " +"[#993](https://github.com/adap/flower/pull/993), " +"[#994](https://github.com/adap/flower/pull/994))" -#: ../../source/ref-changelog.md:260 -#, fuzzy +#: ../../source/ref-changelog.md:111 msgid "" -"Please note: The gRPC request-response API is still experimental and will" -" likely change significantly over time." +"The Flower server can now be run using an official Docker image. A new " +"how-to guide explains [how to run Flower using " +"Docker](https://flower.ai/docs/framework/how-to-run-flower-using-" +"docker.html). An official Flower client Docker image will follow." msgstr "" -"Remarque : l'API REST est encore expérimentale et est susceptible de " -"changer de manière significative au fil du temps." 
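The XGBoost entry above introduces the `FedXgbCyclic` strategy for client-by-client (cyclic) training. The sketch below is illustrative only: the import path, the assumption that the strategy is constructible with default arguments, and the address and round count are all assumptions, not taken from the changelog; the `xgboost-comprehensive` example shows a complete setup.

```python
# Sketch only: start a Flower server with the cyclic XGBoost strategy.
# Assumptions: `FedXgbCyclic` is exported from `flwr.server.strategy` and can
# be constructed with defaults; the address and round count are placeholders.
import flwr as fl
from flwr.server.strategy import FedXgbCyclic

strategy = FedXgbCyclic()  # clients are visited one by one in each round

fl.server.start_server(
    server_address="0.0.0.0:8080",                # placeholder address
    config=fl.server.ServerConfig(num_rounds=3),  # placeholder round count
    strategy=strategy,
)
```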
-#: ../../source/ref-changelog.md:262 +#: ../../source/ref-changelog.md:113 #, fuzzy msgid "" -"**Replace the experimental** `start_client(rest=True)` **with the new** " -"`start_client(transport=\"rest\")` " -"([#1880](https://github.com/adap/flower/pull/1880))" +"**Introduce** `flower-via-docker-compose` **example** " +"([#2626](https://github.com/adap/flower/pull/2626))" msgstr "" -"**Initialise** `start_simulation` **avec une liste d'ID de clients** " -"([#860](https://github.com/adap/flower/pull/860))" +"**Introduire une nouvelle ligne de base pour les fleurs : FedAvg " +"FEMNIST** ([#1655](https://github.com/adap/flower/pull/1655))" -#: ../../source/ref-changelog.md:264 +#: ../../source/ref-changelog.md:115 +#, fuzzy msgid "" -"The (experimental) `start_client` argument `rest` was deprecated in " -"favour of a new argument `transport`. `start_client(transport=\"rest\")` " -"will yield the same behaviour as `start_client(rest=True)` did before. " -"All code should migrate to the new argument `transport`. The deprecated " -"argument `rest` will be removed in a future release." +"**Introduce** `quickstart-sklearn-tabular` **example** " +"([#2719](https://github.com/adap/flower/pull/2719))" msgstr "" +"**Ajouter une nouvelle stratégie `FedProx`** " +"([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:266 +#: ../../source/ref-changelog.md:117 #, fuzzy msgid "" -"**Add a new gRPC option** " -"([#2197](https://github.com/adap/flower/pull/2197))" +"**Introduce** `custom-metrics` **example** " +"([#1958](https://github.com/adap/flower/pull/1958))" msgstr "" "**Ajouter une nouvelle stratégie `FedProx`** " "([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:268 +#: ../../source/ref-changelog.md:119 +#, fuzzy msgid "" -"We now start a gRPC server with the `grpc.keepalive_permit_without_calls`" -" option set to 0 by default. This prevents the clients from sending " -"keepalive pings when there is no outstanding stream." +"**Update code examples to use Flower Datasets** " +"([#2450](https://github.com/adap/flower/pull/2450), " +"[#2456](https://github.com/adap/flower/pull/2456), " +"[#2318](https://github.com/adap/flower/pull/2318), " +"[#2712](https://github.com/adap/flower/pull/2712))" +msgstr "" +"Mettre à jour les outils de développement " +"([#1231](https://github.com/adap/flower/pull/1231), " +"[#1276](https://github.com/adap/flower/pull/1276), " +"[#1301](https://github.com/adap/flower/pull/1301), " +"[#1310](https://github.com/adap/flower/pull/1310))" + +#: ../../source/ref-changelog.md:121 +msgid "" +"Several code examples were updated to use [Flower " +"Datasets](https://flower.ai/docs/datasets/)." 
msgstr "" -#: ../../source/ref-changelog.md:270 +#: ../../source/ref-changelog.md:123 #, fuzzy msgid "" -"**Improve example notebooks** " -"([#2005](https://github.com/adap/flower/pull/2005))" +"**General updates to Flower Examples** " +"([#2381](https://github.com/adap/flower/pull/2381), " +"[#2805](https://github.com/adap/flower/pull/2805), " +"[#2782](https://github.com/adap/flower/pull/2782), " +"[#2806](https://github.com/adap/flower/pull/2806), " +"[#2829](https://github.com/adap/flower/pull/2829), " +"[#2825](https://github.com/adap/flower/pull/2825), " +"[#2816](https://github.com/adap/flower/pull/2816), " +"[#2726](https://github.com/adap/flower/pull/2726), " +"[#2659](https://github.com/adap/flower/pull/2659), " +"[#2655](https://github.com/adap/flower/pull/2655))" msgstr "" -"**Supprimer les stratégies expérimentales** " -"([#1280](https://github.com/adap/flower/pull/1280))" +"**Améliorer l'API (expérimentale) du pilote** " +"([#1663](https://github.com/adap/flower/pull/1663), " +"[#1666](https://github.com/adap/flower/pull/1666), " +"[#1667](https://github.com/adap/flower/pull/1667), " +"[#1664](https://github.com/adap/flower/pull/1664), " +"[#1675](https://github.com/adap/flower/pull/1675), " +"[#1676](https://github.com/adap/flower/pull/1676), " +"[#1693](https://github.com/adap/flower/pull/1693), " +"[#1662](https://github.com/adap/flower/pull/1662), " +"[#1794](https://github.com/adap/flower/pull/1794))" -#: ../../source/ref-changelog.md:272 +#: ../../source/ref-changelog.md:125 +msgid "Many Flower code examples received substantial updates." +msgstr "" + +#: ../../source/ref-changelog.md:127 ../../source/ref-changelog.md:220 #, fuzzy -msgid "There's a new 30min Federated Learning PyTorch tutorial!" -msgstr "Bienvenue au tutoriel sur l'apprentissage fédéré de la fleur !" +msgid "**Update Flower Baselines**" +msgstr "Demande pour une nouvelle Flower Baseline" -#: ../../source/ref-changelog.md:274 +#: ../../source/ref-changelog.md:129 +#, fuzzy msgid "" -"**Example updates** ([#1772](https://github.com/adap/flower/pull/1772), " -"[#1873](https://github.com/adap/flower/pull/1873), " -"[#1981](https://github.com/adap/flower/pull/1981), " -"[#1988](https://github.com/adap/flower/pull/1988), " -"[#1984](https://github.com/adap/flower/pull/1984), " -"[#1982](https://github.com/adap/flower/pull/1982), " -"[#2112](https://github.com/adap/flower/pull/2112), " -"[#2144](https://github.com/adap/flower/pull/2144), " -"[#2174](https://github.com/adap/flower/pull/2174), " -"[#2225](https://github.com/adap/flower/pull/2225), " -"[#2183](https://github.com/adap/flower/pull/2183))" +"HFedXGBoost ([#2226](https://github.com/adap/flower/pull/2226), " +"[#2771](https://github.com/adap/flower/pull/2771))" +msgstr "" +"**Nouvel exemple de code JAX** " +"([#906](https://github.com/adap/flower/pull/906), " +"[#1143](https://github.com/adap/flower/pull/1143))" + +#: ../../source/ref-changelog.md:130 +#, fuzzy +msgid "FedVSSL ([#2412](https://github.com/adap/flower/pull/2412))" msgstr "" +"Amélioration de la documentation sur le serveur gRPC " +"([#841](https://github.com/adap/flower/pull/841))" -#: ../../source/ref-changelog.md:276 -msgid "" -"Many examples have received significant updates, including simplified " -"advanced-tensorflow and advanced-pytorch examples, improved macOS " -"compatibility of TensorFlow examples, and code examples for simulation. A" -" major upgrade is that all code examples now have a `requirements.txt` " -"(in addition to `pyproject.toml`)." 
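The entry above on updating code examples to use Flower Datasets does not show the library itself, so here is a small sketch following the Flower Datasets quickstart API; the dataset name, partition count, and split size are illustrative choices, not taken from the changelog.

```python
# Sketch of the Flower Datasets usage referenced above; "cifar10" and the
# 10-partition split are illustrative choices.
from flwr_datasets import FederatedDataset

fds = FederatedDataset(dataset="cifar10", partitioners={"train": 10})
partition = fds.load_partition(0, "train")          # data for client 0
split = partition.train_test_split(test_size=0.2)   # local train/test split
print(split["train"].num_rows, split["test"].num_rows)
```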
+#: ../../source/ref-changelog.md:131 +#, fuzzy +msgid "FedNova ([#2179](https://github.com/adap/flower/pull/2179))" msgstr "" +"**Ajouter une nouvelle stratégie `FedProx`** " +"([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:278 +#: ../../source/ref-changelog.md:132 +#, fuzzy +msgid "HeteroFL ([#2439](https://github.com/adap/flower/pull/2439))" +msgstr "Nouvelle référence API ([#554](https://github.com/adap/flower/pull/554))" + +#: ../../source/ref-changelog.md:133 +#, fuzzy +msgid "FedAvgM ([#2246](https://github.com/adap/flower/pull/2246))" +msgstr "Nouvelle référence API ([#554](https://github.com/adap/flower/pull/554))" + +#: ../../source/ref-changelog.md:134 +#, fuzzy +msgid "FedPara ([#2722](https://github.com/adap/flower/pull/2722))" +msgstr "" +"**Renommé stratégie q-FedAvg** " +"([#802](https://github.com/adap/flower/pull/802))" + +#: ../../source/ref-changelog.md:136 #, fuzzy msgid "" -"**General improvements** " -"([#1872](https://github.com/adap/flower/pull/1872), " -"[#1866](https://github.com/adap/flower/pull/1866), " -"[#1884](https://github.com/adap/flower/pull/1884), " -"[#1837](https://github.com/adap/flower/pull/1837), " -"[#1477](https://github.com/adap/flower/pull/1477), " -"[#2171](https://github.com/adap/flower/pull/2171))" +"**Improve documentation** " +"([#2674](https://github.com/adap/flower/pull/2674), " +"[#2480](https://github.com/adap/flower/pull/2480), " +"[#2826](https://github.com/adap/flower/pull/2826), " +"[#2727](https://github.com/adap/flower/pull/2727), " +"[#2761](https://github.com/adap/flower/pull/2761), " +"[#2900](https://github.com/adap/flower/pull/2900))" msgstr "" "**Mise à jour de la documentation** " "([#1629](https://github.com/adap/flower/pull/1629), " @@ -15467,8675 +15954,9466 @@ msgstr "" "[#1613](https://github.com/adap/flower/pull/1613), " "[#1614](https://github.com/adap/flower/pull/1614))" -#: ../../source/ref-changelog.md:284 ../../source/ref-changelog.md:348 -#: ../../source/ref-changelog.md:406 ../../source/ref-changelog.md:475 -#: ../../source/ref-changelog.md:537 -msgid "None" -msgstr "Aucun" - -#: ../../source/ref-changelog.md:286 -msgid "v1.4.0 (2023-04-21)" -msgstr "v1.4.0 (2023-04-21)" - -#: ../../source/ref-changelog.md:292 +#: ../../source/ref-changelog.md:138 msgid "" -"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " -"`Chenyang Ma (Danny)`, `Daniel J. Beutel`, `Edoardo`, `Gautam Jajoo`, " -"`Iacob-Alexandru-Andrei`, `JDRanpariya`, `Jean Charle Yaacoub`, `Kunal " -"Sarkhel`, `L. 
Jiang`, `Lennart Behme`, `Max Kapsecker`, `Michał`, `Nic " -"Lane`, `Nikolaos Episkopos`, `Ragy`, `Saurav Maheshkar`, `Semo Yang`, " -"`Steve Laskaridis`, `Steven Hé (Sīchàng)`, `Taner Topal`" +"**Improved testing and development infrastructure** " +"([#2797](https://github.com/adap/flower/pull/2797), " +"[#2676](https://github.com/adap/flower/pull/2676), " +"[#2644](https://github.com/adap/flower/pull/2644), " +"[#2656](https://github.com/adap/flower/pull/2656), " +"[#2848](https://github.com/adap/flower/pull/2848), " +"[#2675](https://github.com/adap/flower/pull/2675), " +"[#2735](https://github.com/adap/flower/pull/2735), " +"[#2767](https://github.com/adap/flower/pull/2767), " +"[#2732](https://github.com/adap/flower/pull/2732), " +"[#2744](https://github.com/adap/flower/pull/2744), " +"[#2681](https://github.com/adap/flower/pull/2681), " +"[#2699](https://github.com/adap/flower/pull/2699), " +"[#2745](https://github.com/adap/flower/pull/2745), " +"[#2734](https://github.com/adap/flower/pull/2734), " +"[#2731](https://github.com/adap/flower/pull/2731), " +"[#2652](https://github.com/adap/flower/pull/2652), " +"[#2720](https://github.com/adap/flower/pull/2720), " +"[#2721](https://github.com/adap/flower/pull/2721), " +"[#2717](https://github.com/adap/flower/pull/2717), " +"[#2864](https://github.com/adap/flower/pull/2864), " +"[#2694](https://github.com/adap/flower/pull/2694), " +"[#2709](https://github.com/adap/flower/pull/2709), " +"[#2658](https://github.com/adap/flower/pull/2658), " +"[#2796](https://github.com/adap/flower/pull/2796), " +"[#2692](https://github.com/adap/flower/pull/2692), " +"[#2657](https://github.com/adap/flower/pull/2657), " +"[#2813](https://github.com/adap/flower/pull/2813), " +"[#2661](https://github.com/adap/flower/pull/2661), " +"[#2398](https://github.com/adap/flower/pull/2398))" msgstr "" -"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " -"`Chenyang Ma (Danny)`, `Daniel J. Beutel`, `Edoardo`, `Gautam Jajoo`, " -"`Iacob-Alexandru-Andrei`, `JDRanpariya`, `Jean Charle Yaacoub`, `Kunal " -"Sarkhel`, `L. Jiang`, `Lennart Behme`, `Max Kapsecker`, `Michał`, `Nic " -"Lane`, `Nikolaos Episkopos`, `Ragy`, `Saurav Maheshkar`, `Semo Yang`, " -"`Steve Laskaridis`, `Steven Hé (Sīchàng)`, `Taner Topal`" -#: ../../source/ref-changelog.md:296 +#: ../../source/ref-changelog.md:140 msgid "" -"**Introduce support for XGBoost (**`FedXgbNnAvg` **strategy and " -"example)** ([#1694](https://github.com/adap/flower/pull/1694), " -"[#1709](https://github.com/adap/flower/pull/1709), " -"[#1715](https://github.com/adap/flower/pull/1715), " -"[#1717](https://github.com/adap/flower/pull/1717), " -"[#1763](https://github.com/adap/flower/pull/1763), " -"[#1795](https://github.com/adap/flower/pull/1795))" +"The Flower testing and development infrastructure has received " +"substantial updates. This makes Flower 1.7 the most tested release ever." 
msgstr "" -"**Introduire la prise en charge de XGBoost (**`FedXgbNnAvg` **stratégie " -"et exemple)** ([#1694](https://github.com/adap/flower/pull/1694), " -"[#1709](https://github.com/adap/flower/pull/1709), " -"[#1715](https://github.com/adap/flower/pull/1715), " -"[#1717](https://github.com/adap/flower/pull/1717), " -"[#1763](https://github.com/adap/flower/pull/1763), " -"[#1795](https://github.com/adap/flower/pull/1795))" -#: ../../source/ref-changelog.md:298 +#: ../../source/ref-changelog.md:142 msgid "" -"XGBoost is a tree-based ensemble machine learning algorithm that uses " -"gradient boosting to improve model accuracy. We added a new `FedXgbNnAvg`" -" " -"[strategy](https://github.com/adap/flower/tree/main/src/py/flwr/server/strategy/fedxgb_nn_avg.py)," -" and a [code example](https://github.com/adap/flower/tree/main/examples" -"/xgboost-quickstart) that demonstrates the usage of this new strategy in " -"an XGBoost project." +"**Update dependencies** " +"([#2753](https://github.com/adap/flower/pull/2753), " +"[#2651](https://github.com/adap/flower/pull/2651), " +"[#2739](https://github.com/adap/flower/pull/2739), " +"[#2837](https://github.com/adap/flower/pull/2837), " +"[#2788](https://github.com/adap/flower/pull/2788), " +"[#2811](https://github.com/adap/flower/pull/2811), " +"[#2774](https://github.com/adap/flower/pull/2774), " +"[#2790](https://github.com/adap/flower/pull/2790), " +"[#2751](https://github.com/adap/flower/pull/2751), " +"[#2850](https://github.com/adap/flower/pull/2850), " +"[#2812](https://github.com/adap/flower/pull/2812), " +"[#2872](https://github.com/adap/flower/pull/2872), " +"[#2736](https://github.com/adap/flower/pull/2736), " +"[#2756](https://github.com/adap/flower/pull/2756), " +"[#2857](https://github.com/adap/flower/pull/2857), " +"[#2757](https://github.com/adap/flower/pull/2757), " +"[#2810](https://github.com/adap/flower/pull/2810), " +"[#2740](https://github.com/adap/flower/pull/2740), " +"[#2789](https://github.com/adap/flower/pull/2789))" msgstr "" -"Nous avons ajouté une nouvelle [stratégie] `FedXgbNnAvg` " -"(https://github.com/adap/flower/tree/main/src/py/flwr/server/strategy/fedxgb_nn_avg.py)," -" et un [exemple de code] " -"(https://github.com/adap/flower/tree/main/examples/xgboost-quickstart) " -"qui démontre l'utilisation de cette nouvelle stratégie dans un projet " -"XGBoost." 
-#: ../../source/ref-changelog.md:300 +#: ../../source/ref-changelog.md:144 msgid "" -"**Introduce iOS SDK (preview)** " -"([#1621](https://github.com/adap/flower/pull/1621), " -"[#1764](https://github.com/adap/flower/pull/1764))" +"**General improvements** " +"([#2803](https://github.com/adap/flower/pull/2803), " +"[#2847](https://github.com/adap/flower/pull/2847), " +"[#2877](https://github.com/adap/flower/pull/2877), " +"[#2690](https://github.com/adap/flower/pull/2690), " +"[#2889](https://github.com/adap/flower/pull/2889), " +"[#2874](https://github.com/adap/flower/pull/2874), " +"[#2819](https://github.com/adap/flower/pull/2819), " +"[#2689](https://github.com/adap/flower/pull/2689), " +"[#2457](https://github.com/adap/flower/pull/2457), " +"[#2870](https://github.com/adap/flower/pull/2870), " +"[#2669](https://github.com/adap/flower/pull/2669), " +"[#2876](https://github.com/adap/flower/pull/2876), " +"[#2885](https://github.com/adap/flower/pull/2885), " +"[#2858](https://github.com/adap/flower/pull/2858), " +"[#2867](https://github.com/adap/flower/pull/2867), " +"[#2351](https://github.com/adap/flower/pull/2351), " +"[#2886](https://github.com/adap/flower/pull/2886), " +"[#2860](https://github.com/adap/flower/pull/2860), " +"[#2828](https://github.com/adap/flower/pull/2828), " +"[#2869](https://github.com/adap/flower/pull/2869), " +"[#2875](https://github.com/adap/flower/pull/2875), " +"[#2733](https://github.com/adap/flower/pull/2733), " +"[#2488](https://github.com/adap/flower/pull/2488), " +"[#2646](https://github.com/adap/flower/pull/2646), " +"[#2879](https://github.com/adap/flower/pull/2879), " +"[#2821](https://github.com/adap/flower/pull/2821), " +"[#2855](https://github.com/adap/flower/pull/2855), " +"[#2800](https://github.com/adap/flower/pull/2800), " +"[#2807](https://github.com/adap/flower/pull/2807), " +"[#2801](https://github.com/adap/flower/pull/2801), " +"[#2804](https://github.com/adap/flower/pull/2804), " +"[#2851](https://github.com/adap/flower/pull/2851), " +"[#2787](https://github.com/adap/flower/pull/2787), " +"[#2852](https://github.com/adap/flower/pull/2852), " +"[#2672](https://github.com/adap/flower/pull/2672), " +"[#2759](https://github.com/adap/flower/pull/2759))" msgstr "" -"**Introduction du SDK iOS (aperçu)** " -"([#1621](https://github.com/adap/flower/pull/1621), " -"[#1764](https://github.com/adap/flower/pull/1764))" -#: ../../source/ref-changelog.md:302 +#: ../../source/ref-changelog.md:148 +#, fuzzy msgid "" -"This is a major update for anyone wanting to implement Federated Learning" -" on iOS mobile devices. We now have a swift iOS SDK present under " -"[src/swift/flwr](https://github.com/adap/flower/tree/main/src/swift/flwr)" -" that will facilitate greatly the app creating process. To showcase its " -"use, the [iOS " -"example](https://github.com/adap/flower/tree/main/examples/ios) has also " -"been updated!" +"**Deprecate** `start_numpy_client` " +"([#2563](https://github.com/adap/flower/pull/2563), " +"[#2718](https://github.com/adap/flower/pull/2718))" msgstr "" -"Il s'agit d'une mise à jour majeure pour tous ceux qui souhaitent mettre " -"en œuvre l'apprentissage fédéré sur les appareils mobiles iOS. Nous " -"disposons désormais d'un SDK swift iOS présent sous " -"[src/swift/flwr](https://github.com/adap/flower/tree/main/src/swift/flwr)" -" qui facilitera grandement le processus de création d'applications. 
Pour " -"présenter son utilisation, l'[exemple " -"iOS](https://github.com/adap/flower/tree/main/examples/ios) a également " -"été mis à jour !" +"**Nouvelles stratégies intégrées** " +"([#828](https://github.com/adap/flower/pull/828) " +"[#822](https://github.com/adap/flower/pull/822))" -#: ../../source/ref-changelog.md:304 +#: ../../source/ref-changelog.md:150 msgid "" -"**Introduce new \"What is Federated Learning?\" tutorial** " -"([#1657](https://github.com/adap/flower/pull/1657), " -"[#1721](https://github.com/adap/flower/pull/1721))" +"Until now, clients of type `NumPyClient` needed to be started via " +"`start_numpy_client`. In our efforts to consolidate framework APIs, we " +"have introduced changes, and now all client types should start via " +"`start_client`. To continue using `NumPyClient` clients, you simply need " +"to first call the `.to_client()` method and then pass returned `Client` " +"object to `start_client`. The examples and the documentation have been " +"updated accordingly." msgstr "" -"**Introduire un nouveau tutoriel \"Qu'est-ce que l'apprentissage fédéré ?" -" \"** ([#1657](https://github.com/adap/flower/pull/1657), " -"[#1721](https://github.com/adap/flower/pull/1721))" -#: ../../source/ref-changelog.md:306 +#: ../../source/ref-changelog.md:152 #, fuzzy msgid "" -"A new [entry-level tutorial](https://flower.ai/docs/framework/tutorial-" -"what-is-federated-learning.html) in our documentation explains the basics" -" of Fedetated Learning. It enables anyone who's unfamiliar with Federated" -" Learning to start their journey with Flower. Forward it to anyone who's " -"interested in Federated Learning!" +"**Deprecate legacy DP wrappers** " +"([#2749](https://github.com/adap/flower/pull/2749))" msgstr "" -"Un nouveau [tutoriel d'entrée de gamme] " -"(https://flower.ai/docs/tutorial/Flower-0-What-is-FL.html) dans notre " -"documentation explique les bases de l'apprentissage fédéré. Il permet à " -"tous ceux qui ne connaissent pas l'apprentissage fédéré de commencer leur" -" voyage avec Flower. Fais-le suivre à tous ceux qui s'intéressent à " -"l'apprentissage fédéré !" +"**Supprimez KerasClient** " +"([#857](https://github.com/adap/flower/pull/857))" -#: ../../source/ref-changelog.md:308 +#: ../../source/ref-changelog.md:154 msgid "" -"**Introduce new Flower Baseline: FedProx MNIST** " -"([#1513](https://github.com/adap/flower/pull/1513), " -"[#1680](https://github.com/adap/flower/pull/1680), " -"[#1681](https://github.com/adap/flower/pull/1681), " -"[#1679](https://github.com/adap/flower/pull/1679))" +"Legacy DP wrapper classes are deprecated, but still functional. This is " +"in preparation for an all-new pluggable version of differential privacy " +"support in Flower." msgstr "" -"**Introduire une nouvelle fleur Référence : FedProx MNIST** " -"([#1513](https://github.com/adap/flower/pull/1513), " -"[#1680](https://github.com/adap/flower/pull/1680), " -"[#1681](https://github.com/adap/flower/pull/1681), " -"[#1679](https://github.com/adap/flower/pull/1679))" -#: ../../source/ref-changelog.md:310 +#: ../../source/ref-changelog.md:156 +#, fuzzy msgid "" -"This new baseline replicates the MNIST+CNN task from the paper [Federated" -" Optimization in Heterogeneous Networks (Li et al., " -"2018)](https://arxiv.org/abs/1812.06127). It uses the `FedProx` strategy," -" which aims at making convergence more robust in heterogeneous settings." 
+"**Make optional arg** `--callable` **in** `flower-client` **a required " +"positional arg** ([#2673](https://github.com/adap/flower/pull/2673))" msgstr "" -"Cette nouvelle ligne de base reproduit la tâche MNIST+CNN de l'article " -"[Federated Optimization in Heterogeneous Networks (Li et al., 2018)] " -"(https://arxiv.org/abs/1812.06127). Elle utilise la stratégie `FedProx`, " -"qui vise à rendre la convergence plus robuste dans des contextes " -"hétérogènes." +"**Log** `Client` **exceptions dans le moteur de client virtuel** " +"([#1493](https://github.com/adap/flower/pull/1493))" -#: ../../source/ref-changelog.md:312 +#: ../../source/ref-changelog.md:158 +#, fuzzy msgid "" -"**Introduce new Flower Baseline: FedAvg FEMNIST** " -"([#1655](https://github.com/adap/flower/pull/1655))" +"**Rename** `certificates` **to** `root_certificates` **in** `Driver` " +"([#2890](https://github.com/adap/flower/pull/2890))" msgstr "" -"**Introduire une nouvelle ligne de base pour les fleurs : FedAvg " -"FEMNIST** ([#1655](https://github.com/adap/flower/pull/1655))" +"**Rename** `rnd` **to** `server_round` " +"([#1321](https://github.com/adap/flower/pull/1321))" -#: ../../source/ref-changelog.md:314 +#: ../../source/ref-changelog.md:160 +#, fuzzy msgid "" -"This new baseline replicates an experiment evaluating the performance of " -"the FedAvg algorithm on the FEMNIST dataset from the paper [LEAF: A " -"Benchmark for Federated Settings (Caldas et al., " -"2018)](https://arxiv.org/abs/1812.01097)." +"**Drop experimental** `Task` **fields** " +"([#2866](https://github.com/adap/flower/pull/2866), " +"[#2865](https://github.com/adap/flower/pull/2865))" msgstr "" -"Cette nouvelle ligne de base reproduit une expérience évaluant les " -"performances de l'algorithme FedAvg sur le jeu de données FEMNIST tiré de" -" l'article [LEAF : A Benchmark for Federated Settings (Caldas et al., " -"2018)] (https://arxiv.org/abs/1812.01097)." +"**Rename** `Weights` **to** `NDArrays` " +"([#1258](https://github.com/adap/flower/pull/1258), " +"[#1259](https://github.com/adap/flower/pull/1259))" -#: ../../source/ref-changelog.md:316 +#: ../../source/ref-changelog.md:162 msgid "" -"**Introduce (experimental) REST API** " -"([#1594](https://github.com/adap/flower/pull/1594), " -"[#1690](https://github.com/adap/flower/pull/1690), " -"[#1695](https://github.com/adap/flower/pull/1695), " -"[#1712](https://github.com/adap/flower/pull/1712), " -"[#1802](https://github.com/adap/flower/pull/1802), " -"[#1770](https://github.com/adap/flower/pull/1770), " -"[#1733](https://github.com/adap/flower/pull/1733))" +"Experimental fields `sa`, `legacy_server_message` and " +"`legacy_client_message` were removed from `Task` message. The removed " +"fields are superseded by the new `RecordSet` abstraction." msgstr "" -"**Introduire l'API REST (expérimentale)** " -"([#1594](https://github.com/adap/flower/pull/1594), " -"[#1690](https://github.com/adap/flower/pull/1690), " -"[#1695](https://github.com/adap/flower/pull/1695), " -"[#1712](https://github.com/adap/flower/pull/1712), " -"[#1802](https://github.com/adap/flower/pull/1802), " -"[#1770](https://github.com/adap/flower/pull/1770), " -"[#1733](https://github.com/adap/flower/pull/1733))" -#: ../../source/ref-changelog.md:318 +#: ../../source/ref-changelog.md:164 +#, fuzzy msgid "" -"A new REST API has been introduced as an alternative to the gRPC-based " -"communication stack. In this initial version, the REST API only supports " -"anonymous clients." 
+"**Retire MXNet examples** " +"([#2724](https://github.com/adap/flower/pull/2724))" msgstr "" -"Une nouvelle API REST a été introduite comme alternative à la pile de " -"communication basée sur gRPC. Dans cette version initiale, l'API REST ne " -"prend en charge que les clients anonymes." +"**Nouvel exemple de code scikit-learn** " +"([#748](https://github.com/adap/flower/pull/748))" -#: ../../source/ref-changelog.md:320 +#: ../../source/ref-changelog.md:166 msgid "" -"Please note: The REST API is still experimental and will likely change " -"significantly over time." +"The development of the MXNet fremework has ended and the project is now " +"[archived on GitHub](https://github.com/apache/mxnet). Existing MXNet " +"examples won't receive updates." msgstr "" -"Remarque : l'API REST est encore expérimentale et est susceptible de " -"changer de manière significative au fil du temps." -#: ../../source/ref-changelog.md:322 +#: ../../source/ref-changelog.md:168 +#, fuzzy +msgid "v1.6.0 (2023-11-28)" +msgstr "v1.4.0 (2023-04-21)" + +#: ../../source/ref-changelog.md:174 msgid "" -"**Improve the (experimental) Driver API** " -"([#1663](https://github.com/adap/flower/pull/1663), " -"[#1666](https://github.com/adap/flower/pull/1666), " -"[#1667](https://github.com/adap/flower/pull/1667), " -"[#1664](https://github.com/adap/flower/pull/1664), " -"[#1675](https://github.com/adap/flower/pull/1675), " -"[#1676](https://github.com/adap/flower/pull/1676), " -"[#1693](https://github.com/adap/flower/pull/1693), " -"[#1662](https://github.com/adap/flower/pull/1662), " -"[#1794](https://github.com/adap/flower/pull/1794))" +"`Aashish Kolluri`, `Adam Narozniak`, `Alessio Mora`, `Barathwaja S`, " +"`Charles Beauville`, `Daniel J. Beutel`, `Daniel Nata Nugraha`, `Gabriel " +"Mota`, `Heng Pan`, `Ivan Agarský`, `JS.KIM`, `Javier`, `Marius Schlegel`," +" `Navin Chandra`, `Nic Lane`, `Peterpan828`, `Qinbin Li`, `Shaz-hash`, " +"`Steve Laskaridis`, `Taner Topal`, `William Lindskog`, `Yan Gao`, " +"`cnxdeveloper`, `k3nfalt` " msgstr "" -"**Améliorer l'API (expérimentale) du pilote** " -"([#1663](https://github.com/adap/flower/pull/1663), " -"[#1666](https://github.com/adap/flower/pull/1666), " -"[#1667](https://github.com/adap/flower/pull/1667), " -"[#1664](https://github.com/adap/flower/pull/1664), " -"[#1675](https://github.com/adap/flower/pull/1675), " -"[#1676](https://github.com/adap/flower/pull/1676), " -"[#1693](https://github.com/adap/flower/pull/1693), " -"[#1662](https://github.com/adap/flower/pull/1662), " -"[#1794](https://github.com/adap/flower/pull/1794))" -#: ../../source/ref-changelog.md:324 +#: ../../source/ref-changelog.md:178 +#, fuzzy msgid "" -"The Driver API is still an experimental feature, but this release " -"introduces some major upgrades. One of the main improvements is the " -"introduction of an SQLite database to store server state on disk (instead" -" of in-memory). Another improvement is that tasks (instructions or " -"results) that have been delivered will now be deleted. This greatly " -"improves the memory efficiency of a long-running Flower server." +"**Add experimental support for Python 3.12** " +"([#2565](https://github.com/adap/flower/pull/2565))" msgstr "" -"L'API du pilote est encore une fonction expérimentale, mais cette version" -" introduit quelques améliorations majeures. L'une des principales " -"améliorations est l'introduction d'une base de données SQLite pour " -"stocker l'état du serveur sur le disque (au lieu de la mémoire). 
Une " -"autre amélioration est que les tâches (instructions ou résultats) qui ont" -" été livrées seront désormais supprimées, ce qui améliore " -"considérablement l'efficacité de la mémoire d'un serveur Flower " -"fonctionnant depuis longtemps." +"**Ajouter la prise en charge expérimentale de Python 3.10 et Python " +"3.11** ([#1135](https://github.com/adap/flower/pull/1135))" -#: ../../source/ref-changelog.md:326 +#: ../../source/ref-changelog.md:180 +#, fuzzy msgid "" -"**Fix spilling issues related to Ray during simulations** " -"([#1698](https://github.com/adap/flower/pull/1698))" +"**Add new XGBoost examples** " +"([#2612](https://github.com/adap/flower/pull/2612), " +"[#2554](https://github.com/adap/flower/pull/2554), " +"[#2617](https://github.com/adap/flower/pull/2617), " +"[#2618](https://github.com/adap/flower/pull/2618), " +"[#2619](https://github.com/adap/flower/pull/2619), " +"[#2567](https://github.com/adap/flower/pull/2567))" msgstr "" -"**Répare les problèmes de déversement liés à Ray pendant les " -"simulations** ([#1698](https://github.com/adap/flower/pull/1698))" +"**([#1520](https://github.com/adap/flower/pull/1520), " +"[#1525](https://github.com/adap/flower/pull/1525), " +"[#1545](https://github.com/adap/flower/pull/1545), " +"[#1546](https://github.com/adap/flower/pull/1546), " +"[#1550](https://github.com/adap/flower/pull/1550), " +"[#1551](https://github.com/adap/flower/pull/1551), " +"[#1567](https://github.com/adap/flower/pull/1567))" -#: ../../source/ref-changelog.md:328 +#: ../../source/ref-changelog.md:182 +msgid "" +"We have added a new `xgboost-quickstart` example alongside a new " +"`xgboost-comprehensive` example that goes more in-depth." +msgstr "" + +#: ../../source/ref-changelog.md:184 #, fuzzy msgid "" -"While running long simulations, `ray` was sometimes spilling huge amounts" -" of data that would make the training unable to continue. This is now " -"fixed! 🎉" +"**Add Vertical FL example** " +"([#2598](https://github.com/adap/flower/pull/2598))" msgstr "" -"Lors de l'exécution de longues simulations, `ray` déversait parfois " -"d'énormes quantités de données qui rendaient l'entraînement incapable de " -"continuer. ce problème est maintenant corrigé ! 🎉" +"**Nouvel exemple de code CoreML pour iOS** " +"([#1289](https://github.com/adap/flower/pull/1289))" -#: ../../source/ref-changelog.md:330 +#: ../../source/ref-changelog.md:186 msgid "" -"**Add new example using** `TabNet` **and Flower** " -"([#1725](https://github.com/adap/flower/pull/1725))" +"We had many questions about Vertical Federated Learning using Flower, so " +"we decided to add an simple example for it on the [Titanic " +"dataset](https://www.kaggle.com/competitions/titanic/data) alongside a " +"tutorial (in the README)." msgstr "" -"**Ajouter un nouvel exemple utilisant** `TabNet` **et Flower** " -"([#1725](https://github.com/adap/flower/pull/1725))" -#: ../../source/ref-changelog.md:332 +#: ../../source/ref-changelog.md:188 +#, fuzzy msgid "" -"TabNet is a powerful and flexible framework for training machine learning" -" models on tabular data. We now have a federated example using Flower: " -"[quickstart-tabnet](https://github.com/adap/flower/tree/main/examples" -"/quickstart-tabnet)." +"**Support custom** `ClientManager` **in** `start_driver()` " +"([#2292](https://github.com/adap/flower/pull/2292))" msgstr "" -"TabNet est un cadre puissant et flexible pour former des modèles " -"d'apprentissage automatique sur des données tabulaires. 
Nous avons " -"maintenant un exemple fédéré utilisant Flower : [quickstart-" -"tabnet](https://github.com/adap/flower/tree/main/examples/quickstart-" -"tabnet)." +"Ajout de la prise en charge d'un `ClientManager` personnalisé comme " +"paramètre de `start_simulation` " +"([#1171](https://github.com/adap/flower/pull/1171))" -#: ../../source/ref-changelog.md:334 +#: ../../source/ref-changelog.md:190 +#, fuzzy msgid "" -"**Add new how-to guide for monitoring simulations** " -"([#1649](https://github.com/adap/flower/pull/1649))" +"**Update REST API to support create and delete nodes** " +"([#2283](https://github.com/adap/flower/pull/2283))" msgstr "" -"**Ajouter un nouveau guide pratique pour le suivi des simulations** " -"([#1649](https://github.com/adap/flower/pull/1649))" +"**Nouvelle stratégie expérimentale TensorBoard** " +"([#789](https://github.com/adap/flower/pull/789))" -#: ../../source/ref-changelog.md:336 +#: ../../source/ref-changelog.md:192 +#, fuzzy msgid "" -"We now have a documentation guide to help users monitor their performance" -" during simulations." +"**Update the Android SDK** " +"([#2187](https://github.com/adap/flower/pull/2187))" msgstr "" -"Nous avons maintenant un guide de documentation pour aider les " -"utilisateurs à surveiller leurs performances pendant les simulations." +"**Introduire une nouvelle ligne de base pour les fleurs : FedAvg " +"FEMNIST** ([#1655](https://github.com/adap/flower/pull/1655))" -#: ../../source/ref-changelog.md:338 +#: ../../source/ref-changelog.md:194 +msgid "Add gRPC request-response capability to the Android SDK." +msgstr "" + +#: ../../source/ref-changelog.md:196 +#, fuzzy msgid "" -"**Add training metrics to** `History` **object during simulations** " -"([#1696](https://github.com/adap/flower/pull/1696))" +"**Update the C++ SDK** " +"([#2537](https://github.com/adap/flower/pull/2537), " +"[#2528](https://github.com/adap/flower/pull/2528), " +"[#2523](https://github.com/adap/flower/pull/2523), " +"[#2522](https://github.com/adap/flower/pull/2522))" msgstr "" -"**Ajouter des mesures de formation à** `History` **objet pendant les " -"simulations** ([#1696](https://github.com/adap/flower/pull/1696))" +"Mettre à jour les outils de développement " +"([#1231](https://github.com/adap/flower/pull/1231), " +"[#1276](https://github.com/adap/flower/pull/1276), " +"[#1301](https://github.com/adap/flower/pull/1301), " +"[#1310](https://github.com/adap/flower/pull/1310))" -#: ../../source/ref-changelog.md:340 +#: ../../source/ref-changelog.md:198 +msgid "Add gRPC request-response capability to the C++ SDK." +msgstr "" + +#: ../../source/ref-changelog.md:200 +#, fuzzy msgid "" -"The `fit_metrics_aggregation_fn` can be used to aggregate training " -"metrics, but previous releases did not save the results in the `History` " -"object. This is now the case!" +"**Make HTTPS the new default** " +"([#2591](https://github.com/adap/flower/pull/2591), " +"[#2636](https://github.com/adap/flower/pull/2636))" msgstr "" -"La fonction `fit_metrics_aggregation_fn` peut être utilisée pour agréger " -"les mesures d'entraînement, mais les versions précédentes " -"n'enregistraient pas les résultats dans l'objet `History`. c'est " -"désormais le cas !" 
+"**Exemple de code mis à jour** " +"([#1344](https://github.com/adap/flower/pull/1344), " +"[#1347](https://github.com/adap/flower/pull/1347))" -#: ../../source/ref-changelog.md:342 +#: ../../source/ref-changelog.md:202 msgid "" -"**General improvements** " -"([#1659](https://github.com/adap/flower/pull/1659), " -"[#1646](https://github.com/adap/flower/pull/1646), " -"[#1647](https://github.com/adap/flower/pull/1647), " -"[#1471](https://github.com/adap/flower/pull/1471), " -"[#1648](https://github.com/adap/flower/pull/1648), " -"[#1651](https://github.com/adap/flower/pull/1651), " -"[#1652](https://github.com/adap/flower/pull/1652), " -"[#1653](https://github.com/adap/flower/pull/1653), " -"[#1659](https://github.com/adap/flower/pull/1659), " -"[#1665](https://github.com/adap/flower/pull/1665), " -"[#1670](https://github.com/adap/flower/pull/1670), " -"[#1672](https://github.com/adap/flower/pull/1672), " -"[#1677](https://github.com/adap/flower/pull/1677), " -"[#1684](https://github.com/adap/flower/pull/1684), " -"[#1683](https://github.com/adap/flower/pull/1683), " -"[#1686](https://github.com/adap/flower/pull/1686), " -"[#1682](https://github.com/adap/flower/pull/1682), " -"[#1685](https://github.com/adap/flower/pull/1685), " -"[#1692](https://github.com/adap/flower/pull/1692), " -"[#1705](https://github.com/adap/flower/pull/1705), " -"[#1708](https://github.com/adap/flower/pull/1708), " -"[#1711](https://github.com/adap/flower/pull/1711), " -"[#1713](https://github.com/adap/flower/pull/1713), " -"[#1714](https://github.com/adap/flower/pull/1714), " -"[#1718](https://github.com/adap/flower/pull/1718), " -"[#1716](https://github.com/adap/flower/pull/1716), " -"[#1723](https://github.com/adap/flower/pull/1723), " -"[#1735](https://github.com/adap/flower/pull/1735), " -"[#1678](https://github.com/adap/flower/pull/1678), " -"[#1750](https://github.com/adap/flower/pull/1750), " -"[#1753](https://github.com/adap/flower/pull/1753), " -"[#1736](https://github.com/adap/flower/pull/1736), " -"[#1766](https://github.com/adap/flower/pull/1766), " -"[#1760](https://github.com/adap/flower/pull/1760), " -"[#1775](https://github.com/adap/flower/pull/1775), " -"[#1776](https://github.com/adap/flower/pull/1776), " -"[#1777](https://github.com/adap/flower/pull/1777), " -"[#1779](https://github.com/adap/flower/pull/1779), " -"[#1784](https://github.com/adap/flower/pull/1784), " -"[#1773](https://github.com/adap/flower/pull/1773), " -"[#1755](https://github.com/adap/flower/pull/1755), " -"[#1789](https://github.com/adap/flower/pull/1789), " -"[#1788](https://github.com/adap/flower/pull/1788), " -"[#1798](https://github.com/adap/flower/pull/1798), " -"[#1799](https://github.com/adap/flower/pull/1799), " -"[#1739](https://github.com/adap/flower/pull/1739), " -"[#1800](https://github.com/adap/flower/pull/1800), " -"[#1804](https://github.com/adap/flower/pull/1804), " -"[#1805](https://github.com/adap/flower/pull/1805))" +"Flower is moving to HTTPS by default. The new `flower-server` requires " +"passing `--certificates`, but users can enable `--insecure` to use HTTP " +"for prototyping. The same applies to `flower-client`, which can either " +"use user-provided credentials or gRPC-bundled certificates to connect to " +"an HTTPS-enabled server or requires opt-out via passing `--insecure` to " +"enable insecure HTTP connections." 
msgstr "" -"**General improvements** " -"([#1659](https://github.com/adap/flower/pull/1659), " -"[#1646](https://github.com/adap/flower/pull/1646), " -"[#1647](https://github.com/adap/flower/pull/1647), " -"[#1471](https://github.com/adap/flower/pull/1471), " -"[#1648](https://github.com/adap/flower/pull/1648), " -"[#1651](https://github.com/adap/flower/pull/1651), " -"[#1652](https://github.com/adap/flower/pull/1652), " -"[#1653](https://github.com/adap/flower/pull/1653), " -"[#1659](https://github.com/adap/flower/pull/1659), " -"[#1665](https://github.com/adap/flower/pull/1665), " -"[#1670](https://github.com/adap/flower/pull/1670), " -"[#1672](https://github.com/adap/flower/pull/1672), " -"[#1677](https://github.com/adap/flower/pull/1677), " -"[#1684](https://github.com/adap/flower/pull/1684), " -"[#1683](https://github.com/adap/flower/pull/1683), " -"[#1686](https://github.com/adap/flower/pull/1686), " -"[#1682](https://github.com/adap/flower/pull/1682), " -"[#1685](https://github.com/adap/flower/pull/1685), " -"[#1692](https://github.com/adap/flower/pull/1692), " -"[#1705](https://github.com/ada" - -#: ../../source/ref-changelog.md:350 -msgid "v1.3.0 (2023-02-06)" -msgstr "v1.3.0 (2023-02-06)" -#: ../../source/ref-changelog.md:356 +#: ../../source/ref-changelog.md:204 msgid "" -"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " -"`Daniel J. Beutel`, `JDRanpariya`, `Lennart Behme`, `Taner Topal`" +"For backward compatibility, `start_client()` and `start_numpy_client()` " +"will still start in insecure mode by default. In a future release, " +"insecure connections will require user opt-in by passing `insecure=True`." msgstr "" -"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " -"`Daniel J. Beutel`, `JDRanpariya`, `Lennart Behme`, `Taner Topal`" -#: ../../source/ref-changelog.md:360 +#: ../../source/ref-changelog.md:206 +#, fuzzy msgid "" -"**Add support for** `workload_id` **and** `group_id` **in Driver API** " -"([#1595](https://github.com/adap/flower/pull/1595))" +"**Unify client API** ([#2303](https://github.com/adap/flower/pull/2303), " +"[#2390](https://github.com/adap/flower/pull/2390), " +"[#2493](https://github.com/adap/flower/pull/2493))" msgstr "" -"**Ajouter la prise en charge de** `workload_id` **et** `group_id` **dans " -"l'API du pilote** ([#1595](https://github.com/adap/flower/pull/1595))" +"**Mettre à jour les exemples de code** " +"([#1291](https://github.com/adap/flower/pull/1291), " +"[#1286](https://github.com/adap/flower/pull/1286), " +"[#1282](https://github.com/adap/flower/pull/1282))" -#: ../../source/ref-changelog.md:362 +#: ../../source/ref-changelog.md:208 msgid "" -"The (experimental) Driver API now supports a `workload_id` that can be " -"used to identify which workload a task belongs to. It also supports a new" -" `group_id` that can be used, for example, to indicate the current " -"training round. Both the `workload_id` and `group_id` enable client nodes" -" to decide whether they want to handle a task or not." +"Using the `client_fn`, Flower clients can interchangeably run as " +"standalone processes (i.e. via `start_client`) or in simulation (i.e. via" +" `start_simulation`) without requiring changes to how the client class is" +" defined and instantiated. The `to_client()` function is introduced to " +"convert a `NumPyClient` to a `Client`." 
msgstr "" -"L'API (expérimentale) Driver prend désormais en charge un `workload_id` " -"qui peut être utilisé pour identifier la charge de travail à laquelle une" -" tâche appartient. Elle prend également en charge un nouveau `group_id` " -"qui peut être utilisé, par exemple, pour indiquer le cycle de formation " -"en cours. Le `workload_id` et le `group_id` permettent tous deux aux " -"nœuds clients de décider s'ils veulent traiter une tâche ou non." -#: ../../source/ref-changelog.md:364 +#: ../../source/ref-changelog.md:210 +#, fuzzy msgid "" -"**Make Driver API and Fleet API address configurable** " -"([#1637](https://github.com/adap/flower/pull/1637))" +"**Add new** `Bulyan` **strategy** " +"([#1817](https://github.com/adap/flower/pull/1817), " +"[#1891](https://github.com/adap/flower/pull/1891))" msgstr "" -"**Faire en sorte que l'adresse de l'API du conducteur et de l'API de la " -"flotte soit configurable** " -"([#1637](https://github.com/adap/flower/pull/1637))" +"**Nouvelles stratégies intégrées** " +"([#828](https://github.com/adap/flower/pull/828) " +"[#822](https://github.com/adap/flower/pull/822))" -#: ../../source/ref-changelog.md:366 +#: ../../source/ref-changelog.md:212 +#, fuzzy msgid "" -"The (experimental) long-running Flower server (Driver API and Fleet API) " -"can now configure the server address of both Driver API (via `--driver-" -"api-address`) and Fleet API (via `--fleet-api-address`) when starting:" +"The new `Bulyan` strategy implements Bulyan by [El Mhamdi et al., " +"2018](https://arxiv.org/abs/1802.07927)" msgstr "" -"Le serveur Flower (expérimental) de longue durée (Driver API et Fleet " -"API) peut maintenant configurer l'adresse du serveur de Driver API (via " -"`--driver-api-address`) et de Fleet API (via `--fleet-api-address`) lors " -"de son démarrage :" +"La nouvelle stratégie `FedMedian` met en œuvre Federated Median " +"(FedMedian) par [Yin et al., 2018] " +"(https://arxiv.org/pdf/1803.01498v1.pdf)." -#: ../../source/ref-changelog.md:368 +#: ../../source/ref-changelog.md:214 #, fuzzy msgid "" -"`flower-server --driver-api-address \"0.0.0.0:8081\" --fleet-api-address " -"\"0.0.0.0:8086\"`" +"**Add new** `XGB Bagging` **strategy** " +"([#2611](https://github.com/adap/flower/pull/2611))" msgstr "" -"``flower-superlink --driver-api-address \"0.0.0.0:8081\" --fleet-api-" -"address \"0.0.0.0:8086\" ``" - -#: ../../source/ref-changelog.md:370 -msgid "Both IPv4 and IPv6 addresses are supported." -msgstr "Les adresses IPv4 et IPv6 sont toutes deux prises en charge." 
+"**Ajouter une nouvelle stratégie `FedProx`** " +"([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:372 +#: ../../source/ref-changelog.md:216 ../../source/ref-changelog.md:218 +#, fuzzy msgid "" -"**Add new example of Federated Learning using fastai and Flower** " -"([#1598](https://github.com/adap/flower/pull/1598))" +"**Introduce `WorkloadState`** " +"([#2564](https://github.com/adap/flower/pull/2564), " +"[#2632](https://github.com/adap/flower/pull/2632))" msgstr "" -"**Ajouter un nouvel exemple d'apprentissage fédéré utilisant fastai et " -"Flower** ([#1598](https://github.com/adap/flower/pull/1598))" +"**Nouvelles stratégies intégrées** " +"([#828](https://github.com/adap/flower/pull/828) " +"[#822](https://github.com/adap/flower/pull/822))" -#: ../../source/ref-changelog.md:374 +#: ../../source/ref-changelog.md:222 +#, fuzzy msgid "" -"A new code example (`quickstart-fastai`) demonstrates federated learning " -"with [fastai](https://www.fast.ai/) and Flower. You can find it here: " -"[quickstart-fastai](https://github.com/adap/flower/tree/main/examples" -"/quickstart-fastai)." +"FedProx ([#2210](https://github.com/adap/flower/pull/2210), " +"[#2286](https://github.com/adap/flower/pull/2286), " +"[#2509](https://github.com/adap/flower/pull/2509))" msgstr "" -"Un nouvel exemple de code (`quickstart-fastai`) démontre l'apprentissage " -"fédéré avec [fastai](https://www.fast.ai/) et Flower. Tu peux le trouver " -"ici : [quickstart-" -"fastai](https://github.com/adap/flower/tree/main/examples/quickstart-" -"fastai)." +"**Mettre à jour les exemples de code** " +"([#1291](https://github.com/adap/flower/pull/1291), " +"[#1286](https://github.com/adap/flower/pull/1286), " +"[#1282](https://github.com/adap/flower/pull/1282))" -#: ../../source/ref-changelog.md:376 +#: ../../source/ref-changelog.md:224 +#, fuzzy msgid "" -"**Make Android example compatible with** `flwr >= 1.0.0` **and the latest" -" versions of Android** " -"([#1603](https://github.com/adap/flower/pull/1603))" +"Baselines Docs ([#2290](https://github.com/adap/flower/pull/2290), " +"[#2400](https://github.com/adap/flower/pull/2400))" msgstr "" -"**Rendre l'exemple Android compatible avec** `flwr >= 1.0.0` **et les " -"dernières versions d'Android** " -"([#1603](https://github.com/adap/flower/pull/1603))" +"**Nouvel exemple de code JAX** " +"([#906](https://github.com/adap/flower/pull/906), " +"[#1143](https://github.com/adap/flower/pull/1143))" -#: ../../source/ref-changelog.md:378 +#: ../../source/ref-changelog.md:226 #, fuzzy msgid "" -"The Android code example has received a substantial update: the project " -"is compatible with Flower 1.0 (and later), the UI received a full " -"refresh, and the project is updated to be compatible with newer Android " -"tooling." +"FedMLB ([#2340](https://github.com/adap/flower/pull/2340), " +"[#2507](https://github.com/adap/flower/pull/2507))" msgstr "" -"L'exemple de code Android a reçu une mise à jour substantielle : le " -"projet est compatible avec Flower 1.0 et les versions ultérieures, " -"l'interface utilisateur a reçu un rafraîchissement complet, et le projet " -"est mis à jour pour être compatible avec les outils Android les plus " -"récents." 
+"**Exemple de code mis à jour** " +"([#1344](https://github.com/adap/flower/pull/1344), " +"[#1347](https://github.com/adap/flower/pull/1347))" -#: ../../source/ref-changelog.md:380 +#: ../../source/ref-changelog.md:228 +#, fuzzy msgid "" -"**Add new `FedProx` strategy** " -"([#1619](https://github.com/adap/flower/pull/1619))" +"TAMUNA ([#2254](https://github.com/adap/flower/pull/2254), " +"[#2508](https://github.com/adap/flower/pull/2508))" msgstr "" -"**Ajouter une nouvelle stratégie `FedProx`** " -"([#1619](https://github.com/adap/flower/pull/1619))" +"**Nouvelles stratégies intégrées** " +"([#828](https://github.com/adap/flower/pull/828) " +"[#822](https://github.com/adap/flower/pull/822))" -#: ../../source/ref-changelog.md:382 -msgid "" -"This " -"[strategy](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedprox.py)" -" is almost identical to " -"[`FedAvg`](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedavg.py)," -" but helps users replicate what is described in this " -"[paper](https://arxiv.org/abs/1812.06127). It essentially adds a " -"parameter called `proximal_mu` to regularize the local models with " -"respect to the global models." -msgstr "" -"Cette " -"[stratégie](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedprox.py)" -" est presque identique à " -"[`FedAvg`](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedavg.py)," -" mais aide les utilisateurs à reproduire ce qui est décrit dans cet " -"[article](https://arxiv.org/abs/1812.06127). Elle ajoute essentiellement " -"un paramètre appelé `proximal_mu` pour régulariser les modèles locaux par" -" rapport aux modèles globaux." +#: ../../source/ref-changelog.md:230 +#, fuzzy +msgid "FedMeta [#2438](https://github.com/adap/flower/pull/2438)" +msgstr "Nouvelle référence API ([#554](https://github.com/adap/flower/pull/554))" -#: ../../source/ref-changelog.md:384 -msgid "" -"**Add new metrics to telemetry events** " -"([#1640](https://github.com/adap/flower/pull/1640))" +#: ../../source/ref-changelog.md:232 +#, fuzzy +msgid "FjORD [#2431](https://github.com/adap/flower/pull/2431)" msgstr "" -"**Ajouter de nouvelles métriques aux événements de télémétrie** " -"([#1640](https://github.com/adap/flower/pull/1640))" +"Amélioration de la documentation sur le serveur gRPC " +"([#841](https://github.com/adap/flower/pull/841))" -#: ../../source/ref-changelog.md:386 -msgid "" -"An updated event structure allows, for example, the clustering of events " -"within the same workload." +#: ../../source/ref-changelog.md:234 +#, fuzzy +msgid "MOON [#2421](https://github.com/adap/flower/pull/2421)" msgstr "" -"Une structure d'événements mise à jour permet, par exemple, de regrouper " -"des événements au sein d'une même charge de travail." 
+"**Ajouter une nouvelle stratégie `FedProx`** " +"([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:388 -msgid "" -"**Add new custom strategy tutorial section** " -"[#1623](https://github.com/adap/flower/pull/1623)" +#: ../../source/ref-changelog.md:236 +#, fuzzy +msgid "DepthFL [#2295](https://github.com/adap/flower/pull/2295)" msgstr "" -"**Ajouter une nouvelle section de tutoriel sur les stratégies " -"personnalisées** [#1623](https://github.com/adap/flower/pull/1623)" +"**Ajouter une nouvelle stratégie `FedProx`** " +"([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:390 +#: ../../source/ref-changelog.md:238 #, fuzzy -msgid "" -"The Flower tutorial now has a new section that covers implementing a " -"custom strategy from scratch: [Open in " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" -"/tutorial-build-a-strategy-from-scratch-pytorch.ipynb)" +msgid "FedPer [#2266](https://github.com/adap/flower/pull/2266)" +msgstr "Nouvelle référence API ([#554](https://github.com/adap/flower/pull/554))" + +#: ../../source/ref-changelog.md:240 +#, fuzzy +msgid "FedWav2vec [#2551](https://github.com/adap/flower/pull/2551)" +msgstr "Nouvelle référence API ([#554](https://github.com/adap/flower/pull/554))" + +#: ../../source/ref-changelog.md:242 +#, fuzzy +msgid "niid-Bench [#2428](https://github.com/adap/flower/pull/2428)" +msgstr "Nouvelle référence API ([#554](https://github.com/adap/flower/pull/554))" + +#: ../../source/ref-changelog.md:244 +#, fuzzy +msgid "" +"FedBN ([#2608](https://github.com/adap/flower/pull/2608), " +"[#2615](https://github.com/adap/flower/pull/2615))" msgstr "" -"Le tutoriel sur les fleurs comporte désormais une nouvelle section qui " -"traite de la mise en œuvre d'une stratégie personnalisée à partir de zéro" -" : [Ouvrir dans " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial/Flower-3-Building-a" -"-Strategy-PyTorch.ipynb)" +"**Nouvelles stratégies intégrées** " +"([#828](https://github.com/adap/flower/pull/828) " +"[#822](https://github.com/adap/flower/pull/822))" -#: ../../source/ref-changelog.md:392 +#: ../../source/ref-changelog.md:246 +#, fuzzy msgid "" -"**Add new custom serialization tutorial section** " -"([#1622](https://github.com/adap/flower/pull/1622))" +"**General updates to Flower Examples** " +"([#2384](https://github.com/adap/flower/pull/2384), " +"[#2425](https://github.com/adap/flower/pull/2425), " +"[#2526](https://github.com/adap/flower/pull/2526), " +"[#2302](https://github.com/adap/flower/pull/2302), " +"[#2545](https://github.com/adap/flower/pull/2545))" msgstr "" -"**Ajouter une nouvelle section de tutoriel sur la sérialisation " -"personnalisée** ([#1622](https://github.com/adap/flower/pull/1622))" +"Mettre à jour les outils de développement " +"([#1231](https://github.com/adap/flower/pull/1231), " +"[#1276](https://github.com/adap/flower/pull/1276), " +"[#1301](https://github.com/adap/flower/pull/1301), " +"[#1310](https://github.com/adap/flower/pull/1310))" -#: ../../source/ref-changelog.md:394 +#: ../../source/ref-changelog.md:248 #, fuzzy msgid "" -"The Flower tutorial now has a new section that covers custom " -"serialization: [Open in " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" -"/tutorial-customize-the-client-pytorch.ipynb)" +"**General updates to Flower Baselines** " +"([#2301](https://github.com/adap/flower/pull/2301), " 
+"[#2305](https://github.com/adap/flower/pull/2305), " +"[#2307](https://github.com/adap/flower/pull/2307), " +"[#2327](https://github.com/adap/flower/pull/2327), " +"[#2435](https://github.com/adap/flower/pull/2435), " +"[#2462](https://github.com/adap/flower/pull/2462), " +"[#2463](https://github.com/adap/flower/pull/2463), " +"[#2461](https://github.com/adap/flower/pull/2461), " +"[#2469](https://github.com/adap/flower/pull/2469), " +"[#2466](https://github.com/adap/flower/pull/2466), " +"[#2471](https://github.com/adap/flower/pull/2471), " +"[#2472](https://github.com/adap/flower/pull/2472), " +"[#2470](https://github.com/adap/flower/pull/2470))" msgstr "" -"Le tutoriel sur les fleurs comporte désormais une nouvelle section qui " -"traite de la sérialisation personnalisée : [Ouvrir dans " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial/Flower-4" -"-Client-and-NumPyClient-PyTorch.ipynb)" +"**Améliorations générales** " +"([#1491](https://github.com/adap/flower/pull/1491), " +"[#1504](https://github.com/adap/flower/pull/1504), " +"[#1506](https://github.com/adap/flower/pull/1506), " +"[#1514](https://github.com/adap/flower/pull/1514), " +"[#1522](https://github.com/adap/flower/pull/1522), " +"[#1523](https://github.com/adap/flower/pull/1523), " +"[#1526](https://github.com/adap/flower/pull/1526), " +"[#1528](https://github.com/adap/flower/pull/1528), " +"[#1547](https://github.com/adap/flower/pull/1547), " +"[#1549](https://github.com/adap/flower/pull/1549), " +"[#1560](https://github.com/adap/flower/pull/1560), " +"[#1564](https://github.com/adap/flower/pull/1564), " +"[#1566](https://github.com/adap/flower/pull/1566))" -#: ../../source/ref-changelog.md:396 +#: ../../source/ref-changelog.md:250 +#, fuzzy msgid "" -"**General improvements** " -"([#1638](https://github.com/adap/flower/pull/1638), " -"[#1634](https://github.com/adap/flower/pull/1634), " -"[#1636](https://github.com/adap/flower/pull/1636), " -"[#1635](https://github.com/adap/flower/pull/1635), " -"[#1633](https://github.com/adap/flower/pull/1633), " -"[#1632](https://github.com/adap/flower/pull/1632), " -"[#1631](https://github.com/adap/flower/pull/1631), " -"[#1630](https://github.com/adap/flower/pull/1630), " -"[#1627](https://github.com/adap/flower/pull/1627), " -"[#1593](https://github.com/adap/flower/pull/1593), " -"[#1616](https://github.com/adap/flower/pull/1616), " -"[#1615](https://github.com/adap/flower/pull/1615), " -"[#1607](https://github.com/adap/flower/pull/1607), " -"[#1609](https://github.com/adap/flower/pull/1609), " -"[#1608](https://github.com/adap/flower/pull/1608), " -"[#1603](https://github.com/adap/flower/pull/1603), " -"[#1590](https://github.com/adap/flower/pull/1590), " -"[#1580](https://github.com/adap/flower/pull/1580), " -"[#1599](https://github.com/adap/flower/pull/1599), " -"[#1600](https://github.com/adap/flower/pull/1600), " -"[#1601](https://github.com/adap/flower/pull/1601), " -"[#1597](https://github.com/adap/flower/pull/1597), " -"[#1595](https://github.com/adap/flower/pull/1595), " -"[#1591](https://github.com/adap/flower/pull/1591), " -"[#1588](https://github.com/adap/flower/pull/1588), " -"[#1589](https://github.com/adap/flower/pull/1589), " -"[#1587](https://github.com/adap/flower/pull/1587), " -"[#1573](https://github.com/adap/flower/pull/1573), " -"[#1581](https://github.com/adap/flower/pull/1581), " -"[#1578](https://github.com/adap/flower/pull/1578), " -"[#1574](https://github.com/adap/flower/pull/1574), " 
-"[#1572](https://github.com/adap/flower/pull/1572), " -"[#1586](https://github.com/adap/flower/pull/1586))" +"**General updates to the simulation engine** " +"([#2331](https://github.com/adap/flower/pull/2331), " +"[#2447](https://github.com/adap/flower/pull/2447), " +"[#2448](https://github.com/adap/flower/pull/2448), " +"[#2294](https://github.com/adap/flower/pull/2294))" msgstr "" -"**General improvements** " -"([#1638](https://github.com/adap/flower/pull/1638), " -"[#1634](https://github.com/adap/flower/pull/1634), " -"[#1636](https://github.com/adap/flower/pull/1636), " -"[#1635](https://github.com/adap/flower/pull/1635), " -"[#1633](https://github.com/adap/flower/pull/1633), " -"[#1632](https://github.com/adap/flower/pull/1632), " -"[#1631](https://github.com/adap/flower/pull/1631), " -"[#1630](https://github.com/adap/flower/pull/1630), " -"[#1627](https://github.com/adap/flower/pull/1627), " -"[#1593](https://github.com/adap/flower/pull/1593), " -"[#1616](https://github.com/adap/flower/pull/1616), " -"[#1615](https://github.com/adap/flower/pull/1615), " -"[#1607](https://github.com/adap/flower/pull/1607), " -"[#1609](https://github.com/adap/flower/pull/1609), " -"[#1608](https://github.com/adap/flower/pull/1608), " -"[#1603](https://github.com/adap/flower/pull/1603), " -"[#1590](https://github.com/adap/flower/pull/1590), " -"[#1580](https://github.com/adap/flower/pull/1580), " -"[#1599](https://github.com/adap/flower/pull/1599), " -"[#1600](https://github.com/ada" +"Mettre à jour les outils de développement " +"([#1231](https://github.com/adap/flower/pull/1231), " +"[#1276](https://github.com/adap/flower/pull/1276), " +"[#1301](https://github.com/adap/flower/pull/1301), " +"[#1310](https://github.com/adap/flower/pull/1310))" -#: ../../source/ref-changelog.md:400 +#: ../../source/ref-changelog.md:252 +#, fuzzy msgid "" -"**Updated documentation** " -"([#1629](https://github.com/adap/flower/pull/1629), " -"[#1628](https://github.com/adap/flower/pull/1628), " -"[#1620](https://github.com/adap/flower/pull/1620), " -"[#1618](https://github.com/adap/flower/pull/1618), " -"[#1617](https://github.com/adap/flower/pull/1617), " -"[#1613](https://github.com/adap/flower/pull/1613), " -"[#1614](https://github.com/adap/flower/pull/1614))" +"**General updates to Flower SDKs** " +"([#2288](https://github.com/adap/flower/pull/2288), " +"[#2429](https://github.com/adap/flower/pull/2429), " +"[#2555](https://github.com/adap/flower/pull/2555), " +"[#2543](https://github.com/adap/flower/pull/2543), " +"[#2544](https://github.com/adap/flower/pull/2544), " +"[#2597](https://github.com/adap/flower/pull/2597), " +"[#2623](https://github.com/adap/flower/pull/2623))" msgstr "" -"**Mise à jour de la documentation** " -"([#1629](https://github.com/adap/flower/pull/1629), " -"[#1628](https://github.com/adap/flower/pull/1628), " -"[#1620](https://github.com/adap/flower/pull/1620), " -"[#1618](https://github.com/adap/flower/pull/1618), " -"[#1617](https://github.com/adap/flower/pull/1617), " -"[#1613](https://github.com/adap/flower/pull/1613), " -"[#1614](https://github.com/adap/flower/pull/1614))" +"**Tutoriel amélioré** ([#1468](https://github.com/adap/flower/pull/1468)," +" [#1470](https://github.com/adap/flower/pull/1470), " +"[#1472](https://github.com/adap/flower/pull/1472), " +"[#1473](https://github.com/adap/flower/pull/1473), " +"[#1474](https://github.com/adap/flower/pull/1474), " +"[#1475](https://github.com/adap/flower/pull/1475))" -#: ../../source/ref-changelog.md:402 
../../source/ref-changelog.md:469 +#: ../../source/ref-changelog.md:254 msgid "" -"As usual, the documentation has improved quite a bit. It is another step " -"in our effort to make the Flower documentation the best documentation of " -"any project. Stay tuned and as always, feel free to provide feedback!" +"**General improvements** " +"([#2309](https://github.com/adap/flower/pull/2309), " +"[#2310](https://github.com/adap/flower/pull/2310), " +"[#2313](https://github.com/adap/flower/pull/2313), " +"[#2316](https://github.com/adap/flower/pull/2316), " +"[#2317](https://github.com/adap/flower/pull/2317), " +"[#2349](https://github.com/adap/flower/pull/2349), " +"[#2360](https://github.com/adap/flower/pull/2360), " +"[#2402](https://github.com/adap/flower/pull/2402), " +"[#2446](https://github.com/adap/flower/pull/2446), " +"[#2561](https://github.com/adap/flower/pull/2561), " +"[#2273](https://github.com/adap/flower/pull/2273), " +"[#2267](https://github.com/adap/flower/pull/2267), " +"[#2274](https://github.com/adap/flower/pull/2274), " +"[#2275](https://github.com/adap/flower/pull/2275), " +"[#2432](https://github.com/adap/flower/pull/2432), " +"[#2251](https://github.com/adap/flower/pull/2251), " +"[#2321](https://github.com/adap/flower/pull/2321), " +"[#1936](https://github.com/adap/flower/pull/1936), " +"[#2408](https://github.com/adap/flower/pull/2408), " +"[#2413](https://github.com/adap/flower/pull/2413), " +"[#2401](https://github.com/adap/flower/pull/2401), " +"[#2531](https://github.com/adap/flower/pull/2531), " +"[#2534](https://github.com/adap/flower/pull/2534), " +"[#2535](https://github.com/adap/flower/pull/2535), " +"[#2521](https://github.com/adap/flower/pull/2521), " +"[#2553](https://github.com/adap/flower/pull/2553), " +"[#2596](https://github.com/adap/flower/pull/2596))" msgstr "" -"Comme d'habitude, la documentation s'est beaucoup améliorée. C'est une " -"autre étape dans notre effort pour faire de la documentation de Flower la" -" meilleure documentation de tout projet. Reste à l'écoute et comme " -"toujours, n'hésite pas à nous faire part de tes commentaires !" -#: ../../source/ref-changelog.md:408 -msgid "v1.2.0 (2023-01-13)" -msgstr "v1.2.0 (2023-01-13)" +#: ../../source/ref-changelog.md:256 ../../source/ref-changelog.md:346 +#: ../../source/ref-changelog.md:410 ../../source/ref-changelog.md:464 +#: ../../source/ref-changelog.md:531 +msgid "Flower received many improvements under the hood, too many to list here." +msgstr "" +"Flower a reçu de nombreuses améliorations sous le capot, trop nombreuses " +"pour être énumérées ici." -#: ../../source/ref-changelog.md:414 +#: ../../source/ref-changelog.md:260 +#, fuzzy msgid "" -"`Adam Narozniak`, `Charles Beauville`, `Daniel J. Beutel`, `Edoardo`, `L." -" Jiang`, `Ragy`, `Taner Topal`, `dannymcy`" +"**Remove support for Python 3.7** " +"([#2280](https://github.com/adap/flower/pull/2280), " +"[#2299](https://github.com/adap/flower/pull/2299), " +"[#2304](https://github.com/adap/flower/pull/2304), " +"[#2306](https://github.com/adap/flower/pull/2306), " +"[#2355](https://github.com/adap/flower/pull/2355), " +"[#2356](https://github.com/adap/flower/pull/2356))" msgstr "" -"adam Narozniak`, `Charles Beauville`, `Daniel J. Beutel`, `Edoardo`, `L. 
" -"Jiang`, `Ragy`, `Taner Topal`, `dannymcy`" +"**Nouvel exemple de code MLCube** " +"([#779](https://github.com/adap/flower/pull/779), " +"[#1034](https://github.com/adap/flower/pull/1034), " +"[#1065](https://github.com/adap/flower/pull/1065), " +"[#1090](https://github.com/adap/flower/pull/1090))" -#: ../../source/ref-changelog.md:418 +#: ../../source/ref-changelog.md:262 msgid "" -"**Introduce new Flower Baseline: FedAvg MNIST** " -"([#1497](https://github.com/adap/flower/pull/1497), " -"[#1552](https://github.com/adap/flower/pull/1552))" +"Python 3.7 support was deprecated in Flower 1.5, and this release removes" +" support. Flower now requires Python 3.8." msgstr "" -"**Introduire une nouvelle fleur Référence : FedAvg MNIST** " -"([#1497](https://github.com/adap/flower/pull/1497), " -"[#1552](https://github.com/adap/flower/pull/1552))" -#: ../../source/ref-changelog.md:420 -msgid "" -"Over the coming weeks, we will be releasing a number of new reference " -"implementations useful especially to FL newcomers. They will typically " -"revisit well known papers from the literature, and be suitable for " -"integration in your own application or for experimentation, in order to " -"deepen your knowledge of FL in general. Today's release is the first in " -"this series. [Read more.](https://flower.ai/blog/2023-01-12-fl-starter-" -"pack-fedavg-mnist-cnn/)" -msgstr "" -"Au cours des prochaines semaines, nous publierons un certain nombre de " -"nouvelles implémentations de référence utiles en particulier pour les " -"nouveaux venus en FL. Elles revisiteront généralement des articles bien " -"connus de la littérature, et seront adaptées à l'intégration dans votre " -"propre application ou à l'expérimentation, afin d'approfondir votre " -"connaissance de FL en général. La publication d'aujourd'hui est la " -"première de cette série. [Lire la " -"suite.](https://flower.ai/blog/2023-01-12-fl-starter-pack-fedavg-mnist-" -"cnn/)" - -#: ../../source/ref-changelog.md:422 -msgid "" -"**Improve GPU support in simulations** " -"([#1555](https://github.com/adap/flower/pull/1555))" -msgstr "" -"**Améliorer la prise en charge des GPU dans les simulations** " -"([#1555](https://github.com/adap/flower/pull/1555))" - -#: ../../source/ref-changelog.md:424 -msgid "" -"The Ray-based Virtual Client Engine (`start_simulation`) has been updated" -" to improve GPU support. The update includes some of the hard-earned " -"lessons from scaling simulations in GPU cluster environments. New " -"defaults make running GPU-based simulations substantially more robust." -msgstr "" -"Le moteur client virtuel basé sur Ray (`start_simulation`) a été mis à " -"jour pour améliorer la prise en charge des GPU. La mise à jour inclut " -"certaines des leçons durement apprises lors de la mise à l'échelle des " -"simulations dans des environnements de grappes de GPU. De nouveaux " -"paramètres par défaut rendent l'exécution des simulations basées sur les " -"GPU beaucoup plus robuste." 
- -#: ../../source/ref-changelog.md:426 -msgid "" -"**Improve GPU support in Jupyter Notebook tutorials** " -"([#1527](https://github.com/adap/flower/pull/1527), " -"[#1558](https://github.com/adap/flower/pull/1558))" -msgstr "" -"**Améliorer la prise en charge du GPU dans les tutoriels Jupyter " -"Notebook** ([#1527](https://github.com/adap/flower/pull/1527), " -"[#1558](https://github.com/adap/flower/pull/1558))" - -#: ../../source/ref-changelog.md:428 -msgid "" -"Some users reported that Jupyter Notebooks have not always been easy to " -"use on GPU instances. We listened and made improvements to all of our " -"Jupyter notebooks! Check out the updated notebooks here:" -msgstr "" -"Certains utilisateurs ont signalé que les carnets Jupyter n'ont pas " -"toujours été faciles à utiliser sur les instances GPU. Nous les avons " -"écoutés et avons apporté des améliorations à tous nos carnets Jupyter ! " -"Découvre les carnets mis à jour ici :" - -#: ../../source/ref-changelog.md:430 +#: ../../source/ref-changelog.md:264 #, fuzzy msgid "" -"[An Introduction to Federated Learning](https://flower.ai/docs/framework" -"/tutorial-get-started-with-flower-pytorch.html)" +"**Remove experimental argument** `rest` **from** `start_client` " +"([#2324](https://github.com/adap/flower/pull/2324))" msgstr "" -"[Une introduction à l'apprentissage fédéré] " -"(https://flower.ai/docs/tutorial/Flower-1-Intro-to-FL-PyTorch.html)" +"**Supprimer les stratégies expérimentales** " +"([#1280](https://github.com/adap/flower/pull/1280))" -#: ../../source/ref-changelog.md:431 -#, fuzzy +#: ../../source/ref-changelog.md:266 msgid "" -"[Strategies in Federated Learning](https://flower.ai/docs/framework" -"/tutorial-use-a-federated-learning-strategy-pytorch.html)" +"The (still experimental) argument `rest` was removed from `start_client` " +"and `start_numpy_client`. Use `transport=\"rest\"` to opt into the " +"experimental REST API instead." msgstr "" -"[Stratégies d'apprentissage fédéré] " -"(https://flower.ai/docs/tutorial/Flower-2-Strategies-in-FL-PyTorch.html)" -#: ../../source/ref-changelog.md:432 +#: ../../source/ref-changelog.md:268 #, fuzzy -msgid "" -"[Building a Strategy](https://flower.ai/docs/framework/tutorial-build-a" -"-strategy-from-scratch-pytorch.html)" -msgstr "" -"[Construire une stratégie] " -"(https://flower.ai/docs/tutorial/Flower-3-Building-a-Strategy-" -"PyTorch.html)" +msgid "v1.5.0 (2023-08-31)" +msgstr "v1.4.0 (2023-04-21)" -#: ../../source/ref-changelog.md:433 -#, fuzzy +#: ../../source/ref-changelog.md:274 msgid "" -"[Client and NumPyClient](https://flower.ai/docs/framework/tutorial-" -"customize-the-client-pytorch.html)" +"`Adam Narozniak`, `Anass Anhari`, `Charles Beauville`, `Dana-Farber`, " +"`Daniel J. 
Beutel`, `Daniel Nata Nugraha`, `Edoardo Gabrielli`, `Gustavo " +"Bertoli`, `Heng Pan`, `Javier`, `Mahdi`, `Steven Hé (Sīchàng)`, `Taner " +"Topal`, `achiverram28`, `danielnugraha`, `eunchung`, `ruthgal` " msgstr "" -"[Client et NumPyClient] (https://flower.ai/docs/tutorial/Flower-4-Client-" -"and-NumPyClient-PyTorch.html)" -#: ../../source/ref-changelog.md:435 +#: ../../source/ref-changelog.md:278 +#, fuzzy msgid "" -"**Introduce optional telemetry** " -"([#1533](https://github.com/adap/flower/pull/1533), " -"[#1544](https://github.com/adap/flower/pull/1544), " -"[#1584](https://github.com/adap/flower/pull/1584))" +"**Introduce new simulation engine** " +"([#1969](https://github.com/adap/flower/pull/1969), " +"[#2221](https://github.com/adap/flower/pull/2221), " +"[#2248](https://github.com/adap/flower/pull/2248))" msgstr "" "**Introduire la télémétrie optionnelle** " "([#1533](https://github.com/adap/flower/pull/1533), " "[#1544](https://github.com/adap/flower/pull/1544), " "[#1584](https://github.com/adap/flower/pull/1584))" -#: ../../source/ref-changelog.md:437 +#: ../../source/ref-changelog.md:280 msgid "" -"After a [request for " -"feedback](https://github.com/adap/flower/issues/1534) from the community," -" the Flower open-source project introduces optional collection of " -"*anonymous* usage metrics to make well-informed decisions to improve " -"Flower. Doing this enables the Flower team to understand how Flower is " -"used and what challenges users might face." +"The new simulation engine has been rewritten from the ground up, yet it " +"remains fully backwards compatible. It offers much improved stability and" +" memory handling, especially when working with GPUs. Simulations " +"transparently adapt to different settings to scale simulation in CPU-" +"only, CPU+GPU, multi-GPU, or multi-node multi-GPU environments." msgstr "" -"À la suite d'une [demande de commentaires] " -"(https://github.com/adap/flower/issues/1534) de la part de la communauté," -" le projet open-source Flower introduit la collecte optionnelle de " -"mesures d'utilisation *anonymes* afin de prendre des décisions éclairées " -"pour améliorer Flower. Cela permet à l'équipe de Flower de comprendre " -"comment Flower est utilisé et quels sont les défis auxquels les " -"utilisateurs peuvent être confrontés." -#: ../../source/ref-changelog.md:439 -#, fuzzy +#: ../../source/ref-changelog.md:282 msgid "" -"**Flower is a friendly framework for collaborative AI and data science.**" -" Staying true to this statement, Flower makes it easy to disable " -"telemetry for users who do not want to share anonymous usage metrics. " -"[Read more.](https://flower.ai/docs/telemetry.html)." +"Comprehensive documentation includes a new [how-to run " +"simulations](https://flower.ai/docs/framework/how-to-run-" +"simulations.html) guide, new [simulation-" +"pytorch](https://flower.ai/docs/examples/simulation-pytorch.html) and " +"[simulation-tensorflow](https://flower.ai/docs/examples/simulation-" +"tensorflow.html) notebooks, and a new [YouTube tutorial " +"series](https://www.youtube.com/watch?v=cRebUIGB5RU&list=PLNG4feLHqCWlnj8a_E1A_n5zr2-8pafTB)." msgstr "" -"**Flower est un cadre convivial pour l'IA collaborative et la science des" -" données.** Restant fidèle à cette déclaration, Flower permet de " -"désactiver facilement la télémétrie pour les utilisateurs qui ne " -"souhaitent pas partager des métriques d'utilisation anonymes.[Lire la " -"suite.](https://flower.ai/docs/telemetry.html)." 
-#: ../../source/ref-changelog.md:441 +#: ../../source/ref-changelog.md:284 msgid "" -"**Introduce (experimental) Driver API** " -"([#1520](https://github.com/adap/flower/pull/1520), " -"[#1525](https://github.com/adap/flower/pull/1525), " -"[#1545](https://github.com/adap/flower/pull/1545), " -"[#1546](https://github.com/adap/flower/pull/1546), " -"[#1550](https://github.com/adap/flower/pull/1550), " -"[#1551](https://github.com/adap/flower/pull/1551), " -"[#1567](https://github.com/adap/flower/pull/1567))" +"**Restructure Flower Docs** " +"([#1824](https://github.com/adap/flower/pull/1824), " +"[#1865](https://github.com/adap/flower/pull/1865), " +"[#1884](https://github.com/adap/flower/pull/1884), " +"[#1887](https://github.com/adap/flower/pull/1887), " +"[#1919](https://github.com/adap/flower/pull/1919), " +"[#1922](https://github.com/adap/flower/pull/1922), " +"[#1920](https://github.com/adap/flower/pull/1920), " +"[#1923](https://github.com/adap/flower/pull/1923), " +"[#1924](https://github.com/adap/flower/pull/1924), " +"[#1962](https://github.com/adap/flower/pull/1962), " +"[#2006](https://github.com/adap/flower/pull/2006), " +"[#2133](https://github.com/adap/flower/pull/2133), " +"[#2203](https://github.com/adap/flower/pull/2203), " +"[#2215](https://github.com/adap/flower/pull/2215), " +"[#2122](https://github.com/adap/flower/pull/2122), " +"[#2223](https://github.com/adap/flower/pull/2223), " +"[#2219](https://github.com/adap/flower/pull/2219), " +"[#2232](https://github.com/adap/flower/pull/2232), " +"[#2233](https://github.com/adap/flower/pull/2233), " +"[#2234](https://github.com/adap/flower/pull/2234), " +"[#2235](https://github.com/adap/flower/pull/2235), " +"[#2237](https://github.com/adap/flower/pull/2237), " +"[#2238](https://github.com/adap/flower/pull/2238), " +"[#2242](https://github.com/adap/flower/pull/2242), " +"[#2231](https://github.com/adap/flower/pull/2231), " +"[#2243](https://github.com/adap/flower/pull/2243), " +"[#2227](https://github.com/adap/flower/pull/2227))" msgstr "" -"**([#1520](https://github.com/adap/flower/pull/1520), " -"[#1525](https://github.com/adap/flower/pull/1525), " -"[#1545](https://github.com/adap/flower/pull/1545), " -"[#1546](https://github.com/adap/flower/pull/1546), " -"[#1550](https://github.com/adap/flower/pull/1550), " -"[#1551](https://github.com/adap/flower/pull/1551), " -"[#1567](https://github.com/adap/flower/pull/1567))" -#: ../../source/ref-changelog.md:443 +#: ../../source/ref-changelog.md:286 msgid "" -"Flower now has a new (experimental) Driver API which will enable fully " -"programmable, async, and multi-tenant Federated Learning and Federated " -"Analytics applications. Phew, that's a lot! Going forward, the Driver API" -" will be the abstraction that many upcoming features will be built on - " -"and you can start building those things now, too." +"Much effort went into a completely restructured Flower docs experience. " +"The documentation on [flower.ai/docs](https://flower.ai/docs) is now " +"divided into Flower Framework, Flower Baselines, Flower Android SDK, " +"Flower iOS SDK, and code example projects." msgstr "" -"Flower dispose désormais d'une nouvelle API de pilote (expérimentale) qui" -" permettra de créer des applications Federated Learning et Federated " -"Analytics entièrement programmables, asynchrones et multi-tenant. Ouf, " -"c'est beaucoup ! 
À l'avenir, l'API de pilote sera l'abstraction sur " -"laquelle de nombreuses fonctionnalités à venir seront construites - et tu" -" peux commencer à construire ces choses dès maintenant, aussi." -#: ../../source/ref-changelog.md:445 +#: ../../source/ref-changelog.md:288 +#, fuzzy msgid "" -"The Driver API also enables a new execution mode in which the server runs" -" indefinitely. Multiple individual workloads can run concurrently and " -"start and stop their execution independent of the server. This is " -"especially useful for users who want to deploy Flower in production." +"**Introduce Flower Swift SDK** " +"([#1858](https://github.com/adap/flower/pull/1858), " +"[#1897](https://github.com/adap/flower/pull/1897))" msgstr "" -"L'API du pilote permet également un nouveau mode d'exécution dans lequel " -"le serveur s'exécute indéfiniment. Plusieurs charges de travail " -"individuelles peuvent s'exécuter simultanément et démarrer et arrêter " -"leur exécution indépendamment du serveur. Ceci est particulièrement utile" -" pour les utilisateurs qui souhaitent déployer Flower en production." +"**Introduction du SDK iOS (aperçu)** " +"([#1621](https://github.com/adap/flower/pull/1621), " +"[#1764](https://github.com/adap/flower/pull/1764))" -#: ../../source/ref-changelog.md:447 +#: ../../source/ref-changelog.md:290 msgid "" -"To learn more, check out the `mt-pytorch` code example. We look forward " -"to you feedback!" +"This is the first preview release of the Flower Swift SDK. Flower support" +" on iOS is improving, and alongside the Swift SDK and code example, there" +" is now also an iOS quickstart tutorial." msgstr "" -"Pour en savoir plus, consulte l'exemple de code `mt-pytorch`. Nous " -"attendons tes commentaires avec impatience !" -#: ../../source/ref-changelog.md:449 +#: ../../source/ref-changelog.md:292 +#, fuzzy msgid "" -"Please note: *The Driver API is still experimental and will likely change" -" significantly over time.*" +"**Introduce Flower Android SDK** " +"([#2131](https://github.com/adap/flower/pull/2131))" msgstr "" -"Remarque : *L'API du pilote est encore expérimentale et est susceptible " -"de changer de manière significative au fil du temps.*" +"**Introduire une nouvelle ligne de base pour les fleurs : FedAvg " +"FEMNIST** ([#1655](https://github.com/adap/flower/pull/1655))" -#: ../../source/ref-changelog.md:451 +#: ../../source/ref-changelog.md:294 msgid "" -"**Add new Federated Analytics with Pandas example** " -"([#1469](https://github.com/adap/flower/pull/1469), " -"[#1535](https://github.com/adap/flower/pull/1535))" +"This is the first preview release of the Flower Kotlin SDK. Flower " +"support on Android is improving, and alongside the Kotlin SDK and code " +"example, there is now also an Android quickstart tutorial." msgstr "" -"**Ajouter un nouvel exemple de Federated Analytics avec Pandas** " -"([#1469](https://github.com/adap/flower/pull/1469), " -"[#1535](https://github.com/adap/flower/pull/1535))" -#: ../../source/ref-changelog.md:453 +#: ../../source/ref-changelog.md:296 +#, fuzzy msgid "" -"A new code example (`quickstart-pandas`) demonstrates federated analytics" -" with Pandas and Flower. You can find it here: [quickstart-" -"pandas](https://github.com/adap/flower/tree/main/examples/quickstart-" -"pandas)." -msgstr "" -"Un nouvel exemple de code (`quickstart-pandas`) démontre l'analyse " -"fédérée avec Pandas et Flower. Tu peux le trouver ici : [quickstart-" -"pandas](https://github.com/adap/flower/tree/main/examples/quickstart-" -"pandas)." 
+"**Introduce new end-to-end testing infrastructure** " +"([#1842](https://github.com/adap/flower/pull/1842), " +"[#2071](https://github.com/adap/flower/pull/2071), " +"[#2072](https://github.com/adap/flower/pull/2072), " +"[#2068](https://github.com/adap/flower/pull/2068), " +"[#2067](https://github.com/adap/flower/pull/2067), " +"[#2069](https://github.com/adap/flower/pull/2069), " +"[#2073](https://github.com/adap/flower/pull/2073), " +"[#2070](https://github.com/adap/flower/pull/2070), " +"[#2074](https://github.com/adap/flower/pull/2074), " +"[#2082](https://github.com/adap/flower/pull/2082), " +"[#2084](https://github.com/adap/flower/pull/2084), " +"[#2093](https://github.com/adap/flower/pull/2093), " +"[#2109](https://github.com/adap/flower/pull/2109), " +"[#2095](https://github.com/adap/flower/pull/2095), " +"[#2140](https://github.com/adap/flower/pull/2140), " +"[#2137](https://github.com/adap/flower/pull/2137), " +"[#2165](https://github.com/adap/flower/pull/2165))" +msgstr "" +"**Améliorer l'API (expérimentale) du pilote** " +"([#1663](https://github.com/adap/flower/pull/1663), " +"[#1666](https://github.com/adap/flower/pull/1666), " +"[#1667](https://github.com/adap/flower/pull/1667), " +"[#1664](https://github.com/adap/flower/pull/1664), " +"[#1675](https://github.com/adap/flower/pull/1675), " +"[#1676](https://github.com/adap/flower/pull/1676), " +"[#1693](https://github.com/adap/flower/pull/1693), " +"[#1662](https://github.com/adap/flower/pull/1662), " +"[#1794](https://github.com/adap/flower/pull/1794))" -#: ../../source/ref-changelog.md:455 +#: ../../source/ref-changelog.md:298 msgid "" -"**Add new strategies: Krum and MultiKrum** " -"([#1481](https://github.com/adap/flower/pull/1481))" +"A new testing infrastructure ensures that new changes stay compatible " +"with existing framework integrations or strategies." msgstr "" -"**Ajouter de nouvelles stratégies : Krum et MultiKrum** " -"([#1481](https://github.com/adap/flower/pull/1481))" -#: ../../source/ref-changelog.md:457 +#: ../../source/ref-changelog.md:300 +#, fuzzy +msgid "**Deprecate Python 3.7**" +msgstr "**Créer le PR**" + +#: ../../source/ref-changelog.md:302 msgid "" -"Edoardo, a computer science student at the Sapienza University of Rome, " -"contributed a new `Krum` strategy that enables users to easily use Krum " -"and MultiKrum in their workloads." +"Since Python 3.7 reached its end of life (EOL) on 2023-06-27, support for" +" Python 3.7 is now deprecated and will be removed in an upcoming release." msgstr "" -"Edoardo, étudiant en informatique à l'Université Sapienza de Rome, a " -"contribué à une nouvelle stratégie `Krum` qui permet aux utilisateurs " -"d'utiliser facilement Krum et MultiKrum dans leurs charges de travail." 
-#: ../../source/ref-changelog.md:459 +#: ../../source/ref-changelog.md:304 +#, fuzzy msgid "" -"**Update C++ example to be compatible with Flower v1.2.0** " -"([#1495](https://github.com/adap/flower/pull/1495))" +"**Add new** `FedTrimmedAvg` **strategy** " +"([#1769](https://github.com/adap/flower/pull/1769), " +"[#1853](https://github.com/adap/flower/pull/1853))" msgstr "" -"**Mettre à jour l'exemple C++ pour qu'il soit compatible avec Flower " -"v1.2.0** ([#1495](https://github.com/adap/flower/pull/1495))" +"**Ajouter un nouvel exemple de Federated Analytics avec Pandas** " +"([#1469](https://github.com/adap/flower/pull/1469), " +"[#1535](https://github.com/adap/flower/pull/1535))" -#: ../../source/ref-changelog.md:461 +#: ../../source/ref-changelog.md:306 +#, fuzzy msgid "" -"The C++ code example has received a substantial update to make it " -"compatible with the latest version of Flower." +"The new `FedTrimmedAvg` strategy implements Trimmed Mean by [Dong Yin, " +"2018](https://arxiv.org/abs/1803.01498)." msgstr "" -"L'exemple de code C++ a reçu une mise à jour substantielle pour le rendre" -" compatible avec la dernière version de Flower." +"La nouvelle stratégie `FedMedian` met en œuvre Federated Median " +"(FedMedian) par [Yin et al., 2018] " +"(https://arxiv.org/pdf/1803.01498v1.pdf)." -#: ../../source/ref-changelog.md:463 +#: ../../source/ref-changelog.md:308 +#, fuzzy msgid "" -"**General improvements** " -"([#1491](https://github.com/adap/flower/pull/1491), " -"[#1504](https://github.com/adap/flower/pull/1504), " -"[#1506](https://github.com/adap/flower/pull/1506), " -"[#1514](https://github.com/adap/flower/pull/1514), " -"[#1522](https://github.com/adap/flower/pull/1522), " -"[#1523](https://github.com/adap/flower/pull/1523), " -"[#1526](https://github.com/adap/flower/pull/1526), " -"[#1528](https://github.com/adap/flower/pull/1528), " -"[#1547](https://github.com/adap/flower/pull/1547), " -"[#1549](https://github.com/adap/flower/pull/1549), " -"[#1560](https://github.com/adap/flower/pull/1560), " -"[#1564](https://github.com/adap/flower/pull/1564), " -"[#1566](https://github.com/adap/flower/pull/1566))" +"**Introduce start_driver** " +"([#1697](https://github.com/adap/flower/pull/1697))" msgstr "" -"**Améliorations générales** " -"([#1491](https://github.com/adap/flower/pull/1491), " -"[#1504](https://github.com/adap/flower/pull/1504), " -"[#1506](https://github.com/adap/flower/pull/1506), " -"[#1514](https://github.com/adap/flower/pull/1514), " -"[#1522](https://github.com/adap/flower/pull/1522), " -"[#1523](https://github.com/adap/flower/pull/1523), " -"[#1526](https://github.com/adap/flower/pull/1526), " -"[#1528](https://github.com/adap/flower/pull/1528), " -"[#1547](https://github.com/adap/flower/pull/1547), " -"[#1549](https://github.com/adap/flower/pull/1549), " -"[#1560](https://github.com/adap/flower/pull/1560), " -"[#1564](https://github.com/adap/flower/pull/1564), " -"[#1566](https://github.com/adap/flower/pull/1566))" +"**Ajouter une nouvelle stratégie `FedProx`** " +"([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:467 +#: ../../source/ref-changelog.md:310 msgid "" -"**Updated documentation** " -"([#1494](https://github.com/adap/flower/pull/1494), " -"[#1496](https://github.com/adap/flower/pull/1496), " -"[#1500](https://github.com/adap/flower/pull/1500), " -"[#1503](https://github.com/adap/flower/pull/1503), " -"[#1505](https://github.com/adap/flower/pull/1505), " -"[#1524](https://github.com/adap/flower/pull/1524), " 
-"[#1518](https://github.com/adap/flower/pull/1518), " -"[#1519](https://github.com/adap/flower/pull/1519), " -"[#1515](https://github.com/adap/flower/pull/1515))" +"In addition to `start_server` and using the raw Driver API, there is a " +"new `start_driver` function that allows for running `start_server` " +"scripts as a Flower driver with only a single-line code change. Check out" +" the `mt-pytorch` code example to see a working example using " +"`start_driver`." msgstr "" -"**Documentation mise à jour** " -"([#1494](https://github.com/adap/flower/pull/1494), " -"[#1496](https://github.com/adap/flower/pull/1496), " -"[#1500](https://github.com/adap/flower/pull/1500), " -"[#1503](https://github.com/adap/flower/pull/1503), " -"[#1505](https://github.com/adap/flower/pull/1505), " -"[#1524](https://github.com/adap/flower/pull/1524), " -"[#1518](https://github.com/adap/flower/pull/1518), " -"[#1519](https://github.com/adap/flower/pull/1519), " -"[#1515](https://github.com/adap/flower/pull/1515))" -#: ../../source/ref-changelog.md:471 +#: ../../source/ref-changelog.md:312 +#, fuzzy msgid "" -"One highlight is the new [first time contributor " -"guide](https://flower.ai/docs/first-time-contributors.html): if you've " -"never contributed on GitHub before, this is the perfect place to start!" +"**Add parameter aggregation to** `mt-pytorch` **code example** " +"([#1785](https://github.com/adap/flower/pull/1785))" msgstr "" -"L'un des points forts est le nouveau [guide du premier contributeur] " -"(https://flower.ai/docs/first-time-contributors.html) : si tu n'as jamais" -" contribué sur GitHub auparavant, c'est l'endroit idéal pour commencer !" +"**Nouvel exemple de code PyTorch avancé** " +"([#1007](https://github.com/adap/flower/pull/1007))" -#: ../../source/ref-changelog.md:477 -msgid "v1.1.0 (2022-10-31)" -msgstr "v1.1.0 (2022-10-31)" +#: ../../source/ref-changelog.md:314 +msgid "" +"The `mt-pytorch` example shows how to aggregate parameters when writing a" +" driver script. The included `driver.py` and `server.py` have been " +"aligned to demonstrate both the low-level way and the high-level way of " +"building server-side logic." +msgstr "" -#: ../../source/ref-changelog.md:481 +#: ../../source/ref-changelog.md:316 +#, fuzzy msgid "" -"We would like to give our **special thanks** to all the contributors who " -"made the new version of Flower possible (in `git shortlog` order):" +"**Migrate experimental REST API to Starlette** " +"([2171](https://github.com/adap/flower/pull/2171))" msgstr "" -"Nous aimerions **remercier tout particulièrement** tous les contributeurs" -" qui ont rendu possible la nouvelle version de Flower (dans l'ordre `git " -"shortlog`) :" +"**Nouvelle stratégie expérimentale TensorBoard** " +"([#789](https://github.com/adap/flower/pull/789))" -#: ../../source/ref-changelog.md:483 +#: ../../source/ref-changelog.md:318 msgid "" -"`Akis Linardos`, `Christopher S`, `Daniel J. Beutel`, `George`, `Jan " -"Schlicht`, `Mohammad Fares`, `Pedro Porto Buarque de Gusmão`, `Philipp " -"Wiesner`, `Rob Luke`, `Taner Topal`, `VasundharaAgarwal`, " -"`danielnugraha`, `edogab33`" +"The (experimental) REST API used to be implemented in " +"[FastAPI](https://fastapi.tiangolo.com/), but it has now been migrated to" +" use [Starlette](https://www.starlette.io/) directly." msgstr "" -"`Akis Linardos`, `Christopher S`, `Daniel J. 
Beutel`, `George`, `Jan " -"Schlicht`, `Mohammad Fares`, `Pedro Porto Buarque de Gusmão`, `Philipp " -"Wiesner`, `Rob Luke`, `Taner Topal`, `VasundharaAgarwal`, " -"`danielnugraha`, `edogab33`" -#: ../../source/ref-changelog.md:487 +#: ../../source/ref-changelog.md:320 +#, fuzzy msgid "" -"**Introduce Differential Privacy wrappers (preview)** " -"([#1357](https://github.com/adap/flower/pull/1357), " -"[#1460](https://github.com/adap/flower/pull/1460))" +"Please note: The REST request-response API is still experimental and will" +" likely change significantly over time." +msgstr "" +"Remarque : l'API REST est encore expérimentale et est susceptible de " +"changer de manière significative au fil du temps." + +#: ../../source/ref-changelog.md:322 +#, fuzzy +msgid "" +"**Introduce experimental gRPC request-response API** " +"([#1867](https://github.com/adap/flower/pull/1867), " +"[#1901](https://github.com/adap/flower/pull/1901))" msgstr "" "**Introduire les enveloppes de confidentialité différentielle (aperçu)** " "([#1357](https://github.com/adap/flower/pull/1357), " "[#1460](https://github.com/adap/flower/pull/1460))" -#: ../../source/ref-changelog.md:489 +#: ../../source/ref-changelog.md:324 msgid "" -"The first (experimental) preview of pluggable Differential Privacy " -"wrappers enables easy configuration and usage of differential privacy " -"(DP). The pluggable DP wrappers enable framework-agnostic **and** " -"strategy-agnostic usage of both client-side DP and server-side DP. Head " -"over to the Flower docs, a new explainer goes into more detail." +"In addition to the existing gRPC API (based on bidirectional streaming) " +"and the experimental REST API, there is now a new gRPC API that uses a " +"request-response model to communicate with client nodes." msgstr "" -"Le premier aperçu (expérimental) des wrappers enfichables de " -"confidentialité différentielle permet de configurer et d'utiliser " -"facilement la confidentialité différentielle (DP). Les wrappers DP " -"enfichables permettent une utilisation agnostique du cadre **et** de la " -"stratégie à la fois de la DP côté client et de la DP côté serveur. Va " -"voir les documents de Flower, un nouvel explicatif va plus loin dans les " -"détails." -#: ../../source/ref-changelog.md:491 +#: ../../source/ref-changelog.md:326 +#, fuzzy msgid "" -"**New iOS CoreML code example** " -"([#1289](https://github.com/adap/flower/pull/1289))" +"Please note: The gRPC request-response API is still experimental and will" +" likely change significantly over time." msgstr "" -"**Nouvel exemple de code CoreML pour iOS** " -"([#1289](https://github.com/adap/flower/pull/1289))" +"Remarque : l'API REST est encore expérimentale et est susceptible de " +"changer de manière significative au fil du temps." -#: ../../source/ref-changelog.md:493 +#: ../../source/ref-changelog.md:328 +#, fuzzy msgid "" -"Flower goes iOS! A massive new code example shows how Flower clients can " -"be built for iOS. The code example contains both Flower iOS SDK " -"components that can be used for many tasks, and one task example running " -"on CoreML." +"**Replace the experimental** `start_client(rest=True)` **with the new** " +"`start_client(transport=\"rest\")` " +"([#1880](https://github.com/adap/flower/pull/1880))" msgstr "" -"Flower passe à iOS ! Un nouvel exemple de code massif montre comment les " -"clients Flower peuvent être construits pour iOS. 
L'exemple de code " -"contient à la fois des composants Flower iOS SDK qui peuvent être " -"utilisés pour de nombreuses tâches, et un exemple de tâche fonctionnant " -"sur CoreML." +"**Initialise** `start_simulation` **avec une liste d'ID de clients** " +"([#860](https://github.com/adap/flower/pull/860))" -#: ../../source/ref-changelog.md:495 +#: ../../source/ref-changelog.md:330 msgid "" -"**New FedMedian strategy** " -"([#1461](https://github.com/adap/flower/pull/1461))" +"The (experimental) `start_client` argument `rest` was deprecated in " +"favour of a new argument `transport`. `start_client(transport=\"rest\")` " +"will yield the same behaviour as `start_client(rest=True)` did before. " +"All code should migrate to the new argument `transport`. The deprecated " +"argument `rest` will be removed in a future release." msgstr "" -"**Nouvelle stratégie de FedMedian** " -"([#1461](https://github.com/adap/flower/pull/1461))" -#: ../../source/ref-changelog.md:497 +#: ../../source/ref-changelog.md:332 +#, fuzzy msgid "" -"The new `FedMedian` strategy implements Federated Median (FedMedian) by " -"[Yin et al., 2018](https://arxiv.org/pdf/1803.01498v1.pdf)." +"**Add a new gRPC option** " +"([#2197](https://github.com/adap/flower/pull/2197))" msgstr "" -"La nouvelle stratégie `FedMedian` met en œuvre Federated Median " -"(FedMedian) par [Yin et al., 2018] " -"(https://arxiv.org/pdf/1803.01498v1.pdf)." +"**Ajouter une nouvelle stratégie `FedProx`** " +"([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:499 +#: ../../source/ref-changelog.md:334 msgid "" -"**Log** `Client` **exceptions in Virtual Client Engine** " -"([#1493](https://github.com/adap/flower/pull/1493))" +"We now start a gRPC server with the `grpc.keepalive_permit_without_calls`" +" option set to 0 by default. This prevents the clients from sending " +"keepalive pings when there is no outstanding stream." msgstr "" -"**Log** `Client` **exceptions dans le moteur de client virtuel** " -"([#1493](https://github.com/adap/flower/pull/1493))" -#: ../../source/ref-changelog.md:501 +#: ../../source/ref-changelog.md:336 +#, fuzzy msgid "" -"All `Client` exceptions happening in the VCE are now logged by default " -"and not just exposed to the configured `Strategy` (via the `failures` " -"argument)." +"**Improve example notebooks** " +"([#2005](https://github.com/adap/flower/pull/2005))" msgstr "" -"Toutes les exceptions `Client` qui se produisent dans le VCE sont " -"maintenant enregistrées par défaut et ne sont pas seulement exposées à la" -" `Stratégie` configurée (via l'argument `failures`)." +"**Supprimer les stratégies expérimentales** " +"([#1280](https://github.com/adap/flower/pull/1280))" -#: ../../source/ref-changelog.md:503 -msgid "" -"**Improve Virtual Client Engine internals** " -"([#1401](https://github.com/adap/flower/pull/1401), " -"[#1453](https://github.com/adap/flower/pull/1453))" -msgstr "" -"**Améliorer le moteur du client virtuel** " -"([#1401](https://github.com/adap/flower/pull/1401), " -"[#1453](https://github.com/adap/flower/pull/1453))" +#: ../../source/ref-changelog.md:338 +#, fuzzy +msgid "There's a new 30min Federated Learning PyTorch tutorial!" +msgstr "Bienvenue au tutoriel sur l'apprentissage fédéré de la fleur !" -#: ../../source/ref-changelog.md:505 +#: ../../source/ref-changelog.md:340 msgid "" -"Some internals of the Virtual Client Engine have been revamped. 
The VCE " -"now uses Ray 2.0 under the hood, the value type of the `client_resources`" -" dictionary changed to `float` to allow fractions of resources to be " -"allocated." +"**Example updates** ([#1772](https://github.com/adap/flower/pull/1772), " +"[#1873](https://github.com/adap/flower/pull/1873), " +"[#1981](https://github.com/adap/flower/pull/1981), " +"[#1988](https://github.com/adap/flower/pull/1988), " +"[#1984](https://github.com/adap/flower/pull/1984), " +"[#1982](https://github.com/adap/flower/pull/1982), " +"[#2112](https://github.com/adap/flower/pull/2112), " +"[#2144](https://github.com/adap/flower/pull/2144), " +"[#2174](https://github.com/adap/flower/pull/2174), " +"[#2225](https://github.com/adap/flower/pull/2225), " +"[#2183](https://github.com/adap/flower/pull/2183))" msgstr "" -"Le VCE utilise maintenant Ray 2.0 sous le capot, le type de valeur du " -"dictionnaire `client_resources` a été remplacé par `float` pour permettre" -" l'allocation de fractions de ressources." -#: ../../source/ref-changelog.md:507 +#: ../../source/ref-changelog.md:342 msgid "" -"**Support optional** `Client`**/**`NumPyClient` **methods in Virtual " -"Client Engine**" +"Many examples have received significant updates, including simplified " +"advanced-tensorflow and advanced-pytorch examples, improved macOS " +"compatibility of TensorFlow examples, and code examples for simulation. A" +" major upgrade is that all code examples now have a `requirements.txt` " +"(in addition to `pyproject.toml`)." msgstr "" -"**Support optional** `Client`**/**`NumPyClient` **methods in Virtual " -"Client Engine**" -#: ../../source/ref-changelog.md:509 +#: ../../source/ref-changelog.md:344 +#, fuzzy msgid "" -"The Virtual Client Engine now has full support for optional `Client` (and" -" `NumPyClient`) methods." +"**General improvements** " +"([#1872](https://github.com/adap/flower/pull/1872), " +"[#1866](https://github.com/adap/flower/pull/1866), " +"[#1884](https://github.com/adap/flower/pull/1884), " +"[#1837](https://github.com/adap/flower/pull/1837), " +"[#1477](https://github.com/adap/flower/pull/1477), " +"[#2171](https://github.com/adap/flower/pull/2171))" msgstr "" -"Le moteur de client virtuel prend désormais en charge les méthodes " -"optionnelles `Client` (et `NumPyClient`)." +"**Mise à jour de la documentation** " +"([#1629](https://github.com/adap/flower/pull/1629), " +"[#1628](https://github.com/adap/flower/pull/1628), " +"[#1620](https://github.com/adap/flower/pull/1620), " +"[#1618](https://github.com/adap/flower/pull/1618), " +"[#1617](https://github.com/adap/flower/pull/1617), " +"[#1613](https://github.com/adap/flower/pull/1613), " +"[#1614](https://github.com/adap/flower/pull/1614))" -#: ../../source/ref-changelog.md:511 +#: ../../source/ref-changelog.md:352 +msgid "v1.4.0 (2023-04-21)" +msgstr "v1.4.0 (2023-04-21)" + +#: ../../source/ref-changelog.md:358 msgid "" -"**Provide type information to packages using** `flwr` " -"([#1377](https://github.com/adap/flower/pull/1377))" +"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " +"`Chenyang Ma (Danny)`, `Daniel J. Beutel`, `Edoardo`, `Gautam Jajoo`, " +"`Iacob-Alexandru-Andrei`, `JDRanpariya`, `Jean Charle Yaacoub`, `Kunal " +"Sarkhel`, `L. 
Jiang`, `Lennart Behme`, `Max Kapsecker`, `Michał`, `Nic " +"Lane`, `Nikolaos Episkopos`, `Ragy`, `Saurav Maheshkar`, `Semo Yang`, " +"`Steve Laskaridis`, `Steven Hé (Sīchàng)`, `Taner Topal`" msgstr "" -"**Fournir des informations de type aux paquets en utilisant** `flwr` " -"([#1377](https://github.com/adap/flower/pull/1377))" +"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " +"`Chenyang Ma (Danny)`, `Daniel J. Beutel`, `Edoardo`, `Gautam Jajoo`, " +"`Iacob-Alexandru-Andrei`, `JDRanpariya`, `Jean Charle Yaacoub`, `Kunal " +"Sarkhel`, `L. Jiang`, `Lennart Behme`, `Max Kapsecker`, `Michał`, `Nic " +"Lane`, `Nikolaos Episkopos`, `Ragy`, `Saurav Maheshkar`, `Semo Yang`, " +"`Steve Laskaridis`, `Steven Hé (Sīchàng)`, `Taner Topal`" -#: ../../source/ref-changelog.md:513 +#: ../../source/ref-changelog.md:362 msgid "" -"The package `flwr` is now bundled with a `py.typed` file indicating that " -"the package is typed. This enables typing support for projects or " -"packages that use `flwr` by enabling them to improve their code using " -"static type checkers like `mypy`." +"**Introduce support for XGBoost (**`FedXgbNnAvg` **strategy and " +"example)** ([#1694](https://github.com/adap/flower/pull/1694), " +"[#1709](https://github.com/adap/flower/pull/1709), " +"[#1715](https://github.com/adap/flower/pull/1715), " +"[#1717](https://github.com/adap/flower/pull/1717), " +"[#1763](https://github.com/adap/flower/pull/1763), " +"[#1795](https://github.com/adap/flower/pull/1795))" msgstr "" -"Le paquet `flwr` est maintenant accompagné d'un fichier `py.typed` " -"indiquant que le paquet est typé. Cela permet de prendre en charge le " -"typage pour les projets ou les paquets qui utilisent `flwr` en leur " -"permettant d'améliorer leur code à l'aide de vérificateurs de types " -"statiques comme `mypy`." +"**Introduire la prise en charge de XGBoost (**`FedXgbNnAvg` **stratégie " +"et exemple)** ([#1694](https://github.com/adap/flower/pull/1694), " +"[#1709](https://github.com/adap/flower/pull/1709), " +"[#1715](https://github.com/adap/flower/pull/1715), " +"[#1717](https://github.com/adap/flower/pull/1717), " +"[#1763](https://github.com/adap/flower/pull/1763), " +"[#1795](https://github.com/adap/flower/pull/1795))" -#: ../../source/ref-changelog.md:515 +#: ../../source/ref-changelog.md:364 msgid "" -"**Updated code example** " -"([#1344](https://github.com/adap/flower/pull/1344), " -"[#1347](https://github.com/adap/flower/pull/1347))" +"XGBoost is a tree-based ensemble machine learning algorithm that uses " +"gradient boosting to improve model accuracy. We added a new `FedXgbNnAvg`" +" " +"[strategy](https://github.com/adap/flower/tree/main/src/py/flwr/server/strategy/fedxgb_nn_avg.py)," +" and a [code example](https://github.com/adap/flower/tree/main/examples" +"/xgboost-quickstart) that demonstrates the usage of this new strategy in " +"an XGBoost project." msgstr "" -"**Exemple de code mis à jour** " -"([#1344](https://github.com/adap/flower/pull/1344), " -"[#1347](https://github.com/adap/flower/pull/1347))" +"Nous avons ajouté une nouvelle [stratégie] `FedXgbNnAvg` " +"(https://github.com/adap/flower/tree/main/src/py/flwr/server/strategy/fedxgb_nn_avg.py)," +" et un [exemple de code] " +"(https://github.com/adap/flower/tree/main/examples/xgboost-quickstart) " +"qui démontre l'utilisation de cette nouvelle stratégie dans un projet " +"XGBoost." 
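A quick illustration of the `rest` → `transport` migration described in the `start_client` deprecation note above. This is a minimal sketch only; `FlowerClient` is a placeholder for whatever client implementation your project already passes to `start_client`:

```python
import flwr as fl

# Placeholder: stands for the Flower client your project already defines.
client = FlowerClient()  # noqa: F821 - hypothetical class, defined elsewhere

# Before (deprecated):
#   fl.client.start_client(server_address="127.0.0.1:8080", client=client, rest=True)

# After: the new `transport` argument selects the (experimental) REST stack;
# omit it to keep the default gRPC transport.
fl.client.start_client(
    server_address="127.0.0.1:8080",  # example address
    client=client,
    transport="rest",
)
```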
-#: ../../source/ref-changelog.md:517 +#: ../../source/ref-changelog.md:366 msgid "" -"The code examples covering scikit-learn and PyTorch Lightning have been " -"updated to work with the latest version of Flower." +"**Introduce iOS SDK (preview)** " +"([#1621](https://github.com/adap/flower/pull/1621), " +"[#1764](https://github.com/adap/flower/pull/1764))" msgstr "" -"Les exemples de code couvrant scikit-learn et PyTorch Lightning ont été " -"mis à jour pour fonctionner avec la dernière version de Flower." +"**Introduction du SDK iOS (aperçu)** " +"([#1621](https://github.com/adap/flower/pull/1621), " +"[#1764](https://github.com/adap/flower/pull/1764))" -#: ../../source/ref-changelog.md:519 +#: ../../source/ref-changelog.md:368 msgid "" -"**Updated documentation** " -"([#1355](https://github.com/adap/flower/pull/1355), " -"[#1558](https://github.com/adap/flower/pull/1558), " -"[#1379](https://github.com/adap/flower/pull/1379), " -"[#1380](https://github.com/adap/flower/pull/1380), " -"[#1381](https://github.com/adap/flower/pull/1381), " -"[#1332](https://github.com/adap/flower/pull/1332), " -"[#1391](https://github.com/adap/flower/pull/1391), " -"[#1403](https://github.com/adap/flower/pull/1403), " -"[#1364](https://github.com/adap/flower/pull/1364), " -"[#1409](https://github.com/adap/flower/pull/1409), " -"[#1419](https://github.com/adap/flower/pull/1419), " -"[#1444](https://github.com/adap/flower/pull/1444), " -"[#1448](https://github.com/adap/flower/pull/1448), " -"[#1417](https://github.com/adap/flower/pull/1417), " -"[#1449](https://github.com/adap/flower/pull/1449), " -"[#1465](https://github.com/adap/flower/pull/1465), " -"[#1467](https://github.com/adap/flower/pull/1467))" +"This is a major update for anyone wanting to implement Federated Learning" +" on iOS mobile devices. We now have a swift iOS SDK present under " +"[src/swift/flwr](https://github.com/adap/flower/tree/main/src/swift/flwr)" +" that will facilitate greatly the app creating process. To showcase its " +"use, the [iOS " +"example](https://github.com/adap/flower/tree/main/examples/ios) has also " +"been updated!" msgstr "" -"**Documentation mise à jour** " -"([#1355](https://github.com/adap/flower/pull/1355), " -"[#1558](https://github.com/adap/flower/pull/1558), " -"[#1379](https://github.com/adap/flower/pull/1379), " -"[#1380](https://github.com/adap/flower/pull/1380), " -"[#1381](https://github.com/adap/flower/pull/1381), " -"[#1332](https://github.com/adap/flower/pull/1332), " -"[#1391](https://github.com/adap/flower/pull/1391), " -"[#1403](https://github.com/adap/flower/pull/1403), " -"[#1364](https://github.com/adap/flower/pull/1364), " -"[#1409](https://github.com/adap/flower/pull/1409), " -"[#1419](https://github.com/adap/flower/pull/1419), " -"[#1444](https://github.com/adap/flower/pull/1444), " -"[#1448](https://github.com/adap/flower/pull/1448), " -"[#1417](https://github.com/adap/flower/pull/1417), " -"[#1449](https://github.com/adap/flower/pull/1449), " -"[#1465](https://github.com/adap/flower/pull/1465), " -"[#1467](https://github.com/adap/flower/pull/1467))" +"Il s'agit d'une mise à jour majeure pour tous ceux qui souhaitent mettre " +"en œuvre l'apprentissage fédéré sur les appareils mobiles iOS. Nous " +"disposons désormais d'un SDK swift iOS présent sous " +"[src/swift/flwr](https://github.com/adap/flower/tree/main/src/swift/flwr)" +" qui facilitera grandement le processus de création d'applications. 
Pour " +"présenter son utilisation, l'[exemple " +"iOS](https://github.com/adap/flower/tree/main/examples/ios) a également " +"été mis à jour !" -#: ../../source/ref-changelog.md:521 +#: ../../source/ref-changelog.md:370 msgid "" -"There have been so many documentation updates that it doesn't even make " -"sense to list them individually." +"**Introduce new \"What is Federated Learning?\" tutorial** " +"([#1657](https://github.com/adap/flower/pull/1657), " +"[#1721](https://github.com/adap/flower/pull/1721))" msgstr "" -"Il y a eu tellement de mises à jour de la documentation que cela n'a même" -" pas de sens de les énumérer individuellement." +"**Introduire un nouveau tutoriel \"Qu'est-ce que l'apprentissage fédéré ?" +" \"** ([#1657](https://github.com/adap/flower/pull/1657), " +"[#1721](https://github.com/adap/flower/pull/1721))" -#: ../../source/ref-changelog.md:523 +#: ../../source/ref-changelog.md:372 +#, fuzzy msgid "" -"**Restructured documentation** " -"([#1387](https://github.com/adap/flower/pull/1387))" +"A new [entry-level tutorial](https://flower.ai/docs/framework/tutorial-" +"what-is-federated-learning.html) in our documentation explains the basics" +" of Fedetated Learning. It enables anyone who's unfamiliar with Federated" +" Learning to start their journey with Flower. Forward it to anyone who's " +"interested in Federated Learning!" msgstr "" -"**Documentation restructurée** " -"([#1387](https://github.com/adap/flower/pull/1387))" +"Un nouveau [tutoriel d'entrée de gamme] " +"(https://flower.ai/docs/tutorial/Flower-0-What-is-FL.html) dans notre " +"documentation explique les bases de l'apprentissage fédéré. Il permet à " +"tous ceux qui ne connaissent pas l'apprentissage fédéré de commencer leur" +" voyage avec Flower. Fais-le suivre à tous ceux qui s'intéressent à " +"l'apprentissage fédéré !" -#: ../../source/ref-changelog.md:525 +#: ../../source/ref-changelog.md:374 msgid "" -"The documentation has been restructured to make it easier to navigate. " -"This is just the first step in a larger effort to make the Flower " -"documentation the best documentation of any project ever. Stay tuned!" +"**Introduce new Flower Baseline: FedProx MNIST** " +"([#1513](https://github.com/adap/flower/pull/1513), " +"[#1680](https://github.com/adap/flower/pull/1680), " +"[#1681](https://github.com/adap/flower/pull/1681), " +"[#1679](https://github.com/adap/flower/pull/1679))" msgstr "" -"La documentation a été restructurée pour faciliter la navigation. Ce " -"n'est que la première étape d'un effort plus important visant à faire de " -"la documentation de Flower la meilleure documentation de tous les projets" +"**Introduire une nouvelle fleur Référence : FedProx MNIST** " +"([#1513](https://github.com/adap/flower/pull/1513), " +"[#1680](https://github.com/adap/flower/pull/1680), " +"[#1681](https://github.com/adap/flower/pull/1681), " +"[#1679](https://github.com/adap/flower/pull/1679))" -#: ../../source/ref-changelog.md:527 +#: ../../source/ref-changelog.md:376 msgid "" -"**Open in Colab button** " -"([#1389](https://github.com/adap/flower/pull/1389))" +"This new baseline replicates the MNIST+CNN task from the paper [Federated" +" Optimization in Heterogeneous Networks (Li et al., " +"2018)](https://arxiv.org/abs/1812.06127). It uses the `FedProx` strategy," +" which aims at making convergence more robust in heterogeneous settings." 
msgstr "" -"**Ouvrir dans le bouton Colab** " -"([#1389](https://github.com/adap/flower/pull/1389))" +"Cette nouvelle ligne de base reproduit la tâche MNIST+CNN de l'article " +"[Federated Optimization in Heterogeneous Networks (Li et al., 2018)] " +"(https://arxiv.org/abs/1812.06127). Elle utilise la stratégie `FedProx`, " +"qui vise à rendre la convergence plus robuste dans des contextes " +"hétérogènes." -#: ../../source/ref-changelog.md:529 +#: ../../source/ref-changelog.md:378 msgid "" -"The four parts of the Flower Federated Learning Tutorial now come with a " -"new `Open in Colab` button. No need to install anything on your local " -"machine, you can now use and learn about Flower in your browser, it's " -"only a single click away." +"**Introduce new Flower Baseline: FedAvg FEMNIST** " +"([#1655](https://github.com/adap/flower/pull/1655))" msgstr "" -"Les quatre parties du didacticiel d'apprentissage fédéré Flower sont " -"maintenant accompagnées d'un nouveau bouton \"Ouvrir dans Colab\". Pas " -"besoin d'installer quoi que ce soit sur ta machine locale, tu peux " -"maintenant utiliser et apprendre à connaître Flower dans ton navigateur, " -"il te suffit d'un simple clic." +"**Introduire une nouvelle ligne de base pour les fleurs : FedAvg " +"FEMNIST** ([#1655](https://github.com/adap/flower/pull/1655))" -#: ../../source/ref-changelog.md:531 +#: ../../source/ref-changelog.md:380 msgid "" -"**Improved tutorial** ([#1468](https://github.com/adap/flower/pull/1468)," -" [#1470](https://github.com/adap/flower/pull/1470), " -"[#1472](https://github.com/adap/flower/pull/1472), " -"[#1473](https://github.com/adap/flower/pull/1473), " -"[#1474](https://github.com/adap/flower/pull/1474), " -"[#1475](https://github.com/adap/flower/pull/1475))" +"This new baseline replicates an experiment evaluating the performance of " +"the FedAvg algorithm on the FEMNIST dataset from the paper [LEAF: A " +"Benchmark for Federated Settings (Caldas et al., " +"2018)](https://arxiv.org/abs/1812.01097)." msgstr "" -"**Tutoriel amélioré** ([#1468](https://github.com/adap/flower/pull/1468)," -" [#1470](https://github.com/adap/flower/pull/1470), " -"[#1472](https://github.com/adap/flower/pull/1472), " -"[#1473](https://github.com/adap/flower/pull/1473), " -"[#1474](https://github.com/adap/flower/pull/1474), " -"[#1475](https://github.com/adap/flower/pull/1475))" +"Cette nouvelle ligne de base reproduit une expérience évaluant les " +"performances de l'algorithme FedAvg sur le jeu de données FEMNIST tiré de" +" l'article [LEAF : A Benchmark for Federated Settings (Caldas et al., " +"2018)] (https://arxiv.org/abs/1812.01097)." -#: ../../source/ref-changelog.md:533 +#: ../../source/ref-changelog.md:382 msgid "" -"The Flower Federated Learning Tutorial has two brand-new parts covering " -"custom strategies (still WIP) and the distinction between `Client` and " -"`NumPyClient`. The existing parts one and two have also been improved " -"(many small changes and fixes)." 
+"**Introduce (experimental) REST API** " +"([#1594](https://github.com/adap/flower/pull/1594), " +"[#1690](https://github.com/adap/flower/pull/1690), " +"[#1695](https://github.com/adap/flower/pull/1695), " +"[#1712](https://github.com/adap/flower/pull/1712), " +"[#1802](https://github.com/adap/flower/pull/1802), " +"[#1770](https://github.com/adap/flower/pull/1770), " +"[#1733](https://github.com/adap/flower/pull/1733))" msgstr "" -"Le tutoriel sur l'apprentissage fédéré des fleurs a deux toutes nouvelles" -" parties couvrant les stratégies personnalisées (encore WIP) et la " -"distinction entre `Client` et `NumPyClient`. Les parties un et deux " -"existantes ont également été améliorées (beaucoup de petits changements " -"et de corrections)." - -#: ../../source/ref-changelog.md:539 -msgid "v1.0.0 (2022-07-28)" -msgstr "v1.0.0 (2022-07-28)" - -#: ../../source/ref-changelog.md:541 -msgid "Highlights" -msgstr "Points forts" - -#: ../../source/ref-changelog.md:543 -msgid "Stable **Virtual Client Engine** (accessible via `start_simulation`)" -msgstr "Moteur de client virtuel stable** (accessible via `start_simulation`)" - -#: ../../source/ref-changelog.md:544 -msgid "All `Client`/`NumPyClient` methods are now optional" -msgstr "Toutes les méthodes `Client`/`NumPyClient` sont maintenant optionnelles" +"**Introduire l'API REST (expérimentale)** " +"([#1594](https://github.com/adap/flower/pull/1594), " +"[#1690](https://github.com/adap/flower/pull/1690), " +"[#1695](https://github.com/adap/flower/pull/1695), " +"[#1712](https://github.com/adap/flower/pull/1712), " +"[#1802](https://github.com/adap/flower/pull/1802), " +"[#1770](https://github.com/adap/flower/pull/1770), " +"[#1733](https://github.com/adap/flower/pull/1733))" -#: ../../source/ref-changelog.md:545 -msgid "Configurable `get_parameters`" -msgstr "`get_parameters` configurable" +#: ../../source/ref-changelog.md:384 +msgid "" +"A new REST API has been introduced as an alternative to the gRPC-based " +"communication stack. In this initial version, the REST API only supports " +"anonymous clients." +msgstr "" +"Une nouvelle API REST a été introduite comme alternative à la pile de " +"communication basée sur gRPC. Dans cette version initiale, l'API REST ne " +"prend en charge que les clients anonymes." -#: ../../source/ref-changelog.md:546 +#: ../../source/ref-changelog.md:386 msgid "" -"Tons of small API cleanups resulting in a more coherent developer " -"experience" +"Please note: The REST API is still experimental and will likely change " +"significantly over time." msgstr "" -"Des tonnes de petits nettoyages d'API résultant en une expérience plus " -"cohérente pour les développeurs" +"Remarque : l'API REST est encore expérimentale et est susceptible de " +"changer de manière significative au fil du temps." 
-#: ../../source/ref-changelog.md:550 +#: ../../source/ref-changelog.md:388 msgid "" -"We would like to give our **special thanks** to all the contributors who " -"made Flower 1.0 possible (in reverse [GitHub " -"Contributors](https://github.com/adap/flower/graphs/contributors) order):" +"**Improve the (experimental) Driver API** " +"([#1663](https://github.com/adap/flower/pull/1663), " +"[#1666](https://github.com/adap/flower/pull/1666), " +"[#1667](https://github.com/adap/flower/pull/1667), " +"[#1664](https://github.com/adap/flower/pull/1664), " +"[#1675](https://github.com/adap/flower/pull/1675), " +"[#1676](https://github.com/adap/flower/pull/1676), " +"[#1693](https://github.com/adap/flower/pull/1693), " +"[#1662](https://github.com/adap/flower/pull/1662), " +"[#1794](https://github.com/adap/flower/pull/1794))" msgstr "" -"Nous tenons à remercier **particulièrement** tous les contributeurs qui " -"ont rendu Flower 1.0 possible (dans l'ordre inverse de [GitHub " -"Contributors](https://github.com/adap/flower/graphs/contributors)) :" +"**Améliorer l'API (expérimentale) du pilote** " +"([#1663](https://github.com/adap/flower/pull/1663), " +"[#1666](https://github.com/adap/flower/pull/1666), " +"[#1667](https://github.com/adap/flower/pull/1667), " +"[#1664](https://github.com/adap/flower/pull/1664), " +"[#1675](https://github.com/adap/flower/pull/1675), " +"[#1676](https://github.com/adap/flower/pull/1676), " +"[#1693](https://github.com/adap/flower/pull/1693), " +"[#1662](https://github.com/adap/flower/pull/1662), " +"[#1794](https://github.com/adap/flower/pull/1794))" -#: ../../source/ref-changelog.md:552 +#: ../../source/ref-changelog.md:390 msgid "" -"[@rtaiello](https://github.com/rtaiello), " -"[@g-pichler](https://github.com/g-pichler), [@rob-" -"luke](https://github.com/rob-luke), [@andreea-zaharia](https://github.com" -"/andreea-zaharia), [@kinshukdua](https://github.com/kinshukdua), " -"[@nfnt](https://github.com/nfnt), " -"[@tatiana-s](https://github.com/tatiana-s), " -"[@TParcollet](https://github.com/TParcollet), " -"[@vballoli](https://github.com/vballoli), " -"[@negedng](https://github.com/negedng), " -"[@RISHIKESHAVAN](https://github.com/RISHIKESHAVAN), " -"[@hei411](https://github.com/hei411), " -"[@SebastianSpeitel](https://github.com/SebastianSpeitel), " -"[@AmitChaulwar](https://github.com/AmitChaulwar), " -"[@Rubiel1](https://github.com/Rubiel1), [@FANTOME-PAN](https://github.com" -"/FANTOME-PAN), [@Rono-BC](https://github.com/Rono-BC), " -"[@lbhm](https://github.com/lbhm), " -"[@sishtiaq](https://github.com/sishtiaq), " -"[@remde](https://github.com/remde), [@Jueun-Park](https://github.com" -"/Jueun-Park), [@architjen](https://github.com/architjen), " -"[@PratikGarai](https://github.com/PratikGarai), " -"[@mrinaald](https://github.com/mrinaald), " -"[@zliel](https://github.com/zliel), " -"[@MeiruiJiang](https://github.com/MeiruiJiang), " -"[@sancarlim](https://github.com/sancarlim), " -"[@gubertoli](https://github.com/gubertoli), " -"[@Vingt100](https://github.com/Vingt100), " -"[@MakGulati](https://github.com/MakGulati), " -"[@cozek](https://github.com/cozek), " -"[@jafermarq](https://github.com/jafermarq), " -"[@sisco0](https://github.com/sisco0), " -"[@akhilmathurs](https://github.com/akhilmathurs), " -"[@CanTuerk](https://github.com/CanTuerk), " -"[@mariaboerner1987](https://github.com/mariaboerner1987), " -"[@pedropgusmao](https://github.com/pedropgusmao), " -"[@tanertopal](https://github.com/tanertopal), " -"[@danieljanes](https://github.com/danieljanes)." 
+"The Driver API is still an experimental feature, but this release " +"introduces some major upgrades. One of the main improvements is the " +"introduction of an SQLite database to store server state on disk (instead" +" of in-memory). Another improvement is that tasks (instructions or " +"results) that have been delivered will now be deleted. This greatly " +"improves the memory efficiency of a long-running Flower server." msgstr "" -"[@rtaiello](https://github.com/rtaiello), " -"[@g-pichler](https://github.com/g-pichler), [@rob-" -"luke](https://github.com/rob-luke), [@andreea-zaharia](https://github.com" -"/andreea-zaharia), [@kinshukdua](https://github.com/kinshukdua), " -"[@nfnt](https://github.com/nfnt), " -"[@tatiana-s](https://github.com/tatiana-s), " -"[@TParcollet](https://github.com/TParcollet), " -"[@vballoli](https://github.com/vballoli), " -"[@negedng](https://github.com/negedng), " -"[@RISHIKESHAVAN](https://github.com/RISHIKESHAVAN), " -"[@hei411](https://github.com/hei411), " -"[@SebastianSpeitel](https://github.com/SebastianSpeitel), " -"[@AmitChaulwar](https://github.com/AmitChaulwar), " -"[@Rubiel1](https://github.com/Rubiel1), [@FANTOME-PAN](https://github.com" -"/FANTOME-PAN), [@Rono-BC](https://github.com/Rono-BC), " -"[@lbhm](https://github.com/lbhm), " -"[@sishtiaq](https://github.com/sishtiaq), " -"[@remde](https://github.com/remde), [@Jueun-Park](https://github.com" -"/Jueun-Park), [@architjen](https://github.com/architjen), " -"[@PratikGarai](https://github.com/PratikGarai), [@mrinaald](" +"L'API du pilote est encore une fonction expérimentale, mais cette version" +" introduit quelques améliorations majeures. L'une des principales " +"améliorations est l'introduction d'une base de données SQLite pour " +"stocker l'état du serveur sur le disque (au lieu de la mémoire). Une " +"autre amélioration est que les tâches (instructions ou résultats) qui ont" +" été livrées seront désormais supprimées, ce qui améliore " +"considérablement l'efficacité de la mémoire d'un serveur Flower " +"fonctionnant depuis longtemps." -#: ../../source/ref-changelog.md:556 +#: ../../source/ref-changelog.md:392 msgid "" -"**All arguments must be passed as keyword arguments** " -"([#1338](https://github.com/adap/flower/pull/1338))" +"**Fix spilling issues related to Ray during simulations** " +"([#1698](https://github.com/adap/flower/pull/1698))" msgstr "" -"**Tous les arguments doivent être passés comme des arguments de mot-clé**" -" ([#1338](https://github.com/adap/flower/pull/1338))" +"**Répare les problèmes de déversement liés à Ray pendant les " +"simulations** ([#1698](https://github.com/adap/flower/pull/1698))" -#: ../../source/ref-changelog.md:558 +#: ../../source/ref-changelog.md:394 #, fuzzy msgid "" -"Pass all arguments as keyword arguments, positional arguments are not " -"longer supported. Code that uses positional arguments (e.g., " -"`start_client(\"127.0.0.1:8080\", FlowerClient())`) must add the keyword " -"for each positional argument (e.g., " -"`start_client(server_address=\"127.0.0.1:8080\", " -"client=FlowerClient())`)." +"While running long simulations, `ray` was sometimes spilling huge amounts" +" of data that would make the training unable to continue. This is now " +"fixed! 🎉" msgstr "" -"Le code qui utilise des arguments positionnels (par exemple, " -"``start_client(\"127.0.0.1:8080\", FlowerClient())`) doit ajouter le mot-" -"clé pour chaque argument positionnel (par exemple, " -"``start_client(server_address=\"127.0.0.1:8080\", " -"client=FlowerClient())`)." 
+"Lors de l'exécution de longues simulations, `ray` déversait parfois " +"d'énormes quantités de données qui rendaient l'entraînement incapable de " +"continuer. ce problème est maintenant corrigé ! 🎉" -#: ../../source/ref-changelog.md:560 +#: ../../source/ref-changelog.md:396 msgid "" -"**Introduce configuration object** `ServerConfig` **in** `start_server` " -"**and** `start_simulation` " -"([#1317](https://github.com/adap/flower/pull/1317))" -msgstr "" -"**Introduire l'objet de configuration** `ServerConfig` **dans** " -"`start_server` **et** `start_simulation` " -"([#1317](https://github.com/adap/flower/pull/1317))" +"**Add new example using** `TabNet` **and Flower** " +"([#1725](https://github.com/adap/flower/pull/1725))" +msgstr "" +"**Ajouter un nouvel exemple utilisant** `TabNet` **et Flower** " +"([#1725](https://github.com/adap/flower/pull/1725))" -#: ../../source/ref-changelog.md:562 +#: ../../source/ref-changelog.md:398 msgid "" -"Instead of a config dictionary `{\"num_rounds\": 3, \"round_timeout\": " -"600.0}`, `start_server` and `start_simulation` now expect a configuration" -" object of type `flwr.server.ServerConfig`. `ServerConfig` takes the same" -" arguments that as the previous config dict, but it makes writing type-" -"safe code easier and the default parameters values more transparent." +"TabNet is a powerful and flexible framework for training machine learning" +" models on tabular data. We now have a federated example using Flower: " +"[quickstart-tabnet](https://github.com/adap/flower/tree/main/examples" +"/quickstart-tabnet)." msgstr "" -"Au lieu d'un dictionnaire de configuration `{\"num_rounds\" : 3, " -"\"round_timeout\" : 600.0}`, `start_server` et `start_simulation` " -"attendent maintenant un objet de configuration de type " -"`flwr.server.ServerConfig`. `ServerConfig` prend les mêmes arguments que " -"le dict de configuration précédent, mais il rend l'écriture de code " -"sécurisé plus facile et les valeurs des paramètres par défaut plus " -"transparentes." +"TabNet est un cadre puissant et flexible pour former des modèles " +"d'apprentissage automatique sur des données tabulaires. Nous avons " +"maintenant un exemple fédéré utilisant Flower : [quickstart-" +"tabnet](https://github.com/adap/flower/tree/main/examples/quickstart-" +"tabnet)." -#: ../../source/ref-changelog.md:564 +#: ../../source/ref-changelog.md:400 msgid "" -"**Rename built-in strategy parameters for clarity** " -"([#1334](https://github.com/adap/flower/pull/1334))" +"**Add new how-to guide for monitoring simulations** " +"([#1649](https://github.com/adap/flower/pull/1649))" msgstr "" -"**Renommer les paramètres de la stratégie intégrée pour plus de clarté** " -"([#1334](https://github.com/adap/flower/pull/1334))" +"**Ajouter un nouveau guide pratique pour le suivi des simulations** " +"([#1649](https://github.com/adap/flower/pull/1649))" -#: ../../source/ref-changelog.md:566 +#: ../../source/ref-changelog.md:402 msgid "" -"The following built-in strategy parameters were renamed to improve " -"readability and consistency with other API's:" +"We now have a documentation guide to help users monitor their performance" +" during simulations." 
msgstr "" -"Les paramètres de stratégie intégrés suivants ont été renommés pour " -"améliorer la lisibilité et la cohérence avec d'autres API :" - -#: ../../source/ref-changelog.md:568 -msgid "`fraction_eval` --> `fraction_evaluate`" -msgstr "`fraction_eval` --> `fraction_evaluate`" - -#: ../../source/ref-changelog.md:569 -msgid "`min_eval_clients` --> `min_evaluate_clients`" -msgstr "`min_eval_clients` --> `min_evaluate_clients`" - -#: ../../source/ref-changelog.md:570 -msgid "`eval_fn` --> `evaluate_fn`" -msgstr "`eval_fn` --> `evaluate_fn`" +"Nous avons maintenant un guide de documentation pour aider les " +"utilisateurs à surveiller leurs performances pendant les simulations." -#: ../../source/ref-changelog.md:572 +#: ../../source/ref-changelog.md:404 msgid "" -"**Update default arguments of built-in strategies** " -"([#1278](https://github.com/adap/flower/pull/1278))" +"**Add training metrics to** `History` **object during simulations** " +"([#1696](https://github.com/adap/flower/pull/1696))" msgstr "" -"**Mettre à jour les arguments par défaut des stratégies intégrées** " -"([#1278](https://github.com/adap/flower/pull/1278))" +"**Ajouter des mesures de formation à** `History` **objet pendant les " +"simulations** ([#1696](https://github.com/adap/flower/pull/1696))" -#: ../../source/ref-changelog.md:574 +#: ../../source/ref-changelog.md:406 msgid "" -"All built-in strategies now use `fraction_fit=1.0` and " -"`fraction_evaluate=1.0`, which means they select *all* currently " -"available clients for training and evaluation. Projects that relied on " -"the previous default values can get the previous behaviour by " -"initializing the strategy in the following way:" +"The `fit_metrics_aggregation_fn` can be used to aggregate training " +"metrics, but previous releases did not save the results in the `History` " +"object. This is now the case!" msgstr "" -"Toutes les stratégies intégrées utilisent désormais `fraction_fit=1.0` et" -" `fraction_evaluate=1.0`, ce qui signifie qu'elles sélectionnent *tous* " -"les clients actuellement disponibles pour l'entraînement et l'évaluation." -" Les projets qui s'appuyaient sur les valeurs par défaut précédentes " -"peuvent retrouver le comportement antérieur en initialisant la stratégie " -"de la manière suivante :" - -#: ../../source/ref-changelog.md:576 -msgid "`strategy = FedAvg(fraction_fit=0.1, fraction_evaluate=0.1)`" -msgstr "`stratégie = FedAvg(fraction_fit=0.1, fraction_evaluate=0.1)`" +"La fonction `fit_metrics_aggregation_fn` peut être utilisée pour agréger " +"les mesures d'entraînement, mais les versions précédentes " +"n'enregistraient pas les résultats dans l'objet `History`. c'est " +"désormais le cas !" 
-#: ../../source/ref-changelog.md:578 +#: ../../source/ref-changelog.md:408 msgid "" -"**Add** `server_round` **to** `Strategy.evaluate` " -"([#1334](https://github.com/adap/flower/pull/1334))" +"**General improvements** " +"([#1659](https://github.com/adap/flower/pull/1659), " +"[#1646](https://github.com/adap/flower/pull/1646), " +"[#1647](https://github.com/adap/flower/pull/1647), " +"[#1471](https://github.com/adap/flower/pull/1471), " +"[#1648](https://github.com/adap/flower/pull/1648), " +"[#1651](https://github.com/adap/flower/pull/1651), " +"[#1652](https://github.com/adap/flower/pull/1652), " +"[#1653](https://github.com/adap/flower/pull/1653), " +"[#1659](https://github.com/adap/flower/pull/1659), " +"[#1665](https://github.com/adap/flower/pull/1665), " +"[#1670](https://github.com/adap/flower/pull/1670), " +"[#1672](https://github.com/adap/flower/pull/1672), " +"[#1677](https://github.com/adap/flower/pull/1677), " +"[#1684](https://github.com/adap/flower/pull/1684), " +"[#1683](https://github.com/adap/flower/pull/1683), " +"[#1686](https://github.com/adap/flower/pull/1686), " +"[#1682](https://github.com/adap/flower/pull/1682), " +"[#1685](https://github.com/adap/flower/pull/1685), " +"[#1692](https://github.com/adap/flower/pull/1692), " +"[#1705](https://github.com/adap/flower/pull/1705), " +"[#1708](https://github.com/adap/flower/pull/1708), " +"[#1711](https://github.com/adap/flower/pull/1711), " +"[#1713](https://github.com/adap/flower/pull/1713), " +"[#1714](https://github.com/adap/flower/pull/1714), " +"[#1718](https://github.com/adap/flower/pull/1718), " +"[#1716](https://github.com/adap/flower/pull/1716), " +"[#1723](https://github.com/adap/flower/pull/1723), " +"[#1735](https://github.com/adap/flower/pull/1735), " +"[#1678](https://github.com/adap/flower/pull/1678), " +"[#1750](https://github.com/adap/flower/pull/1750), " +"[#1753](https://github.com/adap/flower/pull/1753), " +"[#1736](https://github.com/adap/flower/pull/1736), " +"[#1766](https://github.com/adap/flower/pull/1766), " +"[#1760](https://github.com/adap/flower/pull/1760), " +"[#1775](https://github.com/adap/flower/pull/1775), " +"[#1776](https://github.com/adap/flower/pull/1776), " +"[#1777](https://github.com/adap/flower/pull/1777), " +"[#1779](https://github.com/adap/flower/pull/1779), " +"[#1784](https://github.com/adap/flower/pull/1784), " +"[#1773](https://github.com/adap/flower/pull/1773), " +"[#1755](https://github.com/adap/flower/pull/1755), " +"[#1789](https://github.com/adap/flower/pull/1789), " +"[#1788](https://github.com/adap/flower/pull/1788), " +"[#1798](https://github.com/adap/flower/pull/1798), " +"[#1799](https://github.com/adap/flower/pull/1799), " +"[#1739](https://github.com/adap/flower/pull/1739), " +"[#1800](https://github.com/adap/flower/pull/1800), " +"[#1804](https://github.com/adap/flower/pull/1804), " +"[#1805](https://github.com/adap/flower/pull/1805))" msgstr "" -"**Ajouter** `server_round` **à** `Strategy.evaluate` " -"([#1334](https://github.com/adap/flower/pull/1334))" +"**General improvements** " +"([#1659](https://github.com/adap/flower/pull/1659), " +"[#1646](https://github.com/adap/flower/pull/1646), " +"[#1647](https://github.com/adap/flower/pull/1647), " +"[#1471](https://github.com/adap/flower/pull/1471), " +"[#1648](https://github.com/adap/flower/pull/1648), " +"[#1651](https://github.com/adap/flower/pull/1651), " +"[#1652](https://github.com/adap/flower/pull/1652), " +"[#1653](https://github.com/adap/flower/pull/1653), " 
+"[#1659](https://github.com/adap/flower/pull/1659), " +"[#1665](https://github.com/adap/flower/pull/1665), " +"[#1670](https://github.com/adap/flower/pull/1670), " +"[#1672](https://github.com/adap/flower/pull/1672), " +"[#1677](https://github.com/adap/flower/pull/1677), " +"[#1684](https://github.com/adap/flower/pull/1684), " +"[#1683](https://github.com/adap/flower/pull/1683), " +"[#1686](https://github.com/adap/flower/pull/1686), " +"[#1682](https://github.com/adap/flower/pull/1682), " +"[#1685](https://github.com/adap/flower/pull/1685), " +"[#1692](https://github.com/adap/flower/pull/1692), " +"[#1705](https://github.com/ada" + +#: ../../source/ref-changelog.md:416 +msgid "v1.3.0 (2023-02-06)" +msgstr "v1.3.0 (2023-02-06)" -#: ../../source/ref-changelog.md:580 +#: ../../source/ref-changelog.md:422 msgid "" -"The `Strategy` method `evaluate` now receives the current round of " -"federated learning/evaluation as the first parameter." +"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " +"`Daniel J. Beutel`, `JDRanpariya`, `Lennart Behme`, `Taner Topal`" msgstr "" -"La méthode `Stratégie` `évaluer` reçoit maintenant le cycle actuel " -"d'apprentissage/évaluation fédéré comme premier paramètre." +"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " +"`Daniel J. Beutel`, `JDRanpariya`, `Lennart Behme`, `Taner Topal`" -#: ../../source/ref-changelog.md:582 +#: ../../source/ref-changelog.md:426 msgid "" -"**Add** `server_round` **and** `config` **parameters to** `evaluate_fn` " -"([#1334](https://github.com/adap/flower/pull/1334))" +"**Add support for** `workload_id` **and** `group_id` **in Driver API** " +"([#1595](https://github.com/adap/flower/pull/1595))" msgstr "" -"**Ajouter** `server_round` **et** `config` **paramètres à** `evaluate_fn`" -" ([#1334](https://github.com/adap/flower/pull/1334))" +"**Ajouter la prise en charge de** `workload_id` **et** `group_id` **dans " +"l'API du pilote** ([#1595](https://github.com/adap/flower/pull/1595))" -#: ../../source/ref-changelog.md:584 +#: ../../source/ref-changelog.md:428 msgid "" -"The `evaluate_fn` passed to built-in strategies like `FedAvg` now takes " -"three parameters: (1) The current round of federated learning/evaluation " -"(`server_round`), (2) the model parameters to evaluate (`parameters`), " -"and (3) a config dictionary (`config`)." +"The (experimental) Driver API now supports a `workload_id` that can be " +"used to identify which workload a task belongs to. It also supports a new" +" `group_id` that can be used, for example, to indicate the current " +"training round. Both the `workload_id` and `group_id` enable client nodes" +" to decide whether they want to handle a task or not." msgstr "" -"Le `evaluate_fn` passé aux stratégies intégrées comme `FedAvg` prend " -"maintenant trois paramètres : (1) le cycle actuel " -"d'apprentissage/évaluation fédéré (`server_round`), (2) les paramètres du" -" modèle à évaluer (`parameters`), et (3) un dictionnaire de configuration" -" (`config`)." +"L'API (expérimentale) Driver prend désormais en charge un `workload_id` " +"qui peut être utilisé pour identifier la charge de travail à laquelle une" +" tâche appartient. Elle prend également en charge un nouveau `group_id` " +"qui peut être utilisé, par exemple, pour indiquer le cycle de formation " +"en cours. Le `workload_id` et le `group_id` permettent tous deux aux " +"nœuds clients de décider s'ils veulent traiter une tâche ou non." 
-#: ../../source/ref-changelog.md:586 +#: ../../source/ref-changelog.md:430 msgid "" -"**Rename** `rnd` **to** `server_round` " -"([#1321](https://github.com/adap/flower/pull/1321))" +"**Make Driver API and Fleet API address configurable** " +"([#1637](https://github.com/adap/flower/pull/1637))" msgstr "" -"**Rename** `rnd` **to** `server_round` " -"([#1321](https://github.com/adap/flower/pull/1321))" +"**Faire en sorte que l'adresse de l'API du conducteur et de l'API de la " +"flotte soit configurable** " +"([#1637](https://github.com/adap/flower/pull/1637))" -#: ../../source/ref-changelog.md:588 +#: ../../source/ref-changelog.md:432 msgid "" -"Several Flower methods and functions (`evaluate_fn`, `configure_fit`, " -"`aggregate_fit`, `configure_evaluate`, `aggregate_evaluate`) receive the " -"current round of federated learning/evaluation as their first parameter. " -"To improve reaability and avoid confusion with *random*, this parameter " -"has been renamed from `rnd` to `server_round`." +"The (experimental) long-running Flower server (Driver API and Fleet API) " +"can now configure the server address of both Driver API (via `--driver-" +"api-address`) and Fleet API (via `--fleet-api-address`) when starting:" msgstr "" -"Plusieurs méthodes et fonctions de Flower (`evaluate_fn`, " -"`configure_fit`, `aggregate_fit`, `configure_evaluate`, " -"`aggregate_evaluate`) reçoivent le cycle actuel " -"d'apprentissage/évaluation fédéré comme premier paramètre. Pour améliorer" -" la fiabilité et éviter la confusion avec *random*, ce paramètre a été " -"renommé de `rnd` à `server_round`." +"Le serveur Flower (expérimental) de longue durée (Driver API et Fleet " +"API) peut maintenant configurer l'adresse du serveur de Driver API (via " +"`--driver-api-address`) et de Fleet API (via `--fleet-api-address`) lors " +"de son démarrage :" -#: ../../source/ref-changelog.md:590 +#: ../../source/ref-changelog.md:434 +#, fuzzy msgid "" -"**Move** `flwr.dataset` **to** `flwr_baselines` " -"([#1273](https://github.com/adap/flower/pull/1273))" +"`flower-server --driver-api-address \"0.0.0.0:8081\" --fleet-api-address " +"\"0.0.0.0:8086\"`" msgstr "" -"**Déplacer** `flwr.dataset` **vers** `flwr_baselines` " -"([#1273](https://github.com/adap/flower/pull/1273))" +"``flower-superlink --driver-api-address \"0.0.0.0:8081\" --fleet-api-" +"address \"0.0.0.0:8086\" ``" -#: ../../source/ref-changelog.md:592 -msgid "The experimental package `flwr.dataset` was migrated to Flower Baselines." -msgstr "Le paquet expérimental `flwr.dataset` a été migré vers Flower Baselines." +#: ../../source/ref-changelog.md:436 +msgid "Both IPv4 and IPv6 addresses are supported." +msgstr "Les adresses IPv4 et IPv6 sont toutes deux prises en charge." -#: ../../source/ref-changelog.md:594 +#: ../../source/ref-changelog.md:438 msgid "" -"**Remove experimental strategies** " -"([#1280](https://github.com/adap/flower/pull/1280))" +"**Add new example of Federated Learning using fastai and Flower** " +"([#1598](https://github.com/adap/flower/pull/1598))" msgstr "" -"**Supprimer les stratégies expérimentales** " -"([#1280](https://github.com/adap/flower/pull/1280))" +"**Ajouter un nouvel exemple d'apprentissage fédéré utilisant fastai et " +"Flower** ([#1598](https://github.com/adap/flower/pull/1598))" -#: ../../source/ref-changelog.md:596 +#: ../../source/ref-changelog.md:440 msgid "" -"Remove unmaintained experimental strategies (`FastAndSlow`, `FedFSv0`, " -"`FedFSv1`)." 
-msgstr "" -"Supprimer les stratégies expérimentales non maintenues (`FastAndSlow`, " -"`FedFSv0`, `FedFSv1`)." +"A new code example (`quickstart-fastai`) demonstrates federated learning " +"with [fastai](https://www.fast.ai/) and Flower. You can find it here: " +"[quickstart-fastai](https://github.com/adap/flower/tree/main/examples" +"/quickstart-fastai)." +msgstr "" +"Un nouvel exemple de code (`quickstart-fastai`) démontre l'apprentissage " +"fédéré avec [fastai](https://www.fast.ai/) et Flower. Tu peux le trouver " +"ici : [quickstart-" +"fastai](https://github.com/adap/flower/tree/main/examples/quickstart-" +"fastai)." -#: ../../source/ref-changelog.md:598 +#: ../../source/ref-changelog.md:442 msgid "" -"**Rename** `Weights` **to** `NDArrays` " -"([#1258](https://github.com/adap/flower/pull/1258), " -"[#1259](https://github.com/adap/flower/pull/1259))" +"**Make Android example compatible with** `flwr >= 1.0.0` **and the latest" +" versions of Android** " +"([#1603](https://github.com/adap/flower/pull/1603))" msgstr "" -"**Rename** `Weights` **to** `NDArrays` " -"([#1258](https://github.com/adap/flower/pull/1258), " -"[#1259](https://github.com/adap/flower/pull/1259))" +"**Rendre l'exemple Android compatible avec** `flwr >= 1.0.0` **et les " +"dernières versions d'Android** " +"([#1603](https://github.com/adap/flower/pull/1603))" -#: ../../source/ref-changelog.md:600 +#: ../../source/ref-changelog.md:444 +#, fuzzy msgid "" -"`flwr.common.Weights` was renamed to `flwr.common.NDArrays` to better " -"capture what this type is all about." +"The Android code example has received a substantial update: the project " +"is compatible with Flower 1.0 (and later), the UI received a full " +"refresh, and the project is updated to be compatible with newer Android " +"tooling." msgstr "" -"`flwr.common.Weights` a été renommé en `flwr.common.NDArys` pour mieux " -"rendre compte de la nature de ce type." +"L'exemple de code Android a reçu une mise à jour substantielle : le " +"projet est compatible avec Flower 1.0 et les versions ultérieures, " +"l'interface utilisateur a reçu un rafraîchissement complet, et le projet " +"est mis à jour pour être compatible avec les outils Android les plus " +"récents." -#: ../../source/ref-changelog.md:602 +#: ../../source/ref-changelog.md:446 msgid "" -"**Remove antiquated** `force_final_distributed_eval` **from** " -"`start_server` ([#1258](https://github.com/adap/flower/pull/1258), " -"[#1259](https://github.com/adap/flower/pull/1259))" +"**Add new `FedProx` strategy** " +"([#1619](https://github.com/adap/flower/pull/1619))" msgstr "" -"**Supprimez l'ancien** `force_final_distributed_eval` **de** " -"`start_server` ([#1258](https://github.com/adap/flower/pull/1258), " -"[#1259](https://github.com/adap/flower/pull/1259))" +"**Ajouter une nouvelle stratégie `FedProx`** " +"([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:604 +#: ../../source/ref-changelog.md:448 msgid "" -"The `start_server` parameter `force_final_distributed_eval` has long been" -" a historic artefact, in this release it is finally gone for good." +"This " +"[strategy](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedprox.py)" +" is almost identical to " +"[`FedAvg`](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedavg.py)," +" but helps users replicate what is described in this " +"[paper](https://arxiv.org/abs/1812.06127). 
It essentially adds a " +"parameter called `proximal_mu` to regularize the local models with " +"respect to the global models." msgstr "" -"Le paramètre `start_server` `force_final_distributed_eval` a longtemps " -"été un artefact historique, dans cette version il a finalement disparu " -"pour de bon." +"Cette " +"[stratégie](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedprox.py)" +" est presque identique à " +"[`FedAvg`](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedavg.py)," +" mais aide les utilisateurs à reproduire ce qui est décrit dans cet " +"[article](https://arxiv.org/abs/1812.06127). Elle ajoute essentiellement " +"un paramètre appelé `proximal_mu` pour régulariser les modèles locaux par" +" rapport aux modèles globaux." -#: ../../source/ref-changelog.md:606 +#: ../../source/ref-changelog.md:450 msgid "" -"**Make** `get_parameters` **configurable** " -"([#1242](https://github.com/adap/flower/pull/1242))" +"**Add new metrics to telemetry events** " +"([#1640](https://github.com/adap/flower/pull/1640))" msgstr "" -"**Make** `get_parameters` **configurable** " -"([#1242](https://github.com/adap/flower/pull/1242))" +"**Ajouter de nouvelles métriques aux événements de télémétrie** " +"([#1640](https://github.com/adap/flower/pull/1640))" -#: ../../source/ref-changelog.md:608 +#: ../../source/ref-changelog.md:452 msgid "" -"The `get_parameters` method now accepts a configuration dictionary, just " -"like `get_properties`, `fit`, and `evaluate`." +"An updated event structure allows, for example, the clustering of events " +"within the same workload." msgstr "" -"La méthode `get_parameters` accepte maintenant un dictionnaire de " -"configuration, tout comme `get_properties`, `fit`, et `evaluate`." +"Une structure d'événements mise à jour permet, par exemple, de regrouper " +"des événements au sein d'une même charge de travail." -#: ../../source/ref-changelog.md:610 +#: ../../source/ref-changelog.md:454 msgid "" -"**Replace** `num_rounds` **in** `start_simulation` **with new** `config` " -"**parameter** ([#1281](https://github.com/adap/flower/pull/1281))" +"**Add new custom strategy tutorial section** " +"[#1623](https://github.com/adap/flower/pull/1623)" msgstr "" -"**Remplace** `num_rounds` **dans** `start_simulation` **avec le nouveau**" -" `config` **paramètre** " -"([#1281](https://github.com/adap/flower/pull/1281))" +"**Ajouter une nouvelle section de tutoriel sur les stratégies " +"personnalisées** [#1623](https://github.com/adap/flower/pull/1623)" -#: ../../source/ref-changelog.md:612 +#: ../../source/ref-changelog.md:456 +#, fuzzy msgid "" -"The `start_simulation` function now accepts a configuration dictionary " -"`config` instead of the `num_rounds` integer. This improves the " -"consistency between `start_simulation` and `start_server` and makes " -"transitioning between the two easier." +"The Flower tutorial now has a new section that covers implementing a " +"custom strategy from scratch: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" +"/tutorial-build-a-strategy-from-scratch-pytorch.ipynb)" msgstr "" -"La fonction `start_simulation` accepte maintenant un dictionnaire de " -"configuration `config` au lieu de l'entier `num_rounds`. Cela améliore la" -" cohérence entre `start_simulation` et `start_server` et facilite la " -"transition entre les deux." 
+"Le tutoriel sur les fleurs comporte désormais une nouvelle section qui " +"traite de la mise en œuvre d'une stratégie personnalisée à partir de zéro" +" : [Ouvrir dans " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial/Flower-3-Building-a" +"-Strategy-PyTorch.ipynb)" -#: ../../source/ref-changelog.md:616 +#: ../../source/ref-changelog.md:458 msgid "" -"**Support Python 3.10** " -"([#1320](https://github.com/adap/flower/pull/1320))" +"**Add new custom serialization tutorial section** " +"([#1622](https://github.com/adap/flower/pull/1622))" msgstr "" -"**Support Python 3.10** " -"([#1320](https://github.com/adap/flower/pull/1320))" +"**Ajouter une nouvelle section de tutoriel sur la sérialisation " +"personnalisée** ([#1622](https://github.com/adap/flower/pull/1622))" -#: ../../source/ref-changelog.md:618 +#: ../../source/ref-changelog.md:460 +#, fuzzy msgid "" -"The previous Flower release introduced experimental support for Python " -"3.10, this release declares Python 3.10 support as stable." +"The Flower tutorial now has a new section that covers custom " +"serialization: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" +"/tutorial-customize-the-client-pytorch.ipynb)" msgstr "" -"La version précédente de Flower a introduit la prise en charge " -"expérimentale de Python 3.10, cette version déclare la prise en charge de" -" Python 3.10 comme stable." +"Le tutoriel sur les fleurs comporte désormais une nouvelle section qui " +"traite de la sérialisation personnalisée : [Ouvrir dans " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial/Flower-4" +"-Client-and-NumPyClient-PyTorch.ipynb)" -#: ../../source/ref-changelog.md:620 +#: ../../source/ref-changelog.md:462 msgid "" -"**Make all** `Client` **and** `NumPyClient` **methods optional** " -"([#1260](https://github.com/adap/flower/pull/1260), " -"[#1277](https://github.com/adap/flower/pull/1277))" +"**General improvements** " +"([#1638](https://github.com/adap/flower/pull/1638), " +"[#1634](https://github.com/adap/flower/pull/1634), " +"[#1636](https://github.com/adap/flower/pull/1636), " +"[#1635](https://github.com/adap/flower/pull/1635), " +"[#1633](https://github.com/adap/flower/pull/1633), " +"[#1632](https://github.com/adap/flower/pull/1632), " +"[#1631](https://github.com/adap/flower/pull/1631), " +"[#1630](https://github.com/adap/flower/pull/1630), " +"[#1627](https://github.com/adap/flower/pull/1627), " +"[#1593](https://github.com/adap/flower/pull/1593), " +"[#1616](https://github.com/adap/flower/pull/1616), " +"[#1615](https://github.com/adap/flower/pull/1615), " +"[#1607](https://github.com/adap/flower/pull/1607), " +"[#1609](https://github.com/adap/flower/pull/1609), " +"[#1608](https://github.com/adap/flower/pull/1608), " +"[#1603](https://github.com/adap/flower/pull/1603), " +"[#1590](https://github.com/adap/flower/pull/1590), " +"[#1580](https://github.com/adap/flower/pull/1580), " +"[#1599](https://github.com/adap/flower/pull/1599), " +"[#1600](https://github.com/adap/flower/pull/1600), " +"[#1601](https://github.com/adap/flower/pull/1601), " +"[#1597](https://github.com/adap/flower/pull/1597), " +"[#1595](https://github.com/adap/flower/pull/1595), " +"[#1591](https://github.com/adap/flower/pull/1591), " +"[#1588](https://github.com/adap/flower/pull/1588), " +"[#1589](https://github.com/adap/flower/pull/1589), " +"[#1587](https://github.com/adap/flower/pull/1587), " 
+"[#1573](https://github.com/adap/flower/pull/1573), " +"[#1581](https://github.com/adap/flower/pull/1581), " +"[#1578](https://github.com/adap/flower/pull/1578), " +"[#1574](https://github.com/adap/flower/pull/1574), " +"[#1572](https://github.com/adap/flower/pull/1572), " +"[#1586](https://github.com/adap/flower/pull/1586))" msgstr "" -"**Rendre toutes les **méthodes `Client` **et** `NumPyClient` " -"**facultatives** ([#1260](https://github.com/adap/flower/pull/1260), " -"[#1277](https://github.com/adap/flower/pull/1277))" +"**General improvements** " +"([#1638](https://github.com/adap/flower/pull/1638), " +"[#1634](https://github.com/adap/flower/pull/1634), " +"[#1636](https://github.com/adap/flower/pull/1636), " +"[#1635](https://github.com/adap/flower/pull/1635), " +"[#1633](https://github.com/adap/flower/pull/1633), " +"[#1632](https://github.com/adap/flower/pull/1632), " +"[#1631](https://github.com/adap/flower/pull/1631), " +"[#1630](https://github.com/adap/flower/pull/1630), " +"[#1627](https://github.com/adap/flower/pull/1627), " +"[#1593](https://github.com/adap/flower/pull/1593), " +"[#1616](https://github.com/adap/flower/pull/1616), " +"[#1615](https://github.com/adap/flower/pull/1615), " +"[#1607](https://github.com/adap/flower/pull/1607), " +"[#1609](https://github.com/adap/flower/pull/1609), " +"[#1608](https://github.com/adap/flower/pull/1608), " +"[#1603](https://github.com/adap/flower/pull/1603), " +"[#1590](https://github.com/adap/flower/pull/1590), " +"[#1580](https://github.com/adap/flower/pull/1580), " +"[#1599](https://github.com/adap/flower/pull/1599), " +"[#1600](https://github.com/ada" -#: ../../source/ref-changelog.md:622 +#: ../../source/ref-changelog.md:466 msgid "" -"The `Client`/`NumPyClient` methods `get_properties`, `get_parameters`, " -"`fit`, and `evaluate` are all optional. This enables writing clients that" -" implement, for example, only `fit`, but no other method. No need to " -"implement `evaluate` when using centralized evaluation!" +"**Updated documentation** " +"([#1629](https://github.com/adap/flower/pull/1629), " +"[#1628](https://github.com/adap/flower/pull/1628), " +"[#1620](https://github.com/adap/flower/pull/1620), " +"[#1618](https://github.com/adap/flower/pull/1618), " +"[#1617](https://github.com/adap/flower/pull/1617), " +"[#1613](https://github.com/adap/flower/pull/1613), " +"[#1614](https://github.com/adap/flower/pull/1614))" msgstr "" -"Les méthodes `Client`/`NumPyClient` `get_properties`, `get_parameters`, " -"`fit`, et `evaluate` sont toutes optionnelles. Cela permet d'écrire des " -"clients qui n'implémentent, par exemple, que `fit`, mais aucune autre " -"méthode. Pas besoin d'implémenter `evaluate` quand on utilise " -"l'évaluation centralisée !" +"**Mise à jour de la documentation** " +"([#1629](https://github.com/adap/flower/pull/1629), " +"[#1628](https://github.com/adap/flower/pull/1628), " +"[#1620](https://github.com/adap/flower/pull/1620), " +"[#1618](https://github.com/adap/flower/pull/1618), " +"[#1617](https://github.com/adap/flower/pull/1617), " +"[#1613](https://github.com/adap/flower/pull/1613), " +"[#1614](https://github.com/adap/flower/pull/1614))" -#: ../../source/ref-changelog.md:624 +#: ../../source/ref-changelog.md:468 ../../source/ref-changelog.md:535 msgid "" -"**Enable passing a** `Server` **instance to** `start_simulation` " -"([#1281](https://github.com/adap/flower/pull/1281))" +"As usual, the documentation has improved quite a bit. 
It is another step " +"in our effort to make the Flower documentation the best documentation of " +"any project. Stay tuned and as always, feel free to provide feedback!" msgstr "" -"**Autoriser le passage d'une **instance `Server` à** `start_simulation` " -"([#1281](https://github.com/adap/flower/pull/1281))" +"Comme d'habitude, la documentation s'est beaucoup améliorée. C'est une " +"autre étape dans notre effort pour faire de la documentation de Flower la" +" meilleure documentation de tout projet. Reste à l'écoute et comme " +"toujours, n'hésite pas à nous faire part de tes commentaires !" -#: ../../source/ref-changelog.md:626 +#: ../../source/ref-changelog.md:474 +msgid "v1.2.0 (2023-01-13)" +msgstr "v1.2.0 (2023-01-13)" + +#: ../../source/ref-changelog.md:480 msgid "" -"Similar to `start_server`, `start_simulation` now accepts a full `Server`" -" instance. This enables users to heavily customize the execution of " -"eperiments and opens the door to running, for example, async FL using the" -" Virtual Client Engine." +"`Adam Narozniak`, `Charles Beauville`, `Daniel J. Beutel`, `Edoardo`, `L." +" Jiang`, `Ragy`, `Taner Topal`, `dannymcy`" msgstr "" -"Comme pour `start_server`, `start_simulation` accepte maintenant une " -"instance complète de `Server`. Cela permet aux utilisateurs de " -"personnaliser fortement l'exécution des expériences et ouvre la porte à " -"l'exécution, par exemple, de FL asynchrones à l'aide du moteur de client " -"virtuel." +"adam Narozniak`, `Charles Beauville`, `Daniel J. Beutel`, `Edoardo`, `L. " +"Jiang`, `Ragy`, `Taner Topal`, `dannymcy`" -#: ../../source/ref-changelog.md:628 +#: ../../source/ref-changelog.md:484 msgid "" -"**Update code examples** " -"([#1291](https://github.com/adap/flower/pull/1291), " -"[#1286](https://github.com/adap/flower/pull/1286), " -"[#1282](https://github.com/adap/flower/pull/1282))" +"**Introduce new Flower Baseline: FedAvg MNIST** " +"([#1497](https://github.com/adap/flower/pull/1497), " +"[#1552](https://github.com/adap/flower/pull/1552))" msgstr "" -"**Mettre à jour les exemples de code** " -"([#1291](https://github.com/adap/flower/pull/1291), " -"[#1286](https://github.com/adap/flower/pull/1286), " -"[#1282](https://github.com/adap/flower/pull/1282))" +"**Introduire une nouvelle fleur Référence : FedAvg MNIST** " +"([#1497](https://github.com/adap/flower/pull/1497), " +"[#1552](https://github.com/adap/flower/pull/1552))" -#: ../../source/ref-changelog.md:630 +#: ../../source/ref-changelog.md:486 msgid "" -"Many code examples received small or even large maintenance updates, " -"among them are" +"Over the coming weeks, we will be releasing a number of new reference " +"implementations useful especially to FL newcomers. They will typically " +"revisit well known papers from the literature, and be suitable for " +"integration in your own application or for experimentation, in order to " +"deepen your knowledge of FL in general. Today's release is the first in " +"this series. 
[Read more.](https://flower.ai/blog/2023-01-12-fl-starter-" +"pack-fedavg-mnist-cnn/)" msgstr "" -"De nombreux exemples de code ont reçu de petites ou même de grandes mises" -" à jour de maintenance" - -#: ../../source/ref-changelog.md:632 -msgid "`scikit-learn`" -msgstr "`scikit-learn`" - -#: ../../source/ref-changelog.md:633 -msgid "`simulation_pytorch`" -msgstr "`simulation_pytorch`" - -#: ../../source/ref-changelog.md:634 -msgid "`quickstart_pytorch`" -msgstr "`quickstart_pytorch` (démarrage rapide)" - -#: ../../source/ref-changelog.md:635 -msgid "`quickstart_simulation`" -msgstr "`quickstart_simulation`" - -#: ../../source/ref-changelog.md:636 -msgid "`quickstart_tensorflow`" -msgstr "`quickstart_tensorflow`" - -#: ../../source/ref-changelog.md:637 -msgid "`advanced_tensorflow`" -msgstr "`advanced_tensorflow` (en anglais)" +"Au cours des prochaines semaines, nous publierons un certain nombre de " +"nouvelles implémentations de référence utiles en particulier pour les " +"nouveaux venus en FL. Elles revisiteront généralement des articles bien " +"connus de la littérature, et seront adaptées à l'intégration dans votre " +"propre application ou à l'expérimentation, afin d'approfondir votre " +"connaissance de FL en général. La publication d'aujourd'hui est la " +"première de cette série. [Lire la " +"suite.](https://flower.ai/blog/2023-01-12-fl-starter-pack-fedavg-mnist-" +"cnn/)" -#: ../../source/ref-changelog.md:639 +#: ../../source/ref-changelog.md:488 msgid "" -"**Remove the obsolete simulation example** " -"([#1328](https://github.com/adap/flower/pull/1328))" +"**Improve GPU support in simulations** " +"([#1555](https://github.com/adap/flower/pull/1555))" msgstr "" -"**Supprime l'exemple de simulation obsolète** " -"([#1328](https://github.com/adap/flower/pull/1328))" +"**Améliorer la prise en charge des GPU dans les simulations** " +"([#1555](https://github.com/adap/flower/pull/1555))" -#: ../../source/ref-changelog.md:641 +#: ../../source/ref-changelog.md:490 msgid "" -"Removes the obsolete `simulation` example and renames " -"`quickstart_simulation` to `simulation_tensorflow` so it fits withs the " -"naming of `simulation_pytorch`" +"The Ray-based Virtual Client Engine (`start_simulation`) has been updated" +" to improve GPU support. The update includes some of the hard-earned " +"lessons from scaling simulations in GPU cluster environments. New " +"defaults make running GPU-based simulations substantially more robust." msgstr "" -"Supprime l'exemple obsolète `simulation` et renomme " -"`quickstart_simulation` en `simulation_tensorflow` pour qu'il corresponde" -" au nom de `simulation_pytorch`" +"Le moteur client virtuel basé sur Ray (`start_simulation`) a été mis à " +"jour pour améliorer la prise en charge des GPU. La mise à jour inclut " +"certaines des leçons durement apprises lors de la mise à l'échelle des " +"simulations dans des environnements de grappes de GPU. De nouveaux " +"paramètres par défaut rendent l'exécution des simulations basées sur les " +"GPU beaucoup plus robuste." 
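Tying a few of the notes above together (optional `Client`/`NumPyClient` methods, float-valued `client_resources`, and the Ray-based `start_simulation`), here is a hypothetical minimal simulation sketch. The class name, client count, and resource values are all illustrative:

```python
import numpy as np
import flwr as fl

class FitOnlyClient(fl.client.NumPyClient):
    """Sketch of a client that omits the optional `evaluate`/`get_properties` methods."""

    def get_parameters(self, config):
        # Placeholder: return the current local weights as a list of NumPy arrays.
        return [np.zeros(3)]

    def fit(self, parameters, config):
        # Placeholder: train locally, then return the updated weights,
        # the number of local examples, and optional metrics.
        return parameters, 1, {}

def client_fn(cid: str) -> fl.client.NumPyClient:
    # Placeholder: in a real project, load the data partition for `cid` here.
    return FitOnlyClient()

# No federated evaluation, since the client above implements only training.
strategy = fl.server.strategy.FedAvg(fraction_evaluate=0.0)

# Float-valued `client_resources` allow several virtual clients to share
# a single GPU in the Virtual Client Engine (assuming a GPU is available).
history = fl.simulation.start_simulation(
    client_fn=client_fn,
    num_clients=10,
    config=fl.server.ServerConfig(num_rounds=3),
    strategy=strategy,
    client_resources={"num_cpus": 1.0, "num_gpus": 0.25},
)
```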
-#: ../../source/ref-changelog.md:643 +#: ../../source/ref-changelog.md:492 msgid "" -"**Update documentation** " -"([#1223](https://github.com/adap/flower/pull/1223), " -"[#1209](https://github.com/adap/flower/pull/1209), " -"[#1251](https://github.com/adap/flower/pull/1251), " -"[#1257](https://github.com/adap/flower/pull/1257), " -"[#1267](https://github.com/adap/flower/pull/1267), " -"[#1268](https://github.com/adap/flower/pull/1268), " -"[#1300](https://github.com/adap/flower/pull/1300), " -"[#1304](https://github.com/adap/flower/pull/1304), " -"[#1305](https://github.com/adap/flower/pull/1305), " -"[#1307](https://github.com/adap/flower/pull/1307))" +"**Improve GPU support in Jupyter Notebook tutorials** " +"([#1527](https://github.com/adap/flower/pull/1527), " +"[#1558](https://github.com/adap/flower/pull/1558))" msgstr "" -"**Mise à jour de la documentation** " -"([#1223](https://github.com/adap/flower/pull/1223), " -"[#1209](https://github.com/adap/flower/pull/1209), " -"[#1251](https://github.com/adap/flower/pull/1251), " -"[#1257](https://github.com/adap/flower/pull/1257), " -"[#1267](https://github.com/adap/flower/pull/1267), " -"[#1268](https://github.com/adap/flower/pull/1268), " -"[#1300](https://github.com/adap/flower/pull/1300), " -"[#1304](https://github.com/adap/flower/pull/1304), " -"[#1305](https://github.com/adap/flower/pull/1305), " -"[#1307](https://github.com/adap/flower/pull/1307))" +"**Améliorer la prise en charge du GPU dans les tutoriels Jupyter " +"Notebook** ([#1527](https://github.com/adap/flower/pull/1527), " +"[#1558](https://github.com/adap/flower/pull/1558))" -#: ../../source/ref-changelog.md:645 +#: ../../source/ref-changelog.md:494 msgid "" -"One substantial documentation update fixes multiple smaller rendering " -"issues, makes titles more succinct to improve navigation, removes a " -"deprecated library, updates documentation dependencies, includes the " -"`flwr.common` module in the API reference, includes support for markdown-" -"based documentation, migrates the changelog from `.rst` to `.md`, and " -"fixes a number of smaller details!" +"Some users reported that Jupyter Notebooks have not always been easy to " +"use on GPU instances. We listened and made improvements to all of our " +"Jupyter notebooks! Check out the updated notebooks here:" msgstr "" -"Une mise à jour substantielle de la documentation corrige plusieurs " -"petits problèmes de rendu, rend les titres plus succincts pour améliorer " -"la navigation, supprime une bibliothèque obsolète, met à jour les " -"dépendances de la documentation, inclut le module `flwr.common` dans la " -"référence de l'API, inclut le support de la documentation basée sur le " -"markdown, migre le changelog de `.rst` vers `.md`, et corrige un certain " -"nombre de détails plus petits !" - -#: ../../source/ref-changelog.md:647 ../../source/ref-changelog.md:702 -#: ../../source/ref-changelog.md:771 ../../source/ref-changelog.md:810 -msgid "**Minor updates**" -msgstr "**Mises à jour mineures**" +"Certains utilisateurs ont signalé que les carnets Jupyter n'ont pas " +"toujours été faciles à utiliser sur les instances GPU. Nous les avons " +"écoutés et avons apporté des améliorations à tous nos carnets Jupyter ! 
" +"Découvre les carnets mis à jour ici :" -#: ../../source/ref-changelog.md:649 +#: ../../source/ref-changelog.md:496 +#, fuzzy msgid "" -"Add round number to fit and evaluate log messages " -"([#1266](https://github.com/adap/flower/pull/1266))" +"[An Introduction to Federated Learning](https://flower.ai/docs/framework" +"/tutorial-get-started-with-flower-pytorch.html)" msgstr "" -"Ajoute un chiffre rond pour ajuster et évaluer les messages du journal " -"([#1266](https://github.com/adap/flower/pull/1266))" +"[Une introduction à l'apprentissage fédéré] " +"(https://flower.ai/docs/tutorial/Flower-1-Intro-to-FL-PyTorch.html)" -#: ../../source/ref-changelog.md:650 +#: ../../source/ref-changelog.md:497 +#, fuzzy msgid "" -"Add secure gRPC connection to the `advanced_tensorflow` code example " -"([#847](https://github.com/adap/flower/pull/847))" +"[Strategies in Federated Learning](https://flower.ai/docs/framework" +"/tutorial-use-a-federated-learning-strategy-pytorch.html)" msgstr "" -"Ajouter une connexion gRPC sécurisée à l'exemple de code " -"`advanced_tensorflow` ([#847](https://github.com/adap/flower/pull/847))" +"[Stratégies d'apprentissage fédéré] " +"(https://flower.ai/docs/tutorial/Flower-2-Strategies-in-FL-PyTorch.html)" -#: ../../source/ref-changelog.md:651 +#: ../../source/ref-changelog.md:498 +#, fuzzy msgid "" -"Update developer tooling " -"([#1231](https://github.com/adap/flower/pull/1231), " -"[#1276](https://github.com/adap/flower/pull/1276), " -"[#1301](https://github.com/adap/flower/pull/1301), " -"[#1310](https://github.com/adap/flower/pull/1310))" +"[Building a Strategy](https://flower.ai/docs/framework/tutorial-build-a" +"-strategy-from-scratch-pytorch.html)" msgstr "" -"Mettre à jour les outils de développement " -"([#1231](https://github.com/adap/flower/pull/1231), " -"[#1276](https://github.com/adap/flower/pull/1276), " -"[#1301](https://github.com/adap/flower/pull/1301), " -"[#1310](https://github.com/adap/flower/pull/1310))" +"[Construire une stratégie] " +"(https://flower.ai/docs/tutorial/Flower-3-Building-a-Strategy-" +"PyTorch.html)" -#: ../../source/ref-changelog.md:652 +#: ../../source/ref-changelog.md:499 +#, fuzzy msgid "" -"Rename ProtoBuf messages to improve consistency " -"([#1214](https://github.com/adap/flower/pull/1214), " -"[#1258](https://github.com/adap/flower/pull/1258), " -"[#1259](https://github.com/adap/flower/pull/1259))" +"[Client and NumPyClient](https://flower.ai/docs/framework/tutorial-" +"customize-the-client-pytorch.html)" msgstr "" -"Renomme les messages ProtoBuf pour améliorer la cohérence " -"([#1214](https://github.com/adap/flower/pull/1214), " -"[#1258](https://github.com/adap/flower/pull/1258), " -"[#1259](https://github.com/adap/flower/pull/1259))" - -#: ../../source/ref-changelog.md:654 -msgid "v0.19.0 (2022-05-18)" -msgstr "v0.19.0 (2022-05-18)" +"[Client et NumPyClient] (https://flower.ai/docs/tutorial/Flower-4-Client-" +"and-NumPyClient-PyTorch.html)" -#: ../../source/ref-changelog.md:658 +#: ../../source/ref-changelog.md:501 msgid "" -"**Flower Baselines (preview): FedOpt, FedBN, FedAvgM** " -"([#919](https://github.com/adap/flower/pull/919), " -"[#1127](https://github.com/adap/flower/pull/1127), " -"[#914](https://github.com/adap/flower/pull/914))" +"**Introduce optional telemetry** " +"([#1533](https://github.com/adap/flower/pull/1533), " +"[#1544](https://github.com/adap/flower/pull/1544), " +"[#1584](https://github.com/adap/flower/pull/1584))" msgstr "" -"**Flower Baselines (preview) : FedOpt, FedBN, FedAvgM** " 
-"([#919](https://github.com/adap/flower/pull/919), " -"[#1127](https://github.com/adap/flower/pull/1127), " -"[#914](https://github.com/adap/flower/pull/914))" +"**Introduire la télémétrie optionnelle** " +"([#1533](https://github.com/adap/flower/pull/1533), " +"[#1544](https://github.com/adap/flower/pull/1544), " +"[#1584](https://github.com/adap/flower/pull/1584))" -#: ../../source/ref-changelog.md:660 -#, fuzzy +#: ../../source/ref-changelog.md:503 msgid "" -"The first preview release of Flower Baselines has arrived! We're " -"kickstarting Flower Baselines with implementations of FedOpt (FedYogi, " -"FedAdam, FedAdagrad), FedBN, and FedAvgM. Check the documentation on how " -"to use [Flower Baselines](https://flower.ai/docs/using-baselines.html). " -"With this first preview release we're also inviting the community to " -"[contribute their own baselines](https://flower.ai/docs/baselines/how-to-" -"contribute-baselines.html)." +"After a [request for " +"feedback](https://github.com/adap/flower/issues/1534) from the community," +" the Flower open-source project introduces optional collection of " +"*anonymous* usage metrics to make well-informed decisions to improve " +"Flower. Doing this enables the Flower team to understand how Flower is " +"used and what challenges users might face." msgstr "" -"La première version préliminaire de Flower Baselines est arrivée ! Nous " -"démarrons Flower Baselines avec des implémentations de FedOpt (FedYogi, " -"FedAdam, FedAdagrad), FedBN, et FedAvgM. Consultez la documentation sur " -"l'utilisation de [Flower Baselines](https://flower.ai/docs/using-" -"baselines.html). Avec cette première version préliminaire, nous invitons " -"également la communauté à [contribuer à leurs propres lignes de " -"base](https://flower.ai/docs/baselines/how-to-contribute-baselines.html)." +"À la suite d'une [demande de commentaires] " +"(https://github.com/adap/flower/issues/1534) de la part de la communauté," +" le projet open-source Flower introduit la collecte optionnelle de " +"mesures d'utilisation *anonymes* afin de prendre des décisions éclairées " +"pour améliorer Flower. Cela permet à l'équipe de Flower de comprendre " +"comment Flower est utilisé et quels sont les défis auxquels les " +"utilisateurs peuvent être confrontés." -#: ../../source/ref-changelog.md:662 +#: ../../source/ref-changelog.md:505 +#, fuzzy msgid "" -"**C++ client SDK (preview) and code example** " -"([#1111](https://github.com/adap/flower/pull/1111))" +"**Flower is a friendly framework for collaborative AI and data science.**" +" Staying true to this statement, Flower makes it easy to disable " +"telemetry for users who do not want to share anonymous usage metrics. " +"[Read more.](https://flower.ai/docs/telemetry.html)." msgstr "" -"**SDK client C++ (aperçu) et exemple de code** " -"([#1111](https://github.com/adap/flower/pull/1111))" +"**Flower est un cadre convivial pour l'IA collaborative et la science des" +" données.** Restant fidèle à cette déclaration, Flower permet de " +"désactiver facilement la télémétrie pour les utilisateurs qui ne " +"souhaitent pas partager des métriques d'utilisation anonymes.[Lire la " +"suite.](https://flower.ai/docs/telemetry.html)." -#: ../../source/ref-changelog.md:664 +#: ../../source/ref-changelog.md:507 msgid "" -"Preview support for Flower clients written in C++. The C++ preview " -"includes a Flower client SDK and a quickstart code example that " -"demonstrates a simple C++ client using the SDK." 
-msgstr "" -"L'aperçu C++ comprend un SDK pour les clients Flower et un exemple de " -"code de démarrage rapide qui démontre un client C++ simple utilisant le " -"SDK." - -#: ../../source/ref-changelog.md:666 -msgid "" -"**Add experimental support for Python 3.10 and Python 3.11** " -"([#1135](https://github.com/adap/flower/pull/1135))" +"**Introduce (experimental) Driver API** " +"([#1520](https://github.com/adap/flower/pull/1520), " +"[#1525](https://github.com/adap/flower/pull/1525), " +"[#1545](https://github.com/adap/flower/pull/1545), " +"[#1546](https://github.com/adap/flower/pull/1546), " +"[#1550](https://github.com/adap/flower/pull/1550), " +"[#1551](https://github.com/adap/flower/pull/1551), " +"[#1567](https://github.com/adap/flower/pull/1567))" msgstr "" -"**Ajouter la prise en charge expérimentale de Python 3.10 et Python " -"3.11** ([#1135](https://github.com/adap/flower/pull/1135))" +"**([#1520](https://github.com/adap/flower/pull/1520), " +"[#1525](https://github.com/adap/flower/pull/1525), " +"[#1545](https://github.com/adap/flower/pull/1545), " +"[#1546](https://github.com/adap/flower/pull/1546), " +"[#1550](https://github.com/adap/flower/pull/1550), " +"[#1551](https://github.com/adap/flower/pull/1551), " +"[#1567](https://github.com/adap/flower/pull/1567))" -#: ../../source/ref-changelog.md:668 +#: ../../source/ref-changelog.md:509 msgid "" -"Python 3.10 is the latest stable release of Python and Python 3.11 is due" -" to be released in October. This Flower release adds experimental support" -" for both Python versions." +"Flower now has a new (experimental) Driver API which will enable fully " +"programmable, async, and multi-tenant Federated Learning and Federated " +"Analytics applications. Phew, that's a lot! Going forward, the Driver API" +" will be the abstraction that many upcoming features will be built on - " +"and you can start building those things now, too." msgstr "" -"Python 3.10 est la dernière version stable de Python et Python 3.11 " -"devrait sortir en octobre. Cette version de Flower ajoute une prise en " -"charge expérimentale pour les deux versions de Python." +"Flower dispose désormais d'une nouvelle API de pilote (expérimentale) qui" +" permettra de créer des applications Federated Learning et Federated " +"Analytics entièrement programmables, asynchrones et multi-tenant. Ouf, " +"c'est beaucoup ! À l'avenir, l'API de pilote sera l'abstraction sur " +"laquelle de nombreuses fonctionnalités à venir seront construites - et tu" +" peux commencer à construire ces choses dès maintenant, aussi." -#: ../../source/ref-changelog.md:670 +#: ../../source/ref-changelog.md:511 msgid "" -"**Aggregate custom metrics through user-provided functions** " -"([#1144](https://github.com/adap/flower/pull/1144))" +"The Driver API also enables a new execution mode in which the server runs" +" indefinitely. Multiple individual workloads can run concurrently and " +"start and stop their execution independent of the server. This is " +"especially useful for users who want to deploy Flower in production." msgstr "" -"**Agréger des mesures personnalisées grâce à des fonctions fournies par " -"l'utilisateur** ([#1144](https://github.com/adap/flower/pull/1144))" +"L'API du pilote permet également un nouveau mode d'exécution dans lequel " +"le serveur s'exécute indéfiniment. Plusieurs charges de travail " +"individuelles peuvent s'exécuter simultanément et démarrer et arrêter " +"leur exécution indépendamment du serveur. 
Ceci est particulièrement utile" +" pour les utilisateurs qui souhaitent déployer Flower en production." -#: ../../source/ref-changelog.md:672 +#: ../../source/ref-changelog.md:513 msgid "" -"Custom metrics (e.g., `accuracy`) can now be aggregated without having to" -" customize the strategy. Built-in strategies support two new arguments, " -"`fit_metrics_aggregation_fn` and `evaluate_metrics_aggregation_fn`, that " -"allow passing custom metric aggregation functions." +"To learn more, check out the `mt-pytorch` code example. We look forward " +"to you feedback!" msgstr "" -"Les stratégies intégrées prennent en charge deux nouveaux arguments, " -"`fit_metrics_aggregation_fn` et `evaluate_metrics_aggregation_fn`, qui " -"permettent de passer des fonctions d'agrégation de métriques " -"personnalisées." +"Pour en savoir plus, consulte l'exemple de code `mt-pytorch`. Nous " +"attendons tes commentaires avec impatience !" -#: ../../source/ref-changelog.md:674 +#: ../../source/ref-changelog.md:515 msgid "" -"**User-configurable round timeout** " -"([#1162](https://github.com/adap/flower/pull/1162))" +"Please note: *The Driver API is still experimental and will likely change" +" significantly over time.*" msgstr "" -"**Temps d'attente configurable par l'utilisateur** " -"([#1162](https://github.com/adap/flower/pull/1162))" +"Remarque : *L'API du pilote est encore expérimentale et est susceptible " +"de changer de manière significative au fil du temps.*" -#: ../../source/ref-changelog.md:676 +#: ../../source/ref-changelog.md:517 msgid "" -"A new configuration value allows the round timeout to be set for " -"`start_server` and `start_simulation`. If the `config` dictionary " -"contains a `round_timeout` key (with a `float` value in seconds), the " -"server will wait *at least* `round_timeout` seconds before it closes the " -"connection." +"**Add new Federated Analytics with Pandas example** " +"([#1469](https://github.com/adap/flower/pull/1469), " +"[#1535](https://github.com/adap/flower/pull/1535))" msgstr "" -"Si le dictionnaire `config` contient une clé `round_timeout` (avec une " -"valeur `float` en secondes), le serveur attendra *au moins* " -"`round_timeout` secondes avant de fermer la connexion." +"**Ajouter un nouvel exemple de Federated Analytics avec Pandas** " +"([#1469](https://github.com/adap/flower/pull/1469), " +"[#1535](https://github.com/adap/flower/pull/1535))" -#: ../../source/ref-changelog.md:678 +#: ../../source/ref-changelog.md:519 msgid "" -"**Enable both federated evaluation and centralized evaluation to be used " -"at the same time in all built-in strategies** " -"([#1091](https://github.com/adap/flower/pull/1091))" +"A new code example (`quickstart-pandas`) demonstrates federated analytics" +" with Pandas and Flower. You can find it here: [quickstart-" +"pandas](https://github.com/adap/flower/tree/main/examples/quickstart-" +"pandas)." msgstr "" -"**Permettre l'utilisation simultanée de l'évaluation fédérée et de " -"l'évaluation centralisée dans toutes les stratégies intégrées** " -"([#1091](https://github.com/adap/flower/pull/1091))" +"Un nouvel exemple de code (`quickstart-pandas`) démontre l'analyse " +"fédérée avec Pandas et Flower. Tu peux le trouver ici : [quickstart-" +"pandas](https://github.com/adap/flower/tree/main/examples/quickstart-" +"pandas)." 
-#: ../../source/ref-changelog.md:680 +#: ../../source/ref-changelog.md:521 msgid "" -"Built-in strategies can now perform both federated evaluation (i.e., " -"client-side) and centralized evaluation (i.e., server-side) in the same " -"round. Federated evaluation can be disabled by setting `fraction_eval` to" -" `0.0`." +"**Add new strategies: Krum and MultiKrum** " +"([#1481](https://github.com/adap/flower/pull/1481))" msgstr "" -"Les stratégies intégrées peuvent maintenant effectuer une évaluation " -"fédérée (c'est-à-dire côté client) et une évaluation centralisée " -"(c'est-à-dire côté serveur) dans le même tour. L'évaluation fédérée peut " -"être désactivée en réglant `fraction_eval` sur `0.0`." +"**Ajouter de nouvelles stratégies : Krum et MultiKrum** " +"([#1481](https://github.com/adap/flower/pull/1481))" -#: ../../source/ref-changelog.md:682 +#: ../../source/ref-changelog.md:523 msgid "" -"**Two new Jupyter Notebook tutorials** " -"([#1141](https://github.com/adap/flower/pull/1141))" +"Edoardo, a computer science student at the Sapienza University of Rome, " +"contributed a new `Krum` strategy that enables users to easily use Krum " +"and MultiKrum in their workloads." msgstr "" -"**Deux nouveaux tutoriels Jupyter Notebook** " -"([#1141](https://github.com/adap/flower/pull/1141))" +"Edoardo, étudiant en informatique à l'Université Sapienza de Rome, a " +"contribué à une nouvelle stratégie `Krum` qui permet aux utilisateurs " +"d'utiliser facilement Krum et MultiKrum dans leurs charges de travail." -#: ../../source/ref-changelog.md:684 +#: ../../source/ref-changelog.md:525 msgid "" -"Two Jupyter Notebook tutorials (compatible with Google Colab) explain " -"basic and intermediate Flower features:" +"**Update C++ example to be compatible with Flower v1.2.0** " +"([#1495](https://github.com/adap/flower/pull/1495))" msgstr "" -"Deux tutoriels Jupyter Notebook (compatibles avec Google Colab) " -"expliquent les fonctionnalités de base et intermédiaires de Flower :" +"**Mettre à jour l'exemple C++ pour qu'il soit compatible avec Flower " +"v1.2.0** ([#1495](https://github.com/adap/flower/pull/1495))" -#: ../../source/ref-changelog.md:686 +#: ../../source/ref-changelog.md:527 msgid "" -"*An Introduction to Federated Learning*: [Open in " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-1" -"-Intro-to-FL-PyTorch.ipynb)" +"The C++ code example has received a substantial update to make it " +"compatible with the latest version of Flower." msgstr "" -"*Introduction à l'apprentissage fédéré* : [Open in " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-1" -"-Intro-to-FL-PyTorch.ipynb)" +"L'exemple de code C++ a reçu une mise à jour substantielle pour le rendre" +" compatible avec la dernière version de Flower." 
-#: ../../source/ref-changelog.md:688 +#: ../../source/ref-changelog.md:529 msgid "" -"*Using Strategies in Federated Learning*: [Open in " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-2" -"-Strategies-in-FL-PyTorch.ipynb)" +"**General improvements** " +"([#1491](https://github.com/adap/flower/pull/1491), " +"[#1504](https://github.com/adap/flower/pull/1504), " +"[#1506](https://github.com/adap/flower/pull/1506), " +"[#1514](https://github.com/adap/flower/pull/1514), " +"[#1522](https://github.com/adap/flower/pull/1522), " +"[#1523](https://github.com/adap/flower/pull/1523), " +"[#1526](https://github.com/adap/flower/pull/1526), " +"[#1528](https://github.com/adap/flower/pull/1528), " +"[#1547](https://github.com/adap/flower/pull/1547), " +"[#1549](https://github.com/adap/flower/pull/1549), " +"[#1560](https://github.com/adap/flower/pull/1560), " +"[#1564](https://github.com/adap/flower/pull/1564), " +"[#1566](https://github.com/adap/flower/pull/1566))" msgstr "" -"*Utiliser des stratégies dans l'apprentissage fédéré* : [Open in " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-2" -"-Strategies-in-FL-PyTorch.ipynb)" +"**Améliorations générales** " +"([#1491](https://github.com/adap/flower/pull/1491), " +"[#1504](https://github.com/adap/flower/pull/1504), " +"[#1506](https://github.com/adap/flower/pull/1506), " +"[#1514](https://github.com/adap/flower/pull/1514), " +"[#1522](https://github.com/adap/flower/pull/1522), " +"[#1523](https://github.com/adap/flower/pull/1523), " +"[#1526](https://github.com/adap/flower/pull/1526), " +"[#1528](https://github.com/adap/flower/pull/1528), " +"[#1547](https://github.com/adap/flower/pull/1547), " +"[#1549](https://github.com/adap/flower/pull/1549), " +"[#1560](https://github.com/adap/flower/pull/1560), " +"[#1564](https://github.com/adap/flower/pull/1564), " +"[#1566](https://github.com/adap/flower/pull/1566))" -#: ../../source/ref-changelog.md:690 +#: ../../source/ref-changelog.md:533 msgid "" -"**New FedAvgM strategy (Federated Averaging with Server Momentum)** " -"([#1076](https://github.com/adap/flower/pull/1076))" +"**Updated documentation** " +"([#1494](https://github.com/adap/flower/pull/1494), " +"[#1496](https://github.com/adap/flower/pull/1496), " +"[#1500](https://github.com/adap/flower/pull/1500), " +"[#1503](https://github.com/adap/flower/pull/1503), " +"[#1505](https://github.com/adap/flower/pull/1505), " +"[#1524](https://github.com/adap/flower/pull/1524), " +"[#1518](https://github.com/adap/flower/pull/1518), " +"[#1519](https://github.com/adap/flower/pull/1519), " +"[#1515](https://github.com/adap/flower/pull/1515))" msgstr "" -"**Nouvelle stratégie FedAvgM (Federated Averaging with Server Momentum)**" -" ([#1076](https://github.com/adap/flower/pull/1076))" +"**Documentation mise à jour** " +"([#1494](https://github.com/adap/flower/pull/1494), " +"[#1496](https://github.com/adap/flower/pull/1496), " +"[#1500](https://github.com/adap/flower/pull/1500), " +"[#1503](https://github.com/adap/flower/pull/1503), " +"[#1505](https://github.com/adap/flower/pull/1505), " +"[#1524](https://github.com/adap/flower/pull/1524), " +"[#1518](https://github.com/adap/flower/pull/1518), " +"[#1519](https://github.com/adap/flower/pull/1519), " +"[#1515](https://github.com/adap/flower/pull/1515))" -#: ../../source/ref-changelog.md:692 -#, fuzzy +#: ../../source/ref-changelog.md:537 msgid "" -"The new `FedAvgM` strategy implements Federated Averaging with Server " -"Momentum 
\\[Hsu et al., 2019\\]." +"One highlight is the new [first time contributor " +"guide](https://flower.ai/docs/first-time-contributors.html): if you've " +"never contributed on GitHub before, this is the perfect place to start!" msgstr "" -"La nouvelle stratégie `FedAvgM` met en œuvre la moyenne fédérée avec le " -"momentum du serveur [Hsu et al., 2019]." +"L'un des points forts est le nouveau [guide du premier contributeur] " +"(https://flower.ai/docs/first-time-contributors.html) : si tu n'as jamais" +" contribué sur GitHub auparavant, c'est l'endroit idéal pour commencer !" -#: ../../source/ref-changelog.md:694 +#: ../../source/ref-changelog.md:543 +msgid "v1.1.0 (2022-10-31)" +msgstr "v1.1.0 (2022-10-31)" + +#: ../../source/ref-changelog.md:547 msgid "" -"**New advanced PyTorch code example** " -"([#1007](https://github.com/adap/flower/pull/1007))" +"We would like to give our **special thanks** to all the contributors who " +"made the new version of Flower possible (in `git shortlog` order):" msgstr "" -"**Nouvel exemple de code PyTorch avancé** " -"([#1007](https://github.com/adap/flower/pull/1007))" +"Nous aimerions **remercier tout particulièrement** tous les contributeurs" +" qui ont rendu possible la nouvelle version de Flower (dans l'ordre `git " +"shortlog`) :" -#: ../../source/ref-changelog.md:696 +#: ../../source/ref-changelog.md:549 msgid "" -"A new code example (`advanced_pytorch`) demonstrates advanced Flower " -"concepts with PyTorch." -msgstr "" -"Un nouvel exemple de code (`advanced_pytorch`) démontre des concepts de " -"fleur avancés avec PyTorch." +"`Akis Linardos`, `Christopher S`, `Daniel J. Beutel`, `George`, `Jan " +"Schlicht`, `Mohammad Fares`, `Pedro Porto Buarque de Gusmão`, `Philipp " +"Wiesner`, `Rob Luke`, `Taner Topal`, `VasundharaAgarwal`, " +"`danielnugraha`, `edogab33`" +msgstr "" +"`Akis Linardos`, `Christopher S`, `Daniel J. Beutel`, `George`, `Jan " +"Schlicht`, `Mohammad Fares`, `Pedro Porto Buarque de Gusmão`, `Philipp " +"Wiesner`, `Rob Luke`, `Taner Topal`, `VasundharaAgarwal`, " +"`danielnugraha`, `edogab33`" -#: ../../source/ref-changelog.md:698 +#: ../../source/ref-changelog.md:553 msgid "" -"**New JAX code example** " -"([#906](https://github.com/adap/flower/pull/906), " -"[#1143](https://github.com/adap/flower/pull/1143))" +"**Introduce Differential Privacy wrappers (preview)** " +"([#1357](https://github.com/adap/flower/pull/1357), " +"[#1460](https://github.com/adap/flower/pull/1460))" msgstr "" -"**Nouvel exemple de code JAX** " -"([#906](https://github.com/adap/flower/pull/906), " -"[#1143](https://github.com/adap/flower/pull/1143))" +"**Introduire les enveloppes de confidentialité différentielle (aperçu)** " +"([#1357](https://github.com/adap/flower/pull/1357), " +"[#1460](https://github.com/adap/flower/pull/1460))" -#: ../../source/ref-changelog.md:700 +#: ../../source/ref-changelog.md:555 msgid "" -"A new code example (`jax_from_centralized_to_federated`) shows federated " -"learning with JAX and Flower." +"The first (experimental) preview of pluggable Differential Privacy " +"wrappers enables easy configuration and usage of differential privacy " +"(DP). The pluggable DP wrappers enable framework-agnostic **and** " +"strategy-agnostic usage of both client-side DP and server-side DP. Head " +"over to the Flower docs, a new explainer goes into more detail." msgstr "" -"Un nouvel exemple de code (`jax_from_centralized_to_federated`) montre " -"l'apprentissage fédéré avec JAX et Flower." 
+"Le premier aperçu (expérimental) des wrappers enfichables de " +"confidentialité différentielle permet de configurer et d'utiliser " +"facilement la confidentialité différentielle (DP). Les wrappers DP " +"enfichables permettent une utilisation agnostique du cadre **et** de la " +"stratégie à la fois de la DP côté client et de la DP côté serveur. Va " +"voir les documents de Flower, un nouvel explicatif va plus loin dans les " +"détails." -#: ../../source/ref-changelog.md:704 +#: ../../source/ref-changelog.md:557 msgid "" -"New option to keep Ray running if Ray was already initialized in " -"`start_simulation` ([#1177](https://github.com/adap/flower/pull/1177))" +"**New iOS CoreML code example** " +"([#1289](https://github.com/adap/flower/pull/1289))" msgstr "" -"Nouvelle option pour continuer à faire fonctionner Ray si Ray a déjà été " -"initialisé dans `start_simulation` " -"([#1177](https://github.com/adap/flower/pull/1177))" +"**Nouvel exemple de code CoreML pour iOS** " +"([#1289](https://github.com/adap/flower/pull/1289))" -#: ../../source/ref-changelog.md:705 +#: ../../source/ref-changelog.md:559 msgid "" -"Add support for custom `ClientManager` as a `start_simulation` parameter " -"([#1171](https://github.com/adap/flower/pull/1171))" +"Flower goes iOS! A massive new code example shows how Flower clients can " +"be built for iOS. The code example contains both Flower iOS SDK " +"components that can be used for many tasks, and one task example running " +"on CoreML." msgstr "" -"Ajout de la prise en charge d'un `ClientManager` personnalisé comme " -"paramètre de `start_simulation` " -"([#1171](https://github.com/adap/flower/pull/1171))" +"Flower passe à iOS ! Un nouvel exemple de code massif montre comment les " +"clients Flower peuvent être construits pour iOS. L'exemple de code " +"contient à la fois des composants Flower iOS SDK qui peuvent être " +"utilisés pour de nombreuses tâches, et un exemple de tâche fonctionnant " +"sur CoreML." -#: ../../source/ref-changelog.md:706 -#, fuzzy +#: ../../source/ref-changelog.md:561 msgid "" -"New documentation for [implementing " -"strategies](https://flower.ai/docs/framework/how-to-implement-" -"strategies.html) ([#1097](https://github.com/adap/flower/pull/1097), " -"[#1175](https://github.com/adap/flower/pull/1175))" +"**New FedMedian strategy** " +"([#1461](https://github.com/adap/flower/pull/1461))" msgstr "" -"Nouvelle documentation pour [mettre en œuvre des " -"stratégies](https://flower.ai/docs/framework/how-to-implement-" -"strategies.html) ([#1097](https://github.com/adap/flower/pull/1097), " -"[#1175](https://github.com/adap/flower/pull/1175))" +"**Nouvelle stratégie de FedMedian** " +"([#1461](https://github.com/adap/flower/pull/1461))" -#: ../../source/ref-changelog.md:707 +#: ../../source/ref-changelog.md:563 msgid "" -"New mobile-friendly documentation theme " -"([#1174](https://github.com/adap/flower/pull/1174))" +"The new `FedMedian` strategy implements Federated Median (FedMedian) by " +"[Yin et al., 2018](https://arxiv.org/pdf/1803.01498v1.pdf)." msgstr "" -"Nouveau thème de documentation adapté aux mobiles " -"([#1174](https://github.com/adap/flower/pull/1174))" +"La nouvelle stratégie `FedMedian` met en œuvre Federated Median " +"(FedMedian) par [Yin et al., 2018] " +"(https://arxiv.org/pdf/1803.01498v1.pdf)." 
-#: ../../source/ref-changelog.md:708 +#: ../../source/ref-changelog.md:565 msgid "" -"Limit version range for (optional) `ray` dependency to include only " -"compatible releases (`>=1.9.2,<1.12.0`) " -"([#1205](https://github.com/adap/flower/pull/1205))" +"**Log** `Client` **exceptions in Virtual Client Engine** " +"([#1493](https://github.com/adap/flower/pull/1493))" msgstr "" -"Limite la plage de versions pour la dépendance (optionnelle) `ray` pour " -"n'inclure que les versions compatibles (`>=1.9.2,<1.12.0`) " -"([#1205](https://github.com/adap/flower/pull/1205))" +"**Log** `Client` **exceptions dans le moteur de client virtuel** " +"([#1493](https://github.com/adap/flower/pull/1493))" -#: ../../source/ref-changelog.md:712 +#: ../../source/ref-changelog.md:567 msgid "" -"**Remove deprecated support for Python 3.6** " -"([#871](https://github.com/adap/flower/pull/871))" +"All `Client` exceptions happening in the VCE are now logged by default " +"and not just exposed to the configured `Strategy` (via the `failures` " +"argument)." msgstr "" -"**Supprime la prise en charge obsolète de Python 3.6** " -"([#871](https://github.com/adap/flower/pull/871))" +"Toutes les exceptions `Client` qui se produisent dans le VCE sont " +"maintenant enregistrées par défaut et ne sont pas seulement exposées à la" +" `Stratégie` configurée (via l'argument `failures`)." -#: ../../source/ref-changelog.md:713 +#: ../../source/ref-changelog.md:569 msgid "" -"**Remove deprecated KerasClient** " -"([#857](https://github.com/adap/flower/pull/857))" +"**Improve Virtual Client Engine internals** " +"([#1401](https://github.com/adap/flower/pull/1401), " +"[#1453](https://github.com/adap/flower/pull/1453))" msgstr "" -"**Supprimez KerasClient** " -"([#857](https://github.com/adap/flower/pull/857))" +"**Améliorer le moteur du client virtuel** " +"([#1401](https://github.com/adap/flower/pull/1401), " +"[#1453](https://github.com/adap/flower/pull/1453))" -#: ../../source/ref-changelog.md:714 +#: ../../source/ref-changelog.md:571 msgid "" -"**Remove deprecated no-op extra installs** " -"([#973](https://github.com/adap/flower/pull/973))" +"Some internals of the Virtual Client Engine have been revamped. The VCE " +"now uses Ray 2.0 under the hood, the value type of the `client_resources`" +" dictionary changed to `float` to allow fractions of resources to be " +"allocated." msgstr "" -"**Supprimer les installations supplémentaires no-op dépréciées** " -"([#973](https://github.com/adap/flower/pull/973))" +"Le VCE utilise maintenant Ray 2.0 sous le capot, le type de valeur du " +"dictionnaire `client_resources` a été remplacé par `float` pour permettre" +" l'allocation de fractions de ressources." 
-#: ../../source/ref-changelog.md:715 +#: ../../source/ref-changelog.md:573 msgid "" -"**Remove deprecated proto fields from** `FitRes` **and** `EvaluateRes` " -"([#869](https://github.com/adap/flower/pull/869))" +"**Support optional** `Client`**/**`NumPyClient` **methods in Virtual " +"Client Engine**" msgstr "" -"**Supprimez les champs proto obsolètes de** `FitRes` **et** `EvaluateRes`" -" ([#869](https://github.com/adap/flower/pull/869))" +"**Support optional** `Client`**/**`NumPyClient` **methods in Virtual " +"Client Engine**" -#: ../../source/ref-changelog.md:716 +#: ../../source/ref-changelog.md:575 msgid "" -"**Remove deprecated QffedAvg strategy (replaced by QFedAvg)** " -"([#1107](https://github.com/adap/flower/pull/1107))" +"The Virtual Client Engine now has full support for optional `Client` (and" +" `NumPyClient`) methods." msgstr "" -"**Supprime la stratégie QffedAvg (remplacée par QFedAvg)** " -"([#1107](https://github.com/adap/flower/pull/1107))" +"Le moteur de client virtuel prend désormais en charge les méthodes " +"optionnelles `Client` (et `NumPyClient`)." -#: ../../source/ref-changelog.md:717 +#: ../../source/ref-changelog.md:577 msgid "" -"**Remove deprecated DefaultStrategy strategy** " -"([#1142](https://github.com/adap/flower/pull/1142))" +"**Provide type information to packages using** `flwr` " +"([#1377](https://github.com/adap/flower/pull/1377))" msgstr "" -"**Supprime la stratégie DefaultStrategy qui est obsolète** " -"([#1142](https://github.com/adap/flower/pull/1142))" +"**Fournir des informations de type aux paquets en utilisant** `flwr` " +"([#1377](https://github.com/adap/flower/pull/1377))" -#: ../../source/ref-changelog.md:718 +#: ../../source/ref-changelog.md:579 msgid "" -"**Remove deprecated support for eval_fn accuracy return value** " -"([#1142](https://github.com/adap/flower/pull/1142))" +"The package `flwr` is now bundled with a `py.typed` file indicating that " +"the package is typed. This enables typing support for projects or " +"packages that use `flwr` by enabling them to improve their code using " +"static type checkers like `mypy`." msgstr "" -"**Supprimer la prise en charge obsolète de la valeur de retour de la " -"précision eval_fn** ([#1142](https://github.com/adap/flower/pull/1142))" +"Le paquet `flwr` est maintenant accompagné d'un fichier `py.typed` " +"indiquant que le paquet est typé. Cela permet de prendre en charge le " +"typage pour les projets ou les paquets qui utilisent `flwr` en leur " +"permettant d'améliorer leur code à l'aide de vérificateurs de types " +"statiques comme `mypy`." 
-#: ../../source/ref-changelog.md:719 +#: ../../source/ref-changelog.md:581 msgid "" -"**Remove deprecated support for passing initial parameters as NumPy " -"ndarrays** ([#1142](https://github.com/adap/flower/pull/1142))" +"**Updated code example** " +"([#1344](https://github.com/adap/flower/pull/1344), " +"[#1347](https://github.com/adap/flower/pull/1347))" msgstr "" -"**Supprime la prise en charge obsolète du passage des paramètres initiaux" -" en tant que ndarrays NumPy** " -"([#1142](https://github.com/adap/flower/pull/1142))" - -#: ../../source/ref-changelog.md:721 -msgid "v0.18.0 (2022-02-28)" -msgstr "v0.18.0 (2022-02-28)" +"**Exemple de code mis à jour** " +"([#1344](https://github.com/adap/flower/pull/1344), " +"[#1347](https://github.com/adap/flower/pull/1347))" -#: ../../source/ref-changelog.md:725 +#: ../../source/ref-changelog.md:583 msgid "" -"**Improved Virtual Client Engine compatibility with Jupyter Notebook / " -"Google Colab** ([#866](https://github.com/adap/flower/pull/866), " -"[#872](https://github.com/adap/flower/pull/872), " -"[#833](https://github.com/adap/flower/pull/833), " -"[#1036](https://github.com/adap/flower/pull/1036))" +"The code examples covering scikit-learn and PyTorch Lightning have been " +"updated to work with the latest version of Flower." msgstr "" -"**Amélioration de la compatibilité du moteur de client virtuel avec " -"Jupyter Notebook / Google Colab** " -"([#866](https://github.com/adap/flower/pull/866), " -"[#872](https://github.com/adap/flower/pull/872), " -"[#833](https://github.com/adap/flower/pull/833), " -"[#1036](https://github.com/adap/flower/pull/1036))" +"Les exemples de code couvrant scikit-learn et PyTorch Lightning ont été " +"mis à jour pour fonctionner avec la dernière version de Flower." -#: ../../source/ref-changelog.md:727 +#: ../../source/ref-changelog.md:585 msgid "" -"Simulations (using the Virtual Client Engine through `start_simulation`) " -"now work more smoothly on Jupyter Notebooks (incl. Google Colab) after " -"installing Flower with the `simulation` extra (`pip install " -"flwr[simulation]`)." +"**Updated documentation** " +"([#1355](https://github.com/adap/flower/pull/1355), " +"[#1558](https://github.com/adap/flower/pull/1558), " +"[#1379](https://github.com/adap/flower/pull/1379), " +"[#1380](https://github.com/adap/flower/pull/1380), " +"[#1381](https://github.com/adap/flower/pull/1381), " +"[#1332](https://github.com/adap/flower/pull/1332), " +"[#1391](https://github.com/adap/flower/pull/1391), " +"[#1403](https://github.com/adap/flower/pull/1403), " +"[#1364](https://github.com/adap/flower/pull/1364), " +"[#1409](https://github.com/adap/flower/pull/1409), " +"[#1419](https://github.com/adap/flower/pull/1419), " +"[#1444](https://github.com/adap/flower/pull/1444), " +"[#1448](https://github.com/adap/flower/pull/1448), " +"[#1417](https://github.com/adap/flower/pull/1417), " +"[#1449](https://github.com/adap/flower/pull/1449), " +"[#1465](https://github.com/adap/flower/pull/1465), " +"[#1467](https://github.com/adap/flower/pull/1467))" msgstr "" -"Les simulations (utilisant le moteur de client virtuel via " -"`start_simulation`) fonctionnent maintenant plus facilement sur les " -"Notebooks Jupyter (y compris Google Colab) après avoir installé Flower " -"avec l'option `simulation` (`pip install flwr[simulation]`)." 
+"**Documentation mise à jour** " +"([#1355](https://github.com/adap/flower/pull/1355), " +"[#1558](https://github.com/adap/flower/pull/1558), " +"[#1379](https://github.com/adap/flower/pull/1379), " +"[#1380](https://github.com/adap/flower/pull/1380), " +"[#1381](https://github.com/adap/flower/pull/1381), " +"[#1332](https://github.com/adap/flower/pull/1332), " +"[#1391](https://github.com/adap/flower/pull/1391), " +"[#1403](https://github.com/adap/flower/pull/1403), " +"[#1364](https://github.com/adap/flower/pull/1364), " +"[#1409](https://github.com/adap/flower/pull/1409), " +"[#1419](https://github.com/adap/flower/pull/1419), " +"[#1444](https://github.com/adap/flower/pull/1444), " +"[#1448](https://github.com/adap/flower/pull/1448), " +"[#1417](https://github.com/adap/flower/pull/1417), " +"[#1449](https://github.com/adap/flower/pull/1449), " +"[#1465](https://github.com/adap/flower/pull/1465), " +"[#1467](https://github.com/adap/flower/pull/1467))" -#: ../../source/ref-changelog.md:729 +#: ../../source/ref-changelog.md:587 msgid "" -"**New Jupyter Notebook code example** " -"([#833](https://github.com/adap/flower/pull/833))" +"There have been so many documentation updates that it doesn't even make " +"sense to list them individually." msgstr "" -"**Nouvel exemple de code Jupyter Notebook** " -"([#833](https://github.com/adap/flower/pull/833))" +"Il y a eu tellement de mises à jour de la documentation que cela n'a même" +" pas de sens de les énumérer individuellement." -#: ../../source/ref-changelog.md:731 +#: ../../source/ref-changelog.md:589 msgid "" -"A new code example (`quickstart_simulation`) demonstrates Flower " -"simulations using the Virtual Client Engine through Jupyter Notebook " -"(incl. Google Colab)." +"**Restructured documentation** " +"([#1387](https://github.com/adap/flower/pull/1387))" msgstr "" -"Un nouvel exemple de code (`quickstart_simulation`) démontre des " -"simulations de Flower en utilisant le moteur de client virtuel via " -"Jupyter Notebook (y compris Google Colab)." +"**Documentation restructurée** " +"([#1387](https://github.com/adap/flower/pull/1387))" -#: ../../source/ref-changelog.md:733 +#: ../../source/ref-changelog.md:591 msgid "" -"**Client properties (feature preview)** " -"([#795](https://github.com/adap/flower/pull/795))" +"The documentation has been restructured to make it easier to navigate. " +"This is just the first step in a larger effort to make the Flower " +"documentation the best documentation of any project ever. Stay tuned!" msgstr "" -"**Propriétés du client (aperçu des fonctionnalités)** " -"([#795](https://github.com/adap/flower/pull/795))" +"La documentation a été restructurée pour faciliter la navigation. Ce " +"n'est que la première étape d'un effort plus important visant à faire de " +"la documentation de Flower la meilleure documentation de tous les projets" -#: ../../source/ref-changelog.md:735 +#: ../../source/ref-changelog.md:593 msgid "" -"Clients can implement a new method `get_properties` to enable server-side" -" strategies to query client properties." +"**Open in Colab button** " +"([#1389](https://github.com/adap/flower/pull/1389))" msgstr "" -"Les clients peuvent implémenter une nouvelle méthode `get_properties` " -"pour permettre aux stratégies côté serveur d'interroger les propriétés du" -" client." 
+"**Ouvrir dans le bouton Colab** " +"([#1389](https://github.com/adap/flower/pull/1389))" -#: ../../source/ref-changelog.md:737 +#: ../../source/ref-changelog.md:595 msgid "" -"**Experimental Android support with TFLite** " -"([#865](https://github.com/adap/flower/pull/865))" +"The four parts of the Flower Federated Learning Tutorial now come with a " +"new `Open in Colab` button. No need to install anything on your local " +"machine, you can now use and learn about Flower in your browser, it's " +"only a single click away." msgstr "" -"**Support expérimental d'Android avec TFLite** " -"([#865](https://github.com/adap/flower/pull/865))" +"Les quatre parties du didacticiel d'apprentissage fédéré Flower sont " +"maintenant accompagnées d'un nouveau bouton \"Ouvrir dans Colab\". Pas " +"besoin d'installer quoi que ce soit sur ta machine locale, tu peux " +"maintenant utiliser et apprendre à connaître Flower dans ton navigateur, " +"il te suffit d'un simple clic." -#: ../../source/ref-changelog.md:739 +#: ../../source/ref-changelog.md:597 msgid "" -"Android support has finally arrived in `main`! Flower is both client-" -"agnostic and framework-agnostic by design. One can integrate arbitrary " -"client platforms and with this release, using Flower on Android has " -"become a lot easier." +"**Improved tutorial** ([#1468](https://github.com/adap/flower/pull/1468)," +" [#1470](https://github.com/adap/flower/pull/1470), " +"[#1472](https://github.com/adap/flower/pull/1472), " +"[#1473](https://github.com/adap/flower/pull/1473), " +"[#1474](https://github.com/adap/flower/pull/1474), " +"[#1475](https://github.com/adap/flower/pull/1475))" msgstr "" -"La prise en charge d'Android est enfin arrivée dans `main` ! Flower est à" -" la fois agnostique au niveau du client et du cadre de travail. On peut " -"intégrer des plates-formes client arbitraires et avec cette version, " -"l'utilisation de Flower sur Android est devenue beaucoup plus facile." +"**Tutoriel amélioré** ([#1468](https://github.com/adap/flower/pull/1468)," +" [#1470](https://github.com/adap/flower/pull/1470), " +"[#1472](https://github.com/adap/flower/pull/1472), " +"[#1473](https://github.com/adap/flower/pull/1473), " +"[#1474](https://github.com/adap/flower/pull/1474), " +"[#1475](https://github.com/adap/flower/pull/1475))" -#: ../../source/ref-changelog.md:741 +#: ../../source/ref-changelog.md:599 msgid "" -"The example uses TFLite on the client side, along with a new " -"`FedAvgAndroid` strategy. The Android client and `FedAvgAndroid` are " -"still experimental, but they are a first step towards a fully-fledged " -"Android SDK and a unified `FedAvg` implementation that integrated the new" -" functionality from `FedAvgAndroid`." +"The Flower Federated Learning Tutorial has two brand-new parts covering " +"custom strategies (still WIP) and the distinction between `Client` and " +"`NumPyClient`. The existing parts one and two have also been improved " +"(many small changes and fixes)." msgstr "" -"L'exemple utilise TFLite du côté client, ainsi qu'une nouvelle stratégie " -"`FedAvgAndroid`. Le client Android et `FedAvgAndroid` sont encore " -"expérimentaux, mais ils constituent un premier pas vers un SDK Android à " -"part entière et une implémentation unifiée de `FedAvg` intégrant la " -"nouvelle fonctionnalité de `FedAvgAndroid`." +"Le tutoriel sur l'apprentissage fédéré des fleurs a deux toutes nouvelles" +" parties couvrant les stratégies personnalisées (encore WIP) et la " +"distinction entre `Client` et `NumPyClient`. 
Les parties un et deux "
+"existantes ont également été améliorées (beaucoup de petits changements "
+"et de corrections)."
+
+#: ../../source/ref-changelog.md:605
+msgid "v1.0.0 (2022-07-28)"
+msgstr "v1.0.0 (2022-07-28)"
+
+#: ../../source/ref-changelog.md:607
+msgid "Highlights"
+msgstr "Points forts"
+
+#: ../../source/ref-changelog.md:609
+msgid "Stable **Virtual Client Engine** (accessible via `start_simulation`)"
+msgstr "**Moteur de client virtuel** stable (accessible via `start_simulation`)"
+
+#: ../../source/ref-changelog.md:610
+msgid "All `Client`/`NumPyClient` methods are now optional"
+msgstr "Toutes les méthodes `Client`/`NumPyClient` sont maintenant optionnelles"
+
+#: ../../source/ref-changelog.md:611
+msgid "Configurable `get_parameters`"
+msgstr "`get_parameters` configurable"

-#: ../../source/ref-changelog.md:743
+#: ../../source/ref-changelog.md:612
 msgid ""
-"**Make gRPC keepalive time user-configurable and decrease default "
-"keepalive time** ([#1069](https://github.com/adap/flower/pull/1069))"
+"Tons of small API cleanups resulting in a more coherent developer "
+"experience"
 msgstr ""
-"**Rendre le temps de garde gRPC configurable par l'utilisateur et "
-"diminuer le temps de garde par défaut** "
-"([#1069](https://github.com/adap/flower/pull/1069))"
+"Des tonnes de petits nettoyages d'API résultant en une expérience plus "
+"cohérente pour les développeurs"

-#: ../../source/ref-changelog.md:745
+#: ../../source/ref-changelog.md:616
 msgid ""
-"The default gRPC keepalive time has been reduced to increase the "
-"compatibility of Flower with more cloud environments (for example, "
-"Microsoft Azure). Users can configure the keepalive time to customize the"
-" gRPC stack based on specific requirements."
+"We would like to give our **special thanks** to all the contributors who "
+"made Flower 1.0 possible (in reverse [GitHub "
+"Contributors](https://github.com/adap/flower/graphs/contributors) order):"
 msgstr ""
-"Le temps de keepalive gRPC par défaut a été réduit pour augmenter la "
-"compatibilité de Flower avec davantage d'environnements cloud (par "
-"exemple, Microsoft Azure). Les utilisateurs peuvent configurer le temps "
-"de keepalive pour personnaliser la pile gRPC en fonction d'exigences "
-"spécifiques."
+"Nous tenons à remercier **particulièrement** tous les contributeurs qui " +"ont rendu Flower 1.0 possible (dans l'ordre inverse de [GitHub " +"Contributors](https://github.com/adap/flower/graphs/contributors)) :" -#: ../../source/ref-changelog.md:747 +#: ../../source/ref-changelog.md:618 msgid "" -"**New differential privacy example using Opacus and PyTorch** " -"([#805](https://github.com/adap/flower/pull/805))" +"[@rtaiello](https://github.com/rtaiello), " +"[@g-pichler](https://github.com/g-pichler), [@rob-" +"luke](https://github.com/rob-luke), [@andreea-zaharia](https://github.com" +"/andreea-zaharia), [@kinshukdua](https://github.com/kinshukdua), " +"[@nfnt](https://github.com/nfnt), " +"[@tatiana-s](https://github.com/tatiana-s), " +"[@TParcollet](https://github.com/TParcollet), " +"[@vballoli](https://github.com/vballoli), " +"[@negedng](https://github.com/negedng), " +"[@RISHIKESHAVAN](https://github.com/RISHIKESHAVAN), " +"[@hei411](https://github.com/hei411), " +"[@SebastianSpeitel](https://github.com/SebastianSpeitel), " +"[@AmitChaulwar](https://github.com/AmitChaulwar), " +"[@Rubiel1](https://github.com/Rubiel1), [@FANTOME-PAN](https://github.com" +"/FANTOME-PAN), [@Rono-BC](https://github.com/Rono-BC), " +"[@lbhm](https://github.com/lbhm), " +"[@sishtiaq](https://github.com/sishtiaq), " +"[@remde](https://github.com/remde), [@Jueun-Park](https://github.com" +"/Jueun-Park), [@architjen](https://github.com/architjen), " +"[@PratikGarai](https://github.com/PratikGarai), " +"[@mrinaald](https://github.com/mrinaald), " +"[@zliel](https://github.com/zliel), " +"[@MeiruiJiang](https://github.com/MeiruiJiang), " +"[@sancarlim](https://github.com/sancarlim), " +"[@gubertoli](https://github.com/gubertoli), " +"[@Vingt100](https://github.com/Vingt100), " +"[@MakGulati](https://github.com/MakGulati), " +"[@cozek](https://github.com/cozek), " +"[@jafermarq](https://github.com/jafermarq), " +"[@sisco0](https://github.com/sisco0), " +"[@akhilmathurs](https://github.com/akhilmathurs), " +"[@CanTuerk](https://github.com/CanTuerk), " +"[@mariaboerner1987](https://github.com/mariaboerner1987), " +"[@pedropgusmao](https://github.com/pedropgusmao), " +"[@tanertopal](https://github.com/tanertopal), " +"[@danieljanes](https://github.com/danieljanes)." 
msgstr "" -"**Nouvel exemple de confidentialité différentielle utilisant Opacus et " -"PyTorch** ([#805](https://github.com/adap/flower/pull/805))" +"[@rtaiello](https://github.com/rtaiello), " +"[@g-pichler](https://github.com/g-pichler), [@rob-" +"luke](https://github.com/rob-luke), [@andreea-zaharia](https://github.com" +"/andreea-zaharia), [@kinshukdua](https://github.com/kinshukdua), " +"[@nfnt](https://github.com/nfnt), " +"[@tatiana-s](https://github.com/tatiana-s), " +"[@TParcollet](https://github.com/TParcollet), " +"[@vballoli](https://github.com/vballoli), " +"[@negedng](https://github.com/negedng), " +"[@RISHIKESHAVAN](https://github.com/RISHIKESHAVAN), " +"[@hei411](https://github.com/hei411), " +"[@SebastianSpeitel](https://github.com/SebastianSpeitel), " +"[@AmitChaulwar](https://github.com/AmitChaulwar), " +"[@Rubiel1](https://github.com/Rubiel1), [@FANTOME-PAN](https://github.com" +"/FANTOME-PAN), [@Rono-BC](https://github.com/Rono-BC), " +"[@lbhm](https://github.com/lbhm), " +"[@sishtiaq](https://github.com/sishtiaq), " +"[@remde](https://github.com/remde), [@Jueun-Park](https://github.com" +"/Jueun-Park), [@architjen](https://github.com/architjen), " +"[@PratikGarai](https://github.com/PratikGarai), [@mrinaald](" -#: ../../source/ref-changelog.md:749 +#: ../../source/ref-changelog.md:622 msgid "" -"A new code example (`opacus`) demonstrates differentially-private " -"federated learning with Opacus, PyTorch, and Flower." +"**All arguments must be passed as keyword arguments** " +"([#1338](https://github.com/adap/flower/pull/1338))" msgstr "" -"Un nouvel exemple de code (`opacus`) démontre l'apprentissage fédéré " -"différentiellement privé avec Opacus, PyTorch et Flower." +"**Tous les arguments doivent être passés comme des arguments de mot-clé**" +" ([#1338](https://github.com/adap/flower/pull/1338))" -#: ../../source/ref-changelog.md:751 +#: ../../source/ref-changelog.md:624 +#, fuzzy msgid "" -"**New Hugging Face Transformers code example** " -"([#863](https://github.com/adap/flower/pull/863))" +"Pass all arguments as keyword arguments, positional arguments are not " +"longer supported. Code that uses positional arguments (e.g., " +"`start_client(\"127.0.0.1:8080\", FlowerClient())`) must add the keyword " +"for each positional argument (e.g., " +"`start_client(server_address=\"127.0.0.1:8080\", " +"client=FlowerClient())`)." msgstr "" -"**Nouvel exemple de code pour les Transformers à visage embrassant** " -"([#863](https://github.com/adap/flower/pull/863))" +"Le code qui utilise des arguments positionnels (par exemple, " +"``start_client(\"127.0.0.1:8080\", FlowerClient())`) doit ajouter le mot-" +"clé pour chaque argument positionnel (par exemple, " +"``start_client(server_address=\"127.0.0.1:8080\", " +"client=FlowerClient())`)." -#: ../../source/ref-changelog.md:753 +#: ../../source/ref-changelog.md:626 msgid "" -"A new code example (`quickstart_huggingface`) demonstrates usage of " -"Hugging Face Transformers with Flower." +"**Introduce configuration object** `ServerConfig` **in** `start_server` " +"**and** `start_simulation` " +"([#1317](https://github.com/adap/flower/pull/1317))" msgstr "" -"Un nouvel exemple de code (`quickstart_huggingface`) démontre " -"l'utilisation des transformateurs Hugging Face avec Flower." 
+"**Introduire l'objet de configuration** `ServerConfig` **dans** " +"`start_server` **et** `start_simulation` " +"([#1317](https://github.com/adap/flower/pull/1317))" -#: ../../source/ref-changelog.md:755 +#: ../../source/ref-changelog.md:628 msgid "" -"**New MLCube code example** " -"([#779](https://github.com/adap/flower/pull/779), " -"[#1034](https://github.com/adap/flower/pull/1034), " -"[#1065](https://github.com/adap/flower/pull/1065), " -"[#1090](https://github.com/adap/flower/pull/1090))" +"Instead of a config dictionary `{\"num_rounds\": 3, \"round_timeout\": " +"600.0}`, `start_server` and `start_simulation` now expect a configuration" +" object of type `flwr.server.ServerConfig`. `ServerConfig` takes the same" +" arguments that as the previous config dict, but it makes writing type-" +"safe code easier and the default parameters values more transparent." msgstr "" -"**Nouvel exemple de code MLCube** " -"([#779](https://github.com/adap/flower/pull/779), " -"[#1034](https://github.com/adap/flower/pull/1034), " -"[#1065](https://github.com/adap/flower/pull/1065), " -"[#1090](https://github.com/adap/flower/pull/1090))" +"Au lieu d'un dictionnaire de configuration `{\"num_rounds\" : 3, " +"\"round_timeout\" : 600.0}`, `start_server` et `start_simulation` " +"attendent maintenant un objet de configuration de type " +"`flwr.server.ServerConfig`. `ServerConfig` prend les mêmes arguments que " +"le dict de configuration précédent, mais il rend l'écriture de code " +"sécurisé plus facile et les valeurs des paramètres par défaut plus " +"transparentes." -#: ../../source/ref-changelog.md:757 +#: ../../source/ref-changelog.md:630 msgid "" -"A new code example (`quickstart_mlcube`) demonstrates usage of MLCube " -"with Flower." +"**Rename built-in strategy parameters for clarity** " +"([#1334](https://github.com/adap/flower/pull/1334))" msgstr "" -"Un nouvel exemple de code (`quickstart_mlcube`) démontre l'utilisation de" -" MLCube avec Flower." 
+"**Renommer les paramètres de la stratégie intégrée pour plus de clarté** " +"([#1334](https://github.com/adap/flower/pull/1334))" -#: ../../source/ref-changelog.md:759 +#: ../../source/ref-changelog.md:632 msgid "" -"**SSL-enabled server and client** " -"([#842](https://github.com/adap/flower/pull/842), " -"[#844](https://github.com/adap/flower/pull/844), " -"[#845](https://github.com/adap/flower/pull/845), " -"[#847](https://github.com/adap/flower/pull/847), " -"[#993](https://github.com/adap/flower/pull/993), " -"[#994](https://github.com/adap/flower/pull/994))" +"The following built-in strategy parameters were renamed to improve " +"readability and consistency with other API's:" msgstr "" -"**([#842](https://github.com/adap/flower/pull/842), " -"[#844](https://github.com/adap/flower/pull/844), " -"[#845](https://github.com/adap/flower/pull/845), " -"[#847](https://github.com/adap/flower/pull/847), " -"[#993](https://github.com/adap/flower/pull/993), " -"[#994](https://github.com/adap/flower/pull/994))" +"Les paramètres de stratégie intégrés suivants ont été renommés pour " +"améliorer la lisibilité et la cohérence avec d'autres API :" + +#: ../../source/ref-changelog.md:634 +msgid "`fraction_eval` --> `fraction_evaluate`" +msgstr "`fraction_eval` --> `fraction_evaluate`" + +#: ../../source/ref-changelog.md:635 +msgid "`min_eval_clients` --> `min_evaluate_clients`" +msgstr "`min_eval_clients` --> `min_evaluate_clients`" + +#: ../../source/ref-changelog.md:636 +msgid "`eval_fn` --> `evaluate_fn`" +msgstr "`eval_fn` --> `evaluate_fn`" -#: ../../source/ref-changelog.md:761 +#: ../../source/ref-changelog.md:638 msgid "" -"SSL enables secure encrypted connections between clients and servers. " -"This release open-sources the Flower secure gRPC implementation to make " -"encrypted communication channels accessible to all Flower users." +"**Update default arguments of built-in strategies** " +"([#1278](https://github.com/adap/flower/pull/1278))" msgstr "" -"SSL permet d'établir des connexions cryptées et sécurisées entre les " -"clients et les serveurs. Cette version met en open-source " -"l'implémentation gRPC sécurisée de Flower afin de rendre les canaux de " -"communication cryptés accessibles à tous les utilisateurs de Flower." +"**Mettre à jour les arguments par défaut des stratégies intégrées** " +"([#1278](https://github.com/adap/flower/pull/1278))" -#: ../../source/ref-changelog.md:763 +#: ../../source/ref-changelog.md:640 msgid "" -"**Updated** `FedAdam` **and** `FedYogi` **strategies** " -"([#885](https://github.com/adap/flower/pull/885), " -"[#895](https://github.com/adap/flower/pull/895))" +"All built-in strategies now use `fraction_fit=1.0` and " +"`fraction_evaluate=1.0`, which means they select *all* currently " +"available clients for training and evaluation. Projects that relied on " +"the previous default values can get the previous behaviour by " +"initializing the strategy in the following way:" msgstr "" -"**Mise à jour** `FedAdam` **et** `FedYogi` **stratégies** " -"([#885](https://github.com/adap/flower/pull/885), " -"[#895](https://github.com/adap/flower/pull/895))" +"Toutes les stratégies intégrées utilisent désormais `fraction_fit=1.0` et" +" `fraction_evaluate=1.0`, ce qui signifie qu'elles sélectionnent *tous* " +"les clients actuellement disponibles pour l'entraînement et l'évaluation." 
+" Les projets qui s'appuyaient sur les valeurs par défaut précédentes " +"peuvent retrouver le comportement antérieur en initialisant la stratégie " +"de la manière suivante :" + +#: ../../source/ref-changelog.md:642 +msgid "`strategy = FedAvg(fraction_fit=0.1, fraction_evaluate=0.1)`" +msgstr "`stratégie = FedAvg(fraction_fit=0.1, fraction_evaluate=0.1)`" -#: ../../source/ref-changelog.md:765 +#: ../../source/ref-changelog.md:644 msgid "" -"`FedAdam` and `FedAdam` match the latest version of the Adaptive " -"Federated Optimization paper." +"**Add** `server_round` **to** `Strategy.evaluate` " +"([#1334](https://github.com/adap/flower/pull/1334))" msgstr "" -"`FedAdam` et `FedAdam` correspondent à la dernière version de l'article " -"sur l'optimisation fédérée adaptative." +"**Ajouter** `server_round` **à** `Strategy.evaluate` " +"([#1334](https://github.com/adap/flower/pull/1334))" -#: ../../source/ref-changelog.md:767 +#: ../../source/ref-changelog.md:646 msgid "" -"**Initialize** `start_simulation` **with a list of client IDs** " -"([#860](https://github.com/adap/flower/pull/860))" +"The `Strategy` method `evaluate` now receives the current round of " +"federated learning/evaluation as the first parameter." msgstr "" -"**Initialise** `start_simulation` **avec une liste d'ID de clients** " -"([#860](https://github.com/adap/flower/pull/860))" +"La méthode `Stratégie` `évaluer` reçoit maintenant le cycle actuel " +"d'apprentissage/évaluation fédéré comme premier paramètre." -#: ../../source/ref-changelog.md:769 +#: ../../source/ref-changelog.md:648 msgid "" -"`start_simulation` can now be called with a list of client IDs " -"(`clients_ids`, type: `List[str]`). Those IDs will be passed to the " -"`client_fn` whenever a client needs to be initialized, which can make it " -"easier to load data partitions that are not accessible through `int` " -"identifiers." +"**Add** `server_round` **and** `config` **parameters to** `evaluate_fn` " +"([#1334](https://github.com/adap/flower/pull/1334))" msgstr "" -"`start_simulation` peut maintenant être appelé avec une liste " -"d'identifiants de clients (`clients_ids`, type : `List[str]`). Ces " -"identifiants seront passés à `client_fn` chaque fois qu'un client doit " -"être initialisé, ce qui peut faciliter le chargement de partitions de " -"données qui ne sont pas accessibles par des identifiants `int`." +"**Ajouter** `server_round` **et** `config` **paramètres à** `evaluate_fn`" +" ([#1334](https://github.com/adap/flower/pull/1334))" -#: ../../source/ref-changelog.md:773 +#: ../../source/ref-changelog.md:650 msgid "" -"Update `num_examples` calculation in PyTorch code examples in " -"([#909](https://github.com/adap/flower/pull/909))" +"The `evaluate_fn` passed to built-in strategies like `FedAvg` now takes " +"three parameters: (1) The current round of federated learning/evaluation " +"(`server_round`), (2) the model parameters to evaluate (`parameters`), " +"and (3) a config dictionary (`config`)." msgstr "" -"Mettre à jour le calcul de `num_examples` dans les exemples de code " -"PyTorch dans ([#909](https://github.com/adap/flower/pull/909))" +"Le `evaluate_fn` passé aux stratégies intégrées comme `FedAvg` prend " +"maintenant trois paramètres : (1) le cycle actuel " +"d'apprentissage/évaluation fédéré (`server_round`), (2) les paramètres du" +" modèle à évaluer (`parameters`), et (3) un dictionnaire de configuration" +" (`config`)." 
-#: ../../source/ref-changelog.md:774
+#: ../../source/ref-changelog.md:652
 msgid ""
-"Expose Flower version through `flwr.__version__` "
-"([#952](https://github.com/adap/flower/pull/952))"
+"**Rename** `rnd` **to** `server_round` "
+"([#1321](https://github.com/adap/flower/pull/1321))"
 msgstr ""
-"Exposer la version de Flower à travers `flwr.__version__` "
-"([#952](https://github.com/adap/flower/pull/952))"
+"**Renommer** `rnd` **en** `server_round` "
+"([#1321](https://github.com/adap/flower/pull/1321))"

-#: ../../source/ref-changelog.md:775
+#: ../../source/ref-changelog.md:654
 msgid ""
-"`start_server` in `app.py` now returns a `History` object containing "
-"metrics from training ([#974](https://github.com/adap/flower/pull/974))"
+"Several Flower methods and functions (`evaluate_fn`, `configure_fit`, "
+"`aggregate_fit`, `configure_evaluate`, `aggregate_evaluate`) receive the "
+"current round of federated learning/evaluation as their first parameter. "
+"To improve readability and avoid confusion with *random*, this parameter "
+"has been renamed from `rnd` to `server_round`."
 msgstr ""
-"`start_server` dans `app.py` renvoie maintenant un objet `History` "
-"contenant les métriques de l'entraînement "
-"([#974](https://github.com/adap/flower/pull/974))"
+"Plusieurs méthodes et fonctions de Flower (`evaluate_fn`, "
+"`configure_fit`, `aggregate_fit`, `configure_evaluate`, "
+"`aggregate_evaluate`) reçoivent le cycle actuel "
+"d'apprentissage/évaluation fédéré comme premier paramètre. Pour améliorer"
+" la lisibilité et éviter la confusion avec *random*, ce paramètre a été "
+"renommé de `rnd` en `server_round`."

-#: ../../source/ref-changelog.md:776
+#: ../../source/ref-changelog.md:656
 msgid ""
-"Make `max_workers` (used by `ThreadPoolExecutor`) configurable "
-"([#978](https://github.com/adap/flower/pull/978))"
+"**Move** `flwr.dataset` **to** `flwr_baselines` "
+"([#1273](https://github.com/adap/flower/pull/1273))"
 msgstr ""
-"Rendre `max_workers` (utilisé par `ThreadPoolExecutor`) configurable "
-"([#978](https://github.com/adap/flower/pull/978))"
+"**Déplacer** `flwr.dataset` **vers** `flwr_baselines` "
+"([#1273](https://github.com/adap/flower/pull/1273))"
+
+#: ../../source/ref-changelog.md:658
+msgid "The experimental package `flwr.dataset` was migrated to Flower Baselines."
+msgstr "Le paquet expérimental `flwr.dataset` a été migré vers Flower Baselines."

-#: ../../source/ref-changelog.md:777
+#: ../../source/ref-changelog.md:660
 msgid ""
-"Increase sleep time after server start to three seconds in all code "
-"examples ([#1086](https://github.com/adap/flower/pull/1086))"
+"**Remove experimental strategies** "
+"([#1280](https://github.com/adap/flower/pull/1280))"
 msgstr ""
-"Augmente le temps de sommeil après le démarrage du serveur à trois "
-"secondes dans tous les exemples de code "
-"([#1086](https://github.com/adap/flower/pull/1086))"
+"**Supprimer les stratégies expérimentales** "
+"([#1280](https://github.com/adap/flower/pull/1280))"

-#: ../../source/ref-changelog.md:778
+#: ../../source/ref-changelog.md:662
 msgid ""
-"Added a new FAQ section to the documentation "
-"([#948](https://github.com/adap/flower/pull/948))"
+"Remove unmaintained experimental strategies (`FastAndSlow`, `FedFSv0`, "
+"`FedFSv1`)."
 msgstr ""
-"Ajout d'une nouvelle section FAQ à la documentation "
-"([#948](https://github.com/adap/flower/pull/948))"
+"Supprimer les stratégies expérimentales non maintenues (`FastAndSlow`, "
+"`FedFSv0`, `FedFSv1`)."
-#: ../../source/ref-changelog.md:779
+#: ../../source/ref-changelog.md:664
 msgid ""
-"And many more under-the-hood changes, library updates, documentation "
-"changes, and tooling improvements!"
+"**Rename** `Weights` **to** `NDArrays` "
+"([#1258](https://github.com/adap/flower/pull/1258), "
+"[#1259](https://github.com/adap/flower/pull/1259))"
 msgstr ""
-"Et bien d'autres changements sous le capot, des mises à jour de la "
-"bibliothèque, des modifications de la documentation et des améliorations "
-"de l'outillage !"
+"**Renommer** `Weights` **en** `NDArrays` "
+"([#1258](https://github.com/adap/flower/pull/1258), "
+"[#1259](https://github.com/adap/flower/pull/1259))"

-#: ../../source/ref-changelog.md:783
+#: ../../source/ref-changelog.md:666
 msgid ""
-"**Removed** `flwr_example` **and** `flwr_experimental` **from release "
-"build** ([#869](https://github.com/adap/flower/pull/869))"
+"`flwr.common.Weights` was renamed to `flwr.common.NDArrays` to better "
+"capture what this type is all about."
 msgstr ""
-"**Supprimé** `flwr_example` **et** `flwr_experimental` **de la version "
-"release build** ([#869](https://github.com/adap/flower/pull/869))"
+"`flwr.common.Weights` a été renommé en `flwr.common.NDArrays` pour mieux "
+"rendre compte de la nature de ce type."

-#: ../../source/ref-changelog.md:785
+#: ../../source/ref-changelog.md:668
 msgid ""
-"The packages `flwr_example` and `flwr_experimental` have been deprecated "
-"since Flower 0.12.0 and they are not longer included in Flower release "
-"builds. The associated extras (`baseline`, `examples-pytorch`, `examples-"
-"tensorflow`, `http-logger`, `ops`) are now no-op and will be removed in "
-"an upcoming release."
+"**Remove antiquated** `force_final_distributed_eval` **from** "
+"`start_server` ([#1258](https://github.com/adap/flower/pull/1258), "
+"[#1259](https://github.com/adap/flower/pull/1259))"
 msgstr ""
-"Les paquets `flwr_example` et `flwr_experimental` ont été dépréciés "
-"depuis Flower 0.12.0 et ils ne sont plus inclus dans les builds de "
-"Flower. Les extras associés (`baseline`, `examples-pytorch`, `examples-"
-"tensorflow`, `http-logger`, `ops`) sont maintenant no-op et seront "
-"supprimés dans une prochaine version."
-
-#: ../../source/ref-changelog.md:787
-msgid "v0.17.0 (2021-09-24)"
-msgstr "v0.17.0 (2021-09-24)"
+"**Supprimez l'ancien** `force_final_distributed_eval` **de** "
+"`start_server` ([#1258](https://github.com/adap/flower/pull/1258), "
+"[#1259](https://github.com/adap/flower/pull/1259))"

-#: ../../source/ref-changelog.md:791
+#: ../../source/ref-changelog.md:670
 msgid ""
-"**Experimental virtual client engine** "
-"([#781](https://github.com/adap/flower/pull/781) "
-"[#790](https://github.com/adap/flower/pull/790) "
-"[#791](https://github.com/adap/flower/pull/791))"
+"The `start_server` parameter `force_final_distributed_eval` has long been"
+" a historic artefact, in this release it is finally gone for good."
 msgstr ""
-"**Moteur expérimental de client virtuel** "
-"([#781](https://github.com/adap/flower/pull/781) "
-"[#790](https://github.com/adap/flower/pull/790) "
-"[#791](https://github.com/adap/flower/pull/791))"
+"Le paramètre `force_final_distributed_eval` de `start_server` a longtemps "
+"été un artefact historique, dans cette version il a finalement disparu "
+"pour de bon."

-#: ../../source/ref-changelog.md:793
+#: ../../source/ref-changelog.md:672
 msgid ""
-"One of Flower's goals is to enable research at scale. 
This release " -"enables a first (experimental) peek at a major new feature, codenamed the" -" virtual client engine. Virtual clients enable simulations that scale to " -"a (very) large number of clients on a single machine or compute cluster. " -"The easiest way to test the new functionality is to look at the two new " -"code examples called `quickstart_simulation` and `simulation_pytorch`." +"**Make** `get_parameters` **configurable** " +"([#1242](https://github.com/adap/flower/pull/1242))" msgstr "" -"L'un des objectifs de Flower est de permettre la recherche à grande " -"échelle. Cette version donne un premier aperçu (expérimental) d'une " -"nouvelle fonctionnalité majeure, connue sous le nom de code de moteur de " -"client virtuel. Les clients virtuels permettent des simulations qui " -"s'étendent à un (très) grand nombre de clients sur une seule machine ou " -"une grappe de calcul. La façon la plus simple de tester la nouvelle " -"fonctionnalité est de regarder les deux nouveaux exemples de code appelés" -" `quickstart_simulation` et `simulation_pytorch`." +"**Make** `get_parameters` **configurable** " +"([#1242](https://github.com/adap/flower/pull/1242))" -#: ../../source/ref-changelog.md:795 +#: ../../source/ref-changelog.md:674 msgid "" -"The feature is still experimental, so there's no stability guarantee for " -"the API. It's also not quite ready for prime time and comes with a few " -"known caveats. However, those who are curious are encouraged to try it " -"out and share their thoughts." +"The `get_parameters` method now accepts a configuration dictionary, just " +"like `get_properties`, `fit`, and `evaluate`." msgstr "" -"La fonction est encore expérimentale, il n'y a donc aucune garantie de " -"stabilité pour l'API. Elle n'est pas non plus tout à fait prête pour le " -"prime time et s'accompagne de quelques mises en garde connues. Cependant," -" les personnes curieuses sont encouragées à l'essayer et à faire part de " -"leurs réflexions." +"La méthode `get_parameters` accepte maintenant un dictionnaire de " +"configuration, tout comme `get_properties`, `fit`, et `evaluate`." -#: ../../source/ref-changelog.md:797 +#: ../../source/ref-changelog.md:676 msgid "" -"**New built-in strategies** " -"([#828](https://github.com/adap/flower/pull/828) " -"[#822](https://github.com/adap/flower/pull/822))" +"**Replace** `num_rounds` **in** `start_simulation` **with new** `config` " +"**parameter** ([#1281](https://github.com/adap/flower/pull/1281))" msgstr "" -"**Nouvelles stratégies intégrées** " -"([#828](https://github.com/adap/flower/pull/828) " -"[#822](https://github.com/adap/flower/pull/822))" +"**Remplace** `num_rounds` **dans** `start_simulation` **avec le nouveau**" +" `config` **paramètre** " +"([#1281](https://github.com/adap/flower/pull/1281))" -#: ../../source/ref-changelog.md:799 +#: ../../source/ref-changelog.md:678 msgid "" -"FedYogi - Federated learning strategy using Yogi on server-side. " -"Implementation based on https://arxiv.org/abs/2003.00295" +"The `start_simulation` function now accepts a configuration dictionary " +"`config` instead of the `num_rounds` integer. This improves the " +"consistency between `start_simulation` and `start_server` and makes " +"transitioning between the two easier." msgstr "" -"FedYogi - Stratégie d'apprentissage fédéré utilisant Yogi côté serveur. 
" -"Mise en oeuvre basée sur https://arxiv.org/abs/2003.00295" +"La fonction `start_simulation` accepte maintenant un dictionnaire de " +"configuration `config` au lieu de l'entier `num_rounds`. Cela améliore la" +" cohérence entre `start_simulation` et `start_server` et facilite la " +"transition entre les deux." -#: ../../source/ref-changelog.md:800 +#: ../../source/ref-changelog.md:682 msgid "" -"FedAdam - Federated learning strategy using Adam on server-side. " -"Implementation based on https://arxiv.org/abs/2003.00295" +"**Support Python 3.10** " +"([#1320](https://github.com/adap/flower/pull/1320))" msgstr "" -"FedAdam - Stratégie d'apprentissage fédéré utilisant Adam côté serveur. " -"Mise en œuvre basée sur https://arxiv.org/abs/2003.00295" +"**Support Python 3.10** " +"([#1320](https://github.com/adap/flower/pull/1320))" -#: ../../source/ref-changelog.md:802 +#: ../../source/ref-changelog.md:684 msgid "" -"**New PyTorch Lightning code example** " -"([#617](https://github.com/adap/flower/pull/617))" +"The previous Flower release introduced experimental support for Python " +"3.10, this release declares Python 3.10 support as stable." msgstr "" -"**Nouvel exemple de code PyTorch Lightning** " -"([#617](https://github.com/adap/flower/pull/617))" +"La version précédente de Flower a introduit la prise en charge " +"expérimentale de Python 3.10, cette version déclare la prise en charge de" +" Python 3.10 comme stable." -#: ../../source/ref-changelog.md:804 +#: ../../source/ref-changelog.md:686 msgid "" -"**New Variational Auto-Encoder code example** " -"([#752](https://github.com/adap/flower/pull/752))" +"**Make all** `Client` **and** `NumPyClient` **methods optional** " +"([#1260](https://github.com/adap/flower/pull/1260), " +"[#1277](https://github.com/adap/flower/pull/1277))" msgstr "" -"**Nouvel exemple de code d'autocodage variationnel** " -"([#752](https://github.com/adap/flower/pull/752))" +"**Rendre toutes les **méthodes `Client` **et** `NumPyClient` " +"**facultatives** ([#1260](https://github.com/adap/flower/pull/1260), " +"[#1277](https://github.com/adap/flower/pull/1277))" -#: ../../source/ref-changelog.md:806 +#: ../../source/ref-changelog.md:688 msgid "" -"**New scikit-learn code example** " -"([#748](https://github.com/adap/flower/pull/748))" +"The `Client`/`NumPyClient` methods `get_properties`, `get_parameters`, " +"`fit`, and `evaluate` are all optional. This enables writing clients that" +" implement, for example, only `fit`, but no other method. No need to " +"implement `evaluate` when using centralized evaluation!" msgstr "" -"**Nouvel exemple de code scikit-learn** " -"([#748](https://github.com/adap/flower/pull/748))" +"Les méthodes `Client`/`NumPyClient` `get_properties`, `get_parameters`, " +"`fit`, et `evaluate` sont toutes optionnelles. Cela permet d'écrire des " +"clients qui n'implémentent, par exemple, que `fit`, mais aucune autre " +"méthode. Pas besoin d'implémenter `evaluate` quand on utilise " +"l'évaluation centralisée !" 
-#: ../../source/ref-changelog.md:808
+#: ../../source/ref-changelog.md:690
 msgid ""
-"**New experimental TensorBoard strategy** "
-"([#789](https://github.com/adap/flower/pull/789))"
+"**Enable passing a** `Server` **instance to** `start_simulation` "
+"([#1281](https://github.com/adap/flower/pull/1281))"
 msgstr ""
-"**Nouvelle stratégie expérimentale TensorBoard** "
-"([#789](https://github.com/adap/flower/pull/789))"
+"**Autoriser le passage d'une instance** `Server` **à** `start_simulation` "
+"([#1281](https://github.com/adap/flower/pull/1281))"

-#: ../../source/ref-changelog.md:812
+#: ../../source/ref-changelog.md:692
 msgid ""
-"Improved advanced TensorFlow code example "
-"([#769](https://github.com/adap/flower/pull/769))"
+"Similar to `start_server`, `start_simulation` now accepts a full `Server`"
+" instance. This enables users to heavily customize the execution of "
+"experiments and opens the door to running, for example, async FL using the"
+" Virtual Client Engine."
 msgstr ""
-"Amélioration de l'exemple de code TensorFlow avancé "
-"([#769](https://github.com/adap/flower/pull/769))"
+"Comme pour `start_server`, `start_simulation` accepte maintenant une "
+"instance complète de `Server`. Cela permet aux utilisateurs de "
+"personnaliser fortement l'exécution des expériences et ouvre la porte à "
+"l'exécution, par exemple, de FL asynchrone à l'aide du moteur de client "
+"virtuel."

-#: ../../source/ref-changelog.md:813
+#: ../../source/ref-changelog.md:694
 msgid ""
-"Warning when `min_available_clients` is misconfigured "
-"([#830](https://github.com/adap/flower/pull/830))"
+"**Update code examples** "
+"([#1291](https://github.com/adap/flower/pull/1291), "
+"[#1286](https://github.com/adap/flower/pull/1286), "
+"[#1282](https://github.com/adap/flower/pull/1282))"
 msgstr ""
-"Avertissement lorsque `min_available_clients` est mal configuré "
-"([#830](https://github.com/adap/flower/pull/830))"
+"**Mettre à jour les exemples de code** "
+"([#1291](https://github.com/adap/flower/pull/1291), "
+"[#1286](https://github.com/adap/flower/pull/1286), "
+"[#1282](https://github.com/adap/flower/pull/1282))"

-#: ../../source/ref-changelog.md:814
+#: ../../source/ref-changelog.md:696
 msgid ""
-"Improved gRPC server docs "
-"([#841](https://github.com/adap/flower/pull/841))"
+"Many code examples received small or even large maintenance updates, "
+"among them are"
 msgstr ""
-"Amélioration de la documentation sur le serveur gRPC "
-"([#841](https://github.com/adap/flower/pull/841))"
+"De nombreux exemples de code ont reçu de petites ou même de grandes mises"
+" à jour de maintenance, parmi lesquels"

-#: ../../source/ref-changelog.md:815
+#: ../../source/ref-changelog.md:698
+msgid "`scikit-learn`"
+msgstr "`scikit-learn`"
+
+#: ../../source/ref-changelog.md:699
+msgid "`simulation_pytorch`"
+msgstr "`simulation_pytorch`"
+
+#: ../../source/ref-changelog.md:700
+msgid "`quickstart_pytorch`"
+msgstr "`quickstart_pytorch`"
+
+#: ../../source/ref-changelog.md:701
+msgid "`quickstart_simulation`"
+msgstr "`quickstart_simulation`"
+
+#: ../../source/ref-changelog.md:702
+msgid "`quickstart_tensorflow`"
+msgstr "`quickstart_tensorflow`"
+
+#: ../../source/ref-changelog.md:703
+msgid "`advanced_tensorflow`"
+msgstr "`advanced_tensorflow`"
+
+#: ../../source/ref-changelog.md:705
 msgid ""
-"Improved error message in `NumPyClient` "
-"([#851](https://github.com/adap/flower/pull/851))"
+"**Remove the obsolete simulation example** "
+"([#1328](https://github.com/adap/flower/pull/1328))"
msgstr "" -"Amélioration du message d'erreur dans `NumPyClient` " -"([#851](https://github.com/adap/flower/pull/851))" +"**Supprime l'exemple de simulation obsolète** " +"([#1328](https://github.com/adap/flower/pull/1328))" -#: ../../source/ref-changelog.md:816 +#: ../../source/ref-changelog.md:707 msgid "" -"Improved PyTorch quickstart code example " -"([#852](https://github.com/adap/flower/pull/852))" +"Removes the obsolete `simulation` example and renames " +"`quickstart_simulation` to `simulation_tensorflow` so it fits withs the " +"naming of `simulation_pytorch`" msgstr "" -"Exemple de code de démarrage rapide PyTorch amélioré " -"([#852](https://github.com/adap/flower/pull/852))" +"Supprime l'exemple obsolète `simulation` et renomme " +"`quickstart_simulation` en `simulation_tensorflow` pour qu'il corresponde" +" au nom de `simulation_pytorch`" -#: ../../source/ref-changelog.md:820 +#: ../../source/ref-changelog.md:709 msgid "" -"**Disabled final distributed evaluation** " -"([#800](https://github.com/adap/flower/pull/800))" +"**Update documentation** " +"([#1223](https://github.com/adap/flower/pull/1223), " +"[#1209](https://github.com/adap/flower/pull/1209), " +"[#1251](https://github.com/adap/flower/pull/1251), " +"[#1257](https://github.com/adap/flower/pull/1257), " +"[#1267](https://github.com/adap/flower/pull/1267), " +"[#1268](https://github.com/adap/flower/pull/1268), " +"[#1300](https://github.com/adap/flower/pull/1300), " +"[#1304](https://github.com/adap/flower/pull/1304), " +"[#1305](https://github.com/adap/flower/pull/1305), " +"[#1307](https://github.com/adap/flower/pull/1307))" msgstr "" -"**Désactivé l'évaluation finale distribuée** " -"([#800](https://github.com/adap/flower/pull/800))" +"**Mise à jour de la documentation** " +"([#1223](https://github.com/adap/flower/pull/1223), " +"[#1209](https://github.com/adap/flower/pull/1209), " +"[#1251](https://github.com/adap/flower/pull/1251), " +"[#1257](https://github.com/adap/flower/pull/1257), " +"[#1267](https://github.com/adap/flower/pull/1267), " +"[#1268](https://github.com/adap/flower/pull/1268), " +"[#1300](https://github.com/adap/flower/pull/1300), " +"[#1304](https://github.com/adap/flower/pull/1304), " +"[#1305](https://github.com/adap/flower/pull/1305), " +"[#1307](https://github.com/adap/flower/pull/1307))" -#: ../../source/ref-changelog.md:822 +#: ../../source/ref-changelog.md:711 msgid "" -"Prior behaviour was to perform a final round of distributed evaluation on" -" all connected clients, which is often not required (e.g., when using " -"server-side evaluation). The prior behaviour can be enabled by passing " -"`force_final_distributed_eval=True` to `start_server`." +"One substantial documentation update fixes multiple smaller rendering " +"issues, makes titles more succinct to improve navigation, removes a " +"deprecated library, updates documentation dependencies, includes the " +"`flwr.common` module in the API reference, includes support for markdown-" +"based documentation, migrates the changelog from `.rst` to `.md`, and " +"fixes a number of smaller details!" msgstr "" -"Le comportement précédent consistait à effectuer un dernier tour " -"d'évaluation distribuée sur tous les clients connectés, ce qui n'est " -"souvent pas nécessaire (par exemple, lors de l'utilisation de " -"l'évaluation côté serveur). Le comportement précédent peut être activé en" -" passant `force_final_distributed_eval=True` à `start_server`." 
+"Une mise à jour substantielle de la documentation corrige plusieurs " +"petits problèmes de rendu, rend les titres plus succincts pour améliorer " +"la navigation, supprime une bibliothèque obsolète, met à jour les " +"dépendances de la documentation, inclut le module `flwr.common` dans la " +"référence de l'API, inclut le support de la documentation basée sur le " +"markdown, migre le changelog de `.rst` vers `.md`, et corrige un certain " +"nombre de détails plus petits !" + +#: ../../source/ref-changelog.md:713 ../../source/ref-changelog.md:768 +#: ../../source/ref-changelog.md:837 ../../source/ref-changelog.md:876 +msgid "**Minor updates**" +msgstr "**Mises à jour mineures**" -#: ../../source/ref-changelog.md:824 +#: ../../source/ref-changelog.md:715 msgid "" -"**Renamed q-FedAvg strategy** " -"([#802](https://github.com/adap/flower/pull/802))" +"Add round number to fit and evaluate log messages " +"([#1266](https://github.com/adap/flower/pull/1266))" msgstr "" -"**Renommé stratégie q-FedAvg** " -"([#802](https://github.com/adap/flower/pull/802))" +"Ajoute un chiffre rond pour ajuster et évaluer les messages du journal " +"([#1266](https://github.com/adap/flower/pull/1266))" -#: ../../source/ref-changelog.md:826 +#: ../../source/ref-changelog.md:716 msgid "" -"The strategy named `QffedAvg` was renamed to `QFedAvg` to better reflect " -"the notation given in the original paper (q-FFL is the optimization " -"objective, q-FedAvg is the proposed solver). Note the original (now " -"deprecated) `QffedAvg` class is still available for compatibility reasons" -" (it will be removed in a future release)." +"Add secure gRPC connection to the `advanced_tensorflow` code example " +"([#847](https://github.com/adap/flower/pull/847))" msgstr "" -"La stratégie nommée `QffedAvg` a été renommée en `QFedAvg` pour mieux " -"refléter la notation donnée dans l'article original (q-FFL est l'objectif" -" d'optimisation, q-FedAvg est le solveur proposé). Notez que la classe " -"`QffedAvg` originale (maintenant obsolète) est toujours disponible pour " -"des raisons de compatibilité (elle sera supprimée dans une prochaine " -"version)." +"Ajouter une connexion gRPC sécurisée à l'exemple de code " +"`advanced_tensorflow` ([#847](https://github.com/adap/flower/pull/847))" -#: ../../source/ref-changelog.md:828 +#: ../../source/ref-changelog.md:717 msgid "" -"**Deprecated and renamed code example** `simulation_pytorch` **to** " -"`simulation_pytorch_legacy` " -"([#791](https://github.com/adap/flower/pull/791))" +"Update developer tooling " +"([#1231](https://github.com/adap/flower/pull/1231), " +"[#1276](https://github.com/adap/flower/pull/1276), " +"[#1301](https://github.com/adap/flower/pull/1301), " +"[#1310](https://github.com/adap/flower/pull/1310))" msgstr "" -"**Exemple de code déprécié et renommé** `simulation_pytorch` **en** " -"`simulation_pytorch_legacy` " -"([#791](https://github.com/adap/flower/pull/791))" +"Mettre à jour les outils de développement " +"([#1231](https://github.com/adap/flower/pull/1231), " +"[#1276](https://github.com/adap/flower/pull/1276), " +"[#1301](https://github.com/adap/flower/pull/1301), " +"[#1310](https://github.com/adap/flower/pull/1310))" -#: ../../source/ref-changelog.md:830 +#: ../../source/ref-changelog.md:718 msgid "" -"This example has been replaced by a new example. The new example is based" -" on the experimental virtual client engine, which will become the new " -"default way of doing most types of large-scale simulations in Flower. 
The" -" existing example was kept for reference purposes, but it might be " -"removed in the future." -msgstr "" -"Cet exemple a été remplacé par un nouvel exemple. Le nouvel exemple est " -"basé sur le moteur expérimental du client virtuel, qui deviendra la " -"nouvelle méthode par défaut pour effectuer la plupart des types de " -"simulations à grande échelle dans Flower. L'exemple existant a été " -"conservé à des fins de référence, mais il pourrait être supprimé à " -"l'avenir." +"Rename ProtoBuf messages to improve consistency " +"([#1214](https://github.com/adap/flower/pull/1214), " +"[#1258](https://github.com/adap/flower/pull/1258), " +"[#1259](https://github.com/adap/flower/pull/1259))" +msgstr "" +"Renomme les messages ProtoBuf pour améliorer la cohérence " +"([#1214](https://github.com/adap/flower/pull/1214), " +"[#1258](https://github.com/adap/flower/pull/1258), " +"[#1259](https://github.com/adap/flower/pull/1259))" -#: ../../source/ref-changelog.md:832 -msgid "v0.16.0 (2021-05-11)" -msgstr "v0.16.0 (2021-05-11)" +#: ../../source/ref-changelog.md:720 +msgid "v0.19.0 (2022-05-18)" +msgstr "v0.19.0 (2022-05-18)" -#: ../../source/ref-changelog.md:836 +#: ../../source/ref-changelog.md:724 msgid "" -"**New built-in strategies** " -"([#549](https://github.com/adap/flower/pull/549))" +"**Flower Baselines (preview): FedOpt, FedBN, FedAvgM** " +"([#919](https://github.com/adap/flower/pull/919), " +"[#1127](https://github.com/adap/flower/pull/1127), " +"[#914](https://github.com/adap/flower/pull/914))" msgstr "" -"**Nouvelles stratégies intégrées** " -"([#549](https://github.com/adap/flower/pull/549))" - -#: ../../source/ref-changelog.md:838 -msgid "(abstract) FedOpt" -msgstr "(résumé) FedOpt" +"**Flower Baselines (preview) : FedOpt, FedBN, FedAvgM** " +"([#919](https://github.com/adap/flower/pull/919), " +"[#1127](https://github.com/adap/flower/pull/1127), " +"[#914](https://github.com/adap/flower/pull/914))" -#: ../../source/ref-changelog.md:841 +#: ../../source/ref-changelog.md:726 +#, fuzzy msgid "" -"**Custom metrics for server and strategies** " -"([#717](https://github.com/adap/flower/pull/717))" +"The first preview release of Flower Baselines has arrived! We're " +"kickstarting Flower Baselines with implementations of FedOpt (FedYogi, " +"FedAdam, FedAdagrad), FedBN, and FedAvgM. Check the documentation on how " +"to use [Flower Baselines](https://flower.ai/docs/using-baselines.html). " +"With this first preview release we're also inviting the community to " +"[contribute their own baselines](https://flower.ai/docs/baselines/how-to-" +"contribute-baselines.html)." msgstr "" -"**Métriques personnalisées pour le serveur et les stratégies** " -"([#717](https://github.com/adap/flower/pull/717))" +"La première version préliminaire de Flower Baselines est arrivée ! Nous " +"démarrons Flower Baselines avec des implémentations de FedOpt (FedYogi, " +"FedAdam, FedAdagrad), FedBN, et FedAvgM. Consultez la documentation sur " +"l'utilisation de [Flower Baselines](https://flower.ai/docs/using-" +"baselines.html). Avec cette première version préliminaire, nous invitons " +"également la communauté à [contribuer à leurs propres lignes de " +"base](https://flower.ai/docs/baselines/how-to-contribute-baselines.html)." -#: ../../source/ref-changelog.md:843 +#: ../../source/ref-changelog.md:728 msgid "" -"The Flower server is now fully task-agnostic, all remaining instances of " -"task-specific metrics (such as `accuracy`) have been replaced by custom " -"metrics dictionaries. 
Flower 0.15 introduced the capability to pass a " -"dictionary containing custom metrics from client to server. As of this " -"release, custom metrics replace task-specific metrics on the server." +"**C++ client SDK (preview) and code example** " +"([#1111](https://github.com/adap/flower/pull/1111))" msgstr "" -"Le serveur Flower est maintenant totalement agnostique, toutes les " -"instances restantes de métriques spécifiques à une tâche (telles que " -"`accuracy`) ont été remplacées par des dictionnaires de métriques " -"personnalisées. Flower 0.15 a introduit la possibilité de passer un " -"dictionnaire contenant des métriques personnalisées du client au serveur." -" À partir de cette version, les métriques personnalisées remplacent les " -"métriques spécifiques à une tâche sur le serveur." +"**SDK client C++ (aperçu) et exemple de code** " +"([#1111](https://github.com/adap/flower/pull/1111))" -#: ../../source/ref-changelog.md:845 -#, fuzzy +#: ../../source/ref-changelog.md:730 msgid "" -"Custom metric dictionaries are now used in two user-facing APIs: they are" -" returned from Strategy methods `aggregate_fit`/`aggregate_evaluate` and " -"they enable evaluation functions passed to built-in strategies (via " -"`eval_fn`) to return more than two evaluation metrics. Strategies can " -"even return *aggregated* metrics dictionaries for the server to keep " -"track of." +"Preview support for Flower clients written in C++. The C++ preview " +"includes a Flower client SDK and a quickstart code example that " +"demonstrates a simple C++ client using the SDK." msgstr "" -"Les dictionnaires de métriques personnalisés sont maintenant utilisés " -"dans deux API orientées vers l'utilisateur : ils sont renvoyés par les " -"méthodes de stratégie `aggregate_fit`/`aggregate_evaluate` et ils " -"permettent aux fonctions d'évaluation passées aux stratégies intégrées " -"(via `eval_fn`) de renvoyer plus de deux métriques d'évaluation. Les " -"stratégies peuvent même renvoyer des dictionnaires de métriques " -"*agrégées* pour que le serveur puisse en garder la trace." +"L'aperçu C++ comprend un SDK pour les clients Flower et un exemple de " +"code de démarrage rapide qui démontre un client C++ simple utilisant le " +"SDK." -#: ../../source/ref-changelog.md:847 -#, fuzzy +#: ../../source/ref-changelog.md:732 msgid "" -"Strategy implementations should migrate their `aggregate_fit` and " -"`aggregate_evaluate` methods to the new return type (e.g., by simply " -"returning an empty `{}`), server-side evaluation functions should migrate" -" from `return loss, accuracy` to `return loss, {\"accuracy\": accuracy}`." +"**Add experimental support for Python 3.10 and Python 3.11** " +"([#1135](https://github.com/adap/flower/pull/1135))" msgstr "" -"Les implémentations de Stratey doivent migrer leurs méthodes " -"`aggregate_fit` et `aggregate_evaluate` vers le nouveau type de retour " -"(par exemple, en renvoyant simplement un `{}` vide), les fonctions " -"d'évaluation côté serveur doivent migrer de `return loss, accuracy` à " -"`return loss, {\"accuracy\" : accuracy}`." +"**Ajouter la prise en charge expérimentale de Python 3.10 et Python " +"3.11** ([#1135](https://github.com/adap/flower/pull/1135))" -#: ../../source/ref-changelog.md:849 +#: ../../source/ref-changelog.md:734 msgid "" -"Flower 0.15-style return types are deprecated (but still supported), " -"compatibility will be removed in a future release." +"Python 3.10 is the latest stable release of Python and Python 3.11 is due" +" to be released in October. 
This Flower release adds experimental support" +" for both Python versions." msgstr "" -"Les types de retour du style Flower 0.15 sont dépréciés (mais toujours " -"pris en charge), la compatibilité sera supprimée dans une prochaine " -"version." +"Python 3.10 est la dernière version stable de Python et Python 3.11 " +"devrait sortir en octobre. Cette version de Flower ajoute une prise en " +"charge expérimentale pour les deux versions de Python." -#: ../../source/ref-changelog.md:851 +#: ../../source/ref-changelog.md:736 msgid "" -"**Migration warnings for deprecated functionality** " -"([#690](https://github.com/adap/flower/pull/690))" +"**Aggregate custom metrics through user-provided functions** " +"([#1144](https://github.com/adap/flower/pull/1144))" msgstr "" -"**Avertissements de migration pour les fonctionnalités obsolètes** " -"([#690](https://github.com/adap/flower/pull/690))" +"**Agréger des mesures personnalisées grâce à des fonctions fournies par " +"l'utilisateur** ([#1144](https://github.com/adap/flower/pull/1144))" -#: ../../source/ref-changelog.md:853 +#: ../../source/ref-changelog.md:738 msgid "" -"Earlier versions of Flower were often migrated to new APIs, while " -"maintaining compatibility with legacy APIs. This release introduces " -"detailed warning messages if usage of deprecated APIs is detected. The " -"new warning messages often provide details on how to migrate to more " -"recent APIs, thus easing the transition from one release to another." +"Custom metrics (e.g., `accuracy`) can now be aggregated without having to" +" customize the strategy. Built-in strategies support two new arguments, " +"`fit_metrics_aggregation_fn` and `evaluate_metrics_aggregation_fn`, that " +"allow passing custom metric aggregation functions." msgstr "" -"Les versions antérieures de Flower ont souvent été migrées vers de " -"nouvelles API, tout en maintenant la compatibilité avec les anciennes " -"API. Cette version introduit des messages d'avertissement détaillés si " -"l'utilisation d'API obsolètes est détectée. Les nouveaux messages " -"d'avertissement fournissent souvent des détails sur la façon de migrer " -"vers des API plus récentes, facilitant ainsi la transition d'une version " -"à l'autre." +"Les stratégies intégrées prennent en charge deux nouveaux arguments, " +"`fit_metrics_aggregation_fn` et `evaluate_metrics_aggregation_fn`, qui " +"permettent de passer des fonctions d'agrégation de métriques " +"personnalisées." 
-#: ../../source/ref-changelog.md:855 +#: ../../source/ref-changelog.md:740 msgid "" -"Improved docs and docstrings " -"([#691](https://github.com/adap/flower/pull/691) " -"[#692](https://github.com/adap/flower/pull/692) " -"[#713](https://github.com/adap/flower/pull/713))" +"**User-configurable round timeout** " +"([#1162](https://github.com/adap/flower/pull/1162))" msgstr "" -"Amélioration des docs et des docstrings " -"([#691](https://github.com/adap/flower/pull/691) " -"[#692](https://github.com/adap/flower/pull/692) " -"[#713](https://github.com/adap/flower/pull/713))" - -#: ../../source/ref-changelog.md:857 -msgid "MXNet example and documentation" -msgstr "Exemple et documentation MXNet" +"**Temps d'attente configurable par l'utilisateur** " +"([#1162](https://github.com/adap/flower/pull/1162))" -#: ../../source/ref-changelog.md:859 +#: ../../source/ref-changelog.md:742 msgid "" -"FedBN implementation in example PyTorch: From Centralized To Federated " -"([#696](https://github.com/adap/flower/pull/696) " -"[#702](https://github.com/adap/flower/pull/702) " -"[#705](https://github.com/adap/flower/pull/705))" +"A new configuration value allows the round timeout to be set for " +"`start_server` and `start_simulation`. If the `config` dictionary " +"contains a `round_timeout` key (with a `float` value in seconds), the " +"server will wait *at least* `round_timeout` seconds before it closes the " +"connection." msgstr "" -"Mise en œuvre de FedBN dans l'exemple PyTorch : De la centralisation à la" -" fédération ([#696](https://github.com/adap/flower/pull/696) " -"[#702](https://github.com/adap/flower/pull/702) " -"[#705](https://github.com/adap/flower/pull/705))" +"Si le dictionnaire `config` contient une clé `round_timeout` (avec une " +"valeur `float` en secondes), le serveur attendra *au moins* " +"`round_timeout` secondes avant de fermer la connexion." -#: ../../source/ref-changelog.md:863 +#: ../../source/ref-changelog.md:744 msgid "" -"**Serialization-agnostic server** " -"([#721](https://github.com/adap/flower/pull/721))" +"**Enable both federated evaluation and centralized evaluation to be used " +"at the same time in all built-in strategies** " +"([#1091](https://github.com/adap/flower/pull/1091))" msgstr "" -"**Serveur agnostique de sérialisation** " -"([#721](https://github.com/adap/flower/pull/721))" +"**Permettre l'utilisation simultanée de l'évaluation fédérée et de " +"l'évaluation centralisée dans toutes les stratégies intégrées** " +"([#1091](https://github.com/adap/flower/pull/1091))" -#: ../../source/ref-changelog.md:865 +#: ../../source/ref-changelog.md:746 msgid "" -"The Flower server is now fully serialization-agnostic. Prior usage of " -"class `Weights` (which represents parameters as deserialized NumPy " -"ndarrays) was replaced by class `Parameters` (e.g., in `Strategy`). " -"`Parameters` objects are fully serialization-agnostic and represents " -"parameters as byte arrays, the `tensor_type` attributes indicates how " -"these byte arrays should be interpreted (e.g., for " -"serialization/deserialization)." +"Built-in strategies can now perform both federated evaluation (i.e., " +"client-side) and centralized evaluation (i.e., server-side) in the same " +"round. Federated evaluation can be disabled by setting `fraction_eval` to" +" `0.0`." msgstr "" -"Le serveur Flower est désormais totalement agnostique en matière de " -"sérialisation. 
L'utilisation antérieure de la classe `Weights` (qui " -"représente les paramètres sous forme de tableaux NumPy désérialisés) a " -"été remplacée par la classe `Parameters` (par exemple, dans `Strategy`). " -"Les objets `Parameters` sont totalement agnostiques en matière de " -"sérialisation et représentent les paramètres sous forme de tableaux " -"d'octets, les attributs `tensor_type` indiquent comment ces tableaux " -"d'octets doivent être interprétés (par exemple, pour la " -"sérialisation/désérialisation)." +"Les stratégies intégrées peuvent maintenant effectuer une évaluation " +"fédérée (c'est-à-dire côté client) et une évaluation centralisée " +"(c'est-à-dire côté serveur) dans le même tour. L'évaluation fédérée peut " +"être désactivée en réglant `fraction_eval` sur `0.0`." -#: ../../source/ref-changelog.md:867 +#: ../../source/ref-changelog.md:748 msgid "" -"Built-in strategies implement this approach by handling serialization and" -" deserialization to/from `Weights` internally. Custom/3rd-party Strategy " -"implementations should update to the slightly changed Strategy method " -"definitions. Strategy authors can consult PR " -"[#721](https://github.com/adap/flower/pull/721) to see how strategies can" -" easily migrate to the new format." +"**Two new Jupyter Notebook tutorials** " +"([#1141](https://github.com/adap/flower/pull/1141))" msgstr "" -"Les stratégies intégrées mettent en œuvre cette approche en gérant en " -"interne la sérialisation et la désérialisation de `Weights`. Les " -"implémentations de stratégies personnalisées ou tierces doivent être " -"mises à jour avec les définitions de méthodes de stratégie légèrement " -"modifiées. Les auteurs de stratégies peuvent consulter le PR " -"[#721](https://github.com/adap/flower/pull/721) pour voir comment les " -"stratégies peuvent facilement migrer vers le nouveau format." +"**Deux nouveaux tutoriels Jupyter Notebook** " +"([#1141](https://github.com/adap/flower/pull/1141))" -#: ../../source/ref-changelog.md:869 +#: ../../source/ref-changelog.md:750 msgid "" -"Deprecated `flwr.server.Server.evaluate`, use " -"`flwr.server.Server.evaluate_round` instead " -"([#717](https://github.com/adap/flower/pull/717))" +"Two Jupyter Notebook tutorials (compatible with Google Colab) explain " +"basic and intermediate Flower features:" msgstr "" -"Déclassé `flwr.server.Server.evaluate`, utiliser " -"`flwr.server.Server.evaluate_round` à la place " -"([#717](https://github.com/adap/flower/pull/717))" - -#: ../../source/ref-changelog.md:871 -msgid "v0.15.0 (2021-03-12)" -msgstr "v0.15.0 (2021-03-12)" +"Deux tutoriels Jupyter Notebook (compatibles avec Google Colab) " +"expliquent les fonctionnalités de base et intermédiaires de Flower :" -#: ../../source/ref-changelog.md:875 +#: ../../source/ref-changelog.md:752 msgid "" -"**Server-side parameter initialization** " -"([#658](https://github.com/adap/flower/pull/658))" +"*An Introduction to Federated Learning*: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-1" +"-Intro-to-FL-PyTorch.ipynb)" msgstr "" -"**Initialisation des paramètres côté serveur** " -"([#658](https://github.com/adap/flower/pull/658))" +"*Introduction à l'apprentissage fédéré* : [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-1" +"-Intro-to-FL-PyTorch.ipynb)" -#: ../../source/ref-changelog.md:877 +#: ../../source/ref-changelog.md:754 msgid "" -"Model parameters can now be initialized on the server-side. 
Server-side " -"parameter initialization works via a new `Strategy` method called " -"`initialize_parameters`." +"*Using Strategies in Federated Learning*: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-2" +"-Strategies-in-FL-PyTorch.ipynb)" msgstr "" -"Les paramètres du modèle peuvent maintenant être initialisés côté " -"serveur. L'initialisation des paramètres côté serveur fonctionne via une " -"nouvelle méthode `Strategy` appelée `initialize_parameters`." +"*Utiliser des stratégies dans l'apprentissage fédéré* : [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-2" +"-Strategies-in-FL-PyTorch.ipynb)" -#: ../../source/ref-changelog.md:879 +#: ../../source/ref-changelog.md:756 msgid "" -"Built-in strategies support a new constructor argument called " -"`initial_parameters` to set the initial parameters. Built-in strategies " -"will provide these initial parameters to the server on startup and then " -"delete them to free the memory afterwards." +"**New FedAvgM strategy (Federated Averaging with Server Momentum)** " +"([#1076](https://github.com/adap/flower/pull/1076))" msgstr "" -"Les stratégies intégrées prennent en charge un nouvel argument du " -"constructeur appelé `initial_parameters` pour définir les paramètres " -"initiaux. Les stratégies intégrées fourniront ces paramètres initiaux au " -"serveur au démarrage et les supprimeront ensuite pour libérer la mémoire." +"**Nouvelle stratégie FedAvgM (Federated Averaging with Server Momentum)**" +" ([#1076](https://github.com/adap/flower/pull/1076))" -#: ../../source/ref-changelog.md:898 +#: ../../source/ref-changelog.md:758 +#, fuzzy msgid "" -"If no initial parameters are provided to the strategy, the server will " -"continue to use the current behaviour (namely, it will ask one of the " -"connected clients for its parameters and use these as the initial global " -"parameters)." +"The new `FedAvgM` strategy implements Federated Averaging with Server " +"Momentum \\[Hsu et al., 2019\\]." msgstr "" -"Si aucun paramètre initial n'est fourni à la stratégie, le serveur " -"continuera à utiliser le comportement actuel (à savoir qu'il demandera à " -"l'un des clients connectés ses paramètres et les utilisera comme " -"paramètres globaux initiaux)." - -#: ../../source/ref-changelog.md:900 -msgid "Deprecations" -msgstr "Dépréciations" +"La nouvelle stratégie `FedAvgM` met en œuvre la moyenne fédérée avec le " +"momentum du serveur [Hsu et al., 2019]." 
-#: ../../source/ref-changelog.md:902 +#: ../../source/ref-changelog.md:760 msgid "" -"Deprecate `flwr.server.strategy.DefaultStrategy` (migrate to " -"`flwr.server.strategy.FedAvg`, which is equivalent)" +"**New advanced PyTorch code example** " +"([#1007](https://github.com/adap/flower/pull/1007))" msgstr "" -"Déclasser `flwr.server.strategy.DefaultStrategy` (migrer vers " -"`flwr.server.strategy.FedAvg`, qui est équivalent)" - -#: ../../source/ref-changelog.md:904 -msgid "v0.14.0 (2021-02-18)" -msgstr "v0.14.0 (2021-02-18)" +"**Nouvel exemple de code PyTorch avancé** " +"([#1007](https://github.com/adap/flower/pull/1007))" -#: ../../source/ref-changelog.md:908 +#: ../../source/ref-changelog.md:762 msgid "" -"**Generalized** `Client.fit` **and** `Client.evaluate` **return values** " -"([#610](https://github.com/adap/flower/pull/610) " -"[#572](https://github.com/adap/flower/pull/572) " -"[#633](https://github.com/adap/flower/pull/633))" +"A new code example (`advanced_pytorch`) demonstrates advanced Flower " +"concepts with PyTorch." msgstr "" -"**Généralisé** `Client.fit` **et** `Client.evaluate` **valeurs de " -"retour** ([#610](https://github.com/adap/flower/pull/610) " -"[#572](https://github.com/adap/flower/pull/572) " -"[#633](https://github.com/adap/flower/pull/633))" +"Un nouvel exemple de code (`advanced_pytorch`) démontre des concepts de " +"fleur avancés avec PyTorch." -#: ../../source/ref-changelog.md:910 +#: ../../source/ref-changelog.md:764 msgid "" -"Clients can now return an additional dictionary mapping `str` keys to " -"values of the following types: `bool`, `bytes`, `float`, `int`, `str`. " -"This means one can return almost arbitrary values from `fit`/`evaluate` " -"and make use of them on the server side!" +"**New JAX code example** " +"([#906](https://github.com/adap/flower/pull/906), " +"[#1143](https://github.com/adap/flower/pull/1143))" msgstr "" -"Les clients peuvent maintenant renvoyer un dictionnaire supplémentaire " -"associant les clés `str` aux valeurs des types suivants : `bool`, " -"`bytes`, `float`, `int`, `str`. Cela signifie que l'on peut renvoyer des " -"valeurs presque arbitraires de `fit`/`evaluate` et les utiliser du côté " -"du serveur !" +"**Nouvel exemple de code JAX** " +"([#906](https://github.com/adap/flower/pull/906), " +"[#1143](https://github.com/adap/flower/pull/1143))" -#: ../../source/ref-changelog.md:912 +#: ../../source/ref-changelog.md:766 msgid "" -"This improvement also allowed for more consistent return types between " -"`fit` and `evaluate`: `evaluate` should now return a tuple `(float, int, " -"dict)` representing the loss, number of examples, and a dictionary " -"holding arbitrary problem-specific values like accuracy." +"A new code example (`jax_from_centralized_to_federated`) shows federated " +"learning with JAX and Flower." msgstr "" -"Cette amélioration a également permis de rendre plus cohérents les types " -"de retour entre `fit` et `evaluate` : `evaluate` devrait maintenant " -"retourner un tuple `(float, int, dict)` représentant la perte, le nombre " -"d'exemples, et un dictionnaire contenant des valeurs arbitraires " -"spécifiques au problème comme la précision." +"Un nouvel exemple de code (`jax_from_centralized_to_federated`) montre " +"l'apprentissage fédéré avec JAX et Flower." -#: ../../source/ref-changelog.md:914 +#: ../../source/ref-changelog.md:770 msgid "" -"In case you wondered: this feature is compatible with existing projects, " -"the additional dictionary return value is optional. 
New code should " -"however migrate to the new return types to be compatible with upcoming " -"Flower releases (`fit`: `List[np.ndarray], int, Dict[str, Scalar]`, " -"`evaluate`: `float, int, Dict[str, Scalar]`). See the example below for " -"details." +"New option to keep Ray running if Ray was already initialized in " +"`start_simulation` ([#1177](https://github.com/adap/flower/pull/1177))" msgstr "" -"Au cas où tu te poserais la question : cette fonctionnalité est " -"compatible avec les projets existants, la valeur de retour supplémentaire" -" du dictionnaire est facultative. Le nouveau code doit cependant migrer " -"vers les nouveaux types de retour pour être compatible avec les " -"prochaines versions de Flower (`fit` : `List[np.ndarray], int, Dict[str, " -"Scalar]`, `evaluate` : `float, int, Dict[str, Scalar]`). Voir l'exemple " -"ci-dessous pour plus de détails." +"Nouvelle option pour continuer à faire fonctionner Ray si Ray a déjà été " +"initialisé dans `start_simulation` " +"([#1177](https://github.com/adap/flower/pull/1177))" -#: ../../source/ref-changelog.md:916 +#: ../../source/ref-changelog.md:771 msgid "" -"*Code example:* note the additional dictionary return values in both " -"`FlwrClient.fit` and `FlwrClient.evaluate`:" +"Add support for custom `ClientManager` as a `start_simulation` parameter " +"([#1171](https://github.com/adap/flower/pull/1171))" msgstr "" -"*Exemple de code:* note les valeurs de retour du dictionnaire " -"supplémentaires dans `FlwrClient.fit` et `FlwrClient.evaluate` :" +"Ajout de la prise en charge d'un `ClientManager` personnalisé comme " +"paramètre de `start_simulation` " +"([#1171](https://github.com/adap/flower/pull/1171))" -#: ../../source/ref-changelog.md:931 +#: ../../source/ref-changelog.md:772 +#, fuzzy msgid "" -"**Generalized** `config` **argument in** `Client.fit` **and** " -"`Client.evaluate` ([#595](https://github.com/adap/flower/pull/595))" +"New documentation for [implementing " +"strategies](https://flower.ai/docs/framework/how-to-implement-" +"strategies.html) ([#1097](https://github.com/adap/flower/pull/1097), " +"[#1175](https://github.com/adap/flower/pull/1175))" msgstr "" -"**Généralisé** `config` **argument dans** `Client.fit` **et** " -"`Client.evaluate` ([#595](https://github.com/adap/flower/pull/595))" +"Nouvelle documentation pour [mettre en œuvre des " +"stratégies](https://flower.ai/docs/framework/how-to-implement-" +"strategies.html) ([#1097](https://github.com/adap/flower/pull/1097), " +"[#1175](https://github.com/adap/flower/pull/1175))" -#: ../../source/ref-changelog.md:933 +#: ../../source/ref-changelog.md:773 msgid "" -"The `config` argument used to be of type `Dict[str, str]`, which means " -"that dictionary values were expected to be strings. The new release " -"generalizes this to enable values of the following types: `bool`, " -"`bytes`, `float`, `int`, `str`." +"New mobile-friendly documentation theme " +"([#1174](https://github.com/adap/flower/pull/1174))" msgstr "" -"L'argument `config` était auparavant de type `Dict[str, str]`, ce qui " -"signifie que les valeurs du dictionnaire devaient être des chaînes. La " -"nouvelle version généralise cela pour permettre les valeurs des types " -"suivants : `bool`, `bytes`, `float`, `int`, `str`." 
+"Nouveau thème de documentation adapté aux mobiles " +"([#1174](https://github.com/adap/flower/pull/1174))" -#: ../../source/ref-changelog.md:935 +#: ../../source/ref-changelog.md:774 msgid "" -"This means one can now pass almost arbitrary values to `fit`/`evaluate` " -"using the `config` dictionary. Yay, no more `str(epochs)` on the server-" -"side and `int(config[\"epochs\"])` on the client side!" +"Limit version range for (optional) `ray` dependency to include only " +"compatible releases (`>=1.9.2,<1.12.0`) " +"([#1205](https://github.com/adap/flower/pull/1205))" msgstr "" -"Cela signifie que l'on peut maintenant passer des valeurs presque " -"arbitraires à `fit`/`evaluate` en utilisant le dictionnaire `config`. " -"Yay, plus de `str(epochs)` du côté serveur et `int(config[\"epochs\"])` " -"du côté client !" +"Limite la plage de versions pour la dépendance (optionnelle) `ray` pour " +"n'inclure que les versions compatibles (`>=1.9.2,<1.12.0`) " +"([#1205](https://github.com/adap/flower/pull/1205))" -#: ../../source/ref-changelog.md:937 +#: ../../source/ref-changelog.md:778 msgid "" -"*Code example:* note that the `config` dictionary now contains non-`str` " -"values in both `Client.fit` and `Client.evaluate`:" +"**Remove deprecated support for Python 3.6** " +"([#871](https://github.com/adap/flower/pull/871))" msgstr "" -"*Exemple de code:* Notez que le dictionnaire `config` contient maintenant" -" des valeurs autres que `str` dans `Client.fit` et `Client.evaluate` :" - -#: ../../source/ref-changelog.md:954 -msgid "v0.13.0 (2021-01-08)" -msgstr "v0.13.0 (2021-01-08)" +"**Supprime la prise en charge obsolète de Python 3.6** " +"([#871](https://github.com/adap/flower/pull/871))" -#: ../../source/ref-changelog.md:958 +#: ../../source/ref-changelog.md:779 msgid "" -"New example: PyTorch From Centralized To Federated " -"([#549](https://github.com/adap/flower/pull/549))" -msgstr "" -"Nouvel exemple : PyTorch de centralisé à fédéré " -"([#549](https://github.com/adap/flower/pull/549))" - -#: ../../source/ref-changelog.md:959 -msgid "Improved documentation" -msgstr "Amélioration de la documentation" - -#: ../../source/ref-changelog.md:960 -msgid "New documentation theme ([#551](https://github.com/adap/flower/pull/551))" +"**Remove deprecated KerasClient** " +"([#857](https://github.com/adap/flower/pull/857))" msgstr "" -"Nouveau thème de documentation " -"([#551](https://github.com/adap/flower/pull/551))" - -#: ../../source/ref-changelog.md:961 -msgid "New API reference ([#554](https://github.com/adap/flower/pull/554))" -msgstr "Nouvelle référence API ([#554](https://github.com/adap/flower/pull/554))" +"**Supprimez KerasClient** " +"([#857](https://github.com/adap/flower/pull/857))" -#: ../../source/ref-changelog.md:962 +#: ../../source/ref-changelog.md:780 msgid "" -"Updated examples documentation " -"([#549](https://github.com/adap/flower/pull/549))" +"**Remove deprecated no-op extra installs** " +"([#973](https://github.com/adap/flower/pull/973))" msgstr "" -"Mise à jour de la documentation des exemples " -"([#549](https://github.com/adap/flower/pull/549))" +"**Supprimer les installations supplémentaires no-op dépréciées** " +"([#973](https://github.com/adap/flower/pull/973))" -#: ../../source/ref-changelog.md:963 +#: ../../source/ref-changelog.md:781 msgid "" -"Removed obsolete documentation " -"([#548](https://github.com/adap/flower/pull/548))" +"**Remove deprecated proto fields from** `FitRes` **and** `EvaluateRes` " +"([#869](https://github.com/adap/flower/pull/869))" msgstr "" 
-"Suppression de la documentation obsolète " -"([#548](https://github.com/adap/flower/pull/548))" - -#: ../../source/ref-changelog.md:965 -msgid "Bugfix:" -msgstr "Correction de bogues :" +"**Supprimez les champs proto obsolètes de** `FitRes` **et** `EvaluateRes`" +" ([#869](https://github.com/adap/flower/pull/869))" -#: ../../source/ref-changelog.md:967 +#: ../../source/ref-changelog.md:782 msgid "" -"`Server.fit` does not disconnect clients when finished, disconnecting the" -" clients is now handled in `flwr.server.start_server` " -"([#553](https://github.com/adap/flower/pull/553) " -"[#540](https://github.com/adap/flower/issues/540))." +"**Remove deprecated QffedAvg strategy (replaced by QFedAvg)** " +"([#1107](https://github.com/adap/flower/pull/1107))" msgstr "" -"`Server.fit` ne déconnecte pas les clients lorsqu'il est terminé, la " -"déconnexion des clients est maintenant gérée dans " -"`flwr.server.start_server` " -"([#553](https://github.com/adap/flower/pull/553) " -"[#540](https://github.com/adap/flower/issues/540))." - -#: ../../source/ref-changelog.md:969 -msgid "v0.12.0 (2020-12-07)" -msgstr "v0.12.0 (2020-12-07)" - -#: ../../source/ref-changelog.md:971 ../../source/ref-changelog.md:987 -msgid "Important changes:" -msgstr "Changements importants :" +"**Supprime la stratégie QffedAvg (remplacée par QFedAvg)** " +"([#1107](https://github.com/adap/flower/pull/1107))" -#: ../../source/ref-changelog.md:973 +#: ../../source/ref-changelog.md:783 msgid "" -"Added an example for embedded devices " -"([#507](https://github.com/adap/flower/pull/507))" +"**Remove deprecated DefaultStrategy strategy** " +"([#1142](https://github.com/adap/flower/pull/1142))" msgstr "" -"Ajout d'un exemple pour les périphériques embarqués " -"([#507](https://github.com/adap/flower/pull/507))" +"**Supprime la stratégie DefaultStrategy qui est obsolète** " +"([#1142](https://github.com/adap/flower/pull/1142))" -#: ../../source/ref-changelog.md:974 +#: ../../source/ref-changelog.md:784 msgid "" -"Added a new NumPyClient (in addition to the existing KerasClient) " -"([#504](https://github.com/adap/flower/pull/504) " -"[#508](https://github.com/adap/flower/pull/508))" +"**Remove deprecated support for eval_fn accuracy return value** " +"([#1142](https://github.com/adap/flower/pull/1142))" msgstr "" -"Ajout d'un nouveau NumPyClient (en plus du KerasClient existant) " -"([#504](https://github.com/adap/flower/pull/504) " -"[#508](https://github.com/adap/flower/pull/508))" +"**Supprimer la prise en charge obsolète de la valeur de retour de la " +"précision eval_fn** ([#1142](https://github.com/adap/flower/pull/1142))" -#: ../../source/ref-changelog.md:975 +#: ../../source/ref-changelog.md:785 msgid "" -"Deprecated `flwr_example` package and started to migrate examples into " -"the top-level `examples` directory " -"([#494](https://github.com/adap/flower/pull/494) " -"[#512](https://github.com/adap/flower/pull/512))" +"**Remove deprecated support for passing initial parameters as NumPy " +"ndarrays** ([#1142](https://github.com/adap/flower/pull/1142))" msgstr "" -"Déclassement du paquet `flwr_example` et migration des exemples dans le " -"répertoire de premier niveau `examples` " -"([#494](https://github.com/adap/flower/pull/494) " -"[#512](https://github.com/adap/flower/pull/512))" - -#: ../../source/ref-changelog.md:977 -msgid "v0.11.0 (2020-11-30)" -msgstr "v0.11.0 (2020-11-30)" +"**Supprime la prise en charge obsolète du passage des paramètres initiaux" +" en tant que ndarrays NumPy** " 
+"([#1142](https://github.com/adap/flower/pull/1142))" -#: ../../source/ref-changelog.md:979 -msgid "Incompatible changes:" -msgstr "Changements incompatibles :" +#: ../../source/ref-changelog.md:787 +msgid "v0.18.0 (2022-02-28)" +msgstr "v0.18.0 (2022-02-28)" -#: ../../source/ref-changelog.md:981 +#: ../../source/ref-changelog.md:791 msgid "" -"Renamed strategy methods " -"([#486](https://github.com/adap/flower/pull/486)) to unify the naming of " -"Flower's public APIs. Other public methods/functions (e.g., every method " -"in `Client`, but also `Strategy.evaluate`) do not use the `on_` prefix, " -"which is why we're removing it from the four methods in Strategy. To " -"migrate rename the following `Strategy` methods accordingly:" +"**Improved Virtual Client Engine compatibility with Jupyter Notebook / " +"Google Colab** ([#866](https://github.com/adap/flower/pull/866), " +"[#872](https://github.com/adap/flower/pull/872), " +"[#833](https://github.com/adap/flower/pull/833), " +"[#1036](https://github.com/adap/flower/pull/1036))" msgstr "" -"Renommé les méthodes de stratégie " -"([#486](https://github.com/adap/flower/pull/486)) pour unifier le nommage" -" des API publiques de Flower. D'autres méthodes/fonctions publiques (par " -"exemple, toutes les méthodes de `Client`, mais aussi `Strategy.evaluate`)" -" n'utilisent pas le préfixe `on_`, c'est pourquoi nous le supprimons des " -"quatre méthodes de Stratégie. Pour migrer, renommez les méthodes de " -"`Strategy` suivantes en conséquence :" - -#: ../../source/ref-changelog.md:982 -msgid "`on_configure_evaluate` => `configure_evaluate`" -msgstr "`on_configure_evaluate` => `configure_evaluate`" - -#: ../../source/ref-changelog.md:983 -msgid "`on_aggregate_evaluate` => `aggregate_evaluate`" -msgstr "`on_aggregate_evaluate` => `aggregate_evaluate`" - -#: ../../source/ref-changelog.md:984 -msgid "`on_configure_fit` => `configure_fit`" -msgstr "`on_configure_fit` => `configure_fit`" - -#: ../../source/ref-changelog.md:985 -msgid "`on_aggregate_fit` => `aggregate_fit`" -msgstr "`on_aggregate_fit` => `aggregate_fit`" +"**Amélioration de la compatibilité du moteur de client virtuel avec " +"Jupyter Notebook / Google Colab** " +"([#866](https://github.com/adap/flower/pull/866), " +"[#872](https://github.com/adap/flower/pull/872), " +"[#833](https://github.com/adap/flower/pull/833), " +"[#1036](https://github.com/adap/flower/pull/1036))" -#: ../../source/ref-changelog.md:989 +#: ../../source/ref-changelog.md:793 msgid "" -"Deprecated `DefaultStrategy` " -"([#479](https://github.com/adap/flower/pull/479)). To migrate use " -"`FedAvg` instead." +"Simulations (using the Virtual Client Engine through `start_simulation`) " +"now work more smoothly on Jupyter Notebooks (incl. Google Colab) after " +"installing Flower with the `simulation` extra (`pip install " +"flwr[simulation]`)." msgstr "" -"Déclassé `DefaultStrategy` " -"([#479](https://github.com/adap/flower/pull/479)). Pour migrer, utilisez " -"`FedAvg` à la place." +"Les simulations (utilisant le moteur de client virtuel via " +"`start_simulation`) fonctionnent maintenant plus facilement sur les " +"Notebooks Jupyter (y compris Google Colab) après avoir installé Flower " +"avec l'option `simulation` (`pip install flwr[simulation]`)." -#: ../../source/ref-changelog.md:990 +#: ../../source/ref-changelog.md:795 msgid "" -"Simplified examples and baselines " -"([#484](https://github.com/adap/flower/pull/484))." 
+"**New Jupyter Notebook code example** " +"([#833](https://github.com/adap/flower/pull/833))" msgstr "" -"Exemples simplifiés et lignes de base " -"([#484](https://github.com/adap/flower/pull/484))." +"**Nouvel exemple de code Jupyter Notebook** " +"([#833](https://github.com/adap/flower/pull/833))" -#: ../../source/ref-changelog.md:991 +#: ../../source/ref-changelog.md:797 msgid "" -"Removed presently unused `on_conclude_round` from strategy interface " -"([#483](https://github.com/adap/flower/pull/483))." +"A new code example (`quickstart_simulation`) demonstrates Flower " +"simulations using the Virtual Client Engine through Jupyter Notebook " +"(incl. Google Colab)." msgstr "" -"Suppression de `on_conclude_round` actuellement inutilisé de l'interface " -"de stratégie ([#483](https://github.com/adap/flower/pull/483))." +"Un nouvel exemple de code (`quickstart_simulation`) démontre des " +"simulations de Flower en utilisant le moteur de client virtuel via " +"Jupyter Notebook (y compris Google Colab)." -#: ../../source/ref-changelog.md:992 +#: ../../source/ref-changelog.md:799 msgid "" -"Set minimal Python version to 3.6.1 instead of 3.6.9 " -"([#471](https://github.com/adap/flower/pull/471))." +"**Client properties (feature preview)** " +"([#795](https://github.com/adap/flower/pull/795))" msgstr "" -"Fixe la version minimale de Python à 3.6.1 au lieu de 3.6.9 " -"([#471](https://github.com/adap/flower/pull/471))." +"**Propriétés du client (aperçu des fonctionnalités)** " +"([#795](https://github.com/adap/flower/pull/795))" -#: ../../source/ref-changelog.md:993 +#: ../../source/ref-changelog.md:801 msgid "" -"Improved `Strategy` docstrings " -"([#470](https://github.com/adap/flower/pull/470))." +"Clients can implement a new method `get_properties` to enable server-side" +" strategies to query client properties." msgstr "" -"Amélioration des docstrings `Stratégie` " -"([#470](https://github.com/adap/flower/pull/470))." - -#: ../../source/ref-example-projects.rst:2 -#, fuzzy -msgid "Example projects" -msgstr "Exemples de PyTorch" +"Les clients peuvent implémenter une nouvelle méthode `get_properties` " +"pour permettre aux stratégies côté serveur d'interroger les propriétés du" +" client." -#: ../../source/ref-example-projects.rst:4 +#: ../../source/ref-changelog.md:803 msgid "" -"Flower comes with a number of usage examples. The examples demonstrate " -"how Flower can be used to federate different kinds of existing machine " -"learning pipelines, usually leveraging popular machine learning " -"frameworks such as `PyTorch `_ or `TensorFlow " -"`_." +"**Experimental Android support with TFLite** " +"([#865](https://github.com/adap/flower/pull/865))" msgstr "" -"Flower est livré avec un certain nombre d'exemples d'utilisation, qui " -"montrent comment Flower peut être utilisé pour fédérer différents types " -"de pipelines d'apprentissage automatique existants, qui s'appuient " -"généralement sur des frameworks d'apprentissage automatique populaires " -"tels que `PyTorch `_ ou `TensorFlow " -"`_." +"**Support expérimental d'Android avec TFLite** " +"([#865](https://github.com/adap/flower/pull/865))" -#: ../../source/ref-example-projects.rst:10 -#, fuzzy +#: ../../source/ref-changelog.md:805 msgid "" -"The following examples are available as standalone projects. Quickstart " -"TensorFlow/Keras ---------------------------" -msgstr "Les exemples suivants sont disponibles sous forme de projets autonomes." +"Android support has finally arrived in `main`! 
Flower is both client-" +"agnostic and framework-agnostic by design. One can integrate arbitrary " +"client platforms and with this release, using Flower on Android has " +"become a lot easier." +msgstr "" +"La prise en charge d'Android est enfin arrivée dans `main` ! Flower est à" +" la fois agnostique au niveau du client et du cadre de travail. On peut " +"intégrer des plates-formes client arbitraires et avec cette version, " +"l'utilisation de Flower sur Android est devenue beaucoup plus facile." -#: ../../source/ref-example-projects.rst:14 +#: ../../source/ref-changelog.md:807 msgid "" -"The TensorFlow/Keras quickstart example shows CIFAR-10 image " -"classification with MobileNetV2:" +"The example uses TFLite on the client side, along with a new " +"`FedAvgAndroid` strategy. The Android client and `FedAvgAndroid` are " +"still experimental, but they are a first step towards a fully-fledged " +"Android SDK and a unified `FedAvg` implementation that integrated the new" +" functionality from `FedAvgAndroid`." msgstr "" -"L'exemple de démarrage rapide TensorFlow/Keras montre la classification " -"d'images CIFAR-10 avec MobileNetV2 :" +"L'exemple utilise TFLite du côté client, ainsi qu'une nouvelle stratégie " +"`FedAvgAndroid`. Le client Android et `FedAvgAndroid` sont encore " +"expérimentaux, mais ils constituent un premier pas vers un SDK Android à " +"part entière et une implémentation unifiée de `FedAvg` intégrant la " +"nouvelle fonctionnalité de `FedAvgAndroid`." -#: ../../source/ref-example-projects.rst:17 -#, fuzzy +#: ../../source/ref-changelog.md:809 msgid "" -"`Quickstart TensorFlow (Code) " -"`_" +"**Make gRPC keepalive time user-configurable and decrease default " +"keepalive time** ([#1069](https://github.com/adap/flower/pull/1069))" msgstr "" -"`Quickstart TensorFlow (Code) " -"`_" +"**Rendre le temps de garde gRPC configurable par l'utilisateur et " +"diminuer le temps de garde par défaut** " +"([#1069](https://github.com/adap/flower/pull/1069))" -#: ../../source/ref-example-projects.rst:18 -#, fuzzy -msgid ":doc:`Quickstart TensorFlow (Tutorial) `" +#: ../../source/ref-changelog.md:811 +msgid "" +"The default gRPC keepalive time has been reduced to increase the " +"compatibility of Flower with more cloud environments (for example, " +"Microsoft Azure). Users can configure the keepalive time to customize the" +" gRPC stack based on specific requirements." msgstr "" -"`Quickstart TensorFlow (Tutorial) `_" +"Le temps de keepalive gRPC par défaut a été réduit pour augmenter la " +"compatibilité de Flower avec davantage d'environnements cloud (par " +"exemple, Microsoft Azure). Les utilisateurs peuvent configurer le temps " +"de keepalive pour personnaliser la pile gRPC en fonction d'exigences " +"spécifiques." 
-#: ../../source/ref-example-projects.rst:19
+#: ../../source/ref-changelog.md:813
 msgid ""
-"`Quickstart TensorFlow (Blog Post) `_"
+"**New differential privacy example using Opacus and PyTorch** "
+"([#805](https://github.com/adap/flower/pull/805))"
 msgstr ""
-"`Quickstart TensorFlow (Blog Post) `_"
-
-#: ../../source/ref-example-projects.rst:23
-#: ../../source/tutorial-quickstart-pytorch.rst:5
-msgid "Quickstart PyTorch"
-msgstr "Démarrage rapide de PyTorch"
+"**Nouvel exemple de confidentialité différentielle utilisant Opacus et "
+"PyTorch** ([#805](https://github.com/adap/flower/pull/805))"
 
-#: ../../source/ref-example-projects.rst:25
+#: ../../source/ref-changelog.md:815
 msgid ""
-"The PyTorch quickstart example shows CIFAR-10 image classification with a"
-" simple Convolutional Neural Network:"
+"A new code example (`opacus`) demonstrates differentially-private "
+"federated learning with Opacus, PyTorch, and Flower."
 msgstr ""
-"L'exemple de démarrage rapide PyTorch montre la classification d'images "
-"CIFAR-10 avec un simple réseau neuronal convolutif :"
+"Un nouvel exemple de code (`opacus`) démontre l'apprentissage fédéré "
+"différentiellement privé avec Opacus, PyTorch et Flower."
 
-#: ../../source/ref-example-projects.rst:28
-#, fuzzy
+#: ../../source/ref-changelog.md:817
 msgid ""
-"`Quickstart PyTorch (Code) "
-"`_"
+"**New Hugging Face Transformers code example** "
+"([#863](https://github.com/adap/flower/pull/863))"
 msgstr ""
-"`Quickstart PyTorch (Code) "
-"`_"
+"**Nouvel exemple de code Hugging Face Transformers** "
+"([#863](https://github.com/adap/flower/pull/863))"
 
-#: ../../source/ref-example-projects.rst:29
-#, fuzzy
-msgid ":doc:`Quickstart PyTorch (Tutorial) `"
+#: ../../source/ref-changelog.md:819
+msgid ""
+"A new code example (`quickstart_huggingface`) demonstrates usage of "
+"Hugging Face Transformers with Flower."
 msgstr ""
-"`Quickstart PyTorch (Tutorial) `_"
-
-#: ../../source/ref-example-projects.rst:33
-msgid "PyTorch: From Centralized To Federated"
-msgstr "PyTorch : De la centralisation à la fédération"
+"Un nouvel exemple de code (`quickstart_huggingface`) démontre "
+"l'utilisation de Hugging Face Transformers avec Flower."
 
-#: ../../source/ref-example-projects.rst:35
+#: ../../source/ref-changelog.md:821
 msgid ""
-"This example shows how a regular PyTorch project can be federated using "
-"Flower:"
+"**New MLCube code example** "
+"([#779](https://github.com/adap/flower/pull/779), "
+"[#1034](https://github.com/adap/flower/pull/1034), "
+"[#1065](https://github.com/adap/flower/pull/1065), "
+"[#1090](https://github.com/adap/flower/pull/1090))"
 msgstr ""
-"Cet exemple montre comment un projet PyTorch ordinaire peut être fédéré à"
-" l'aide de Flower :"
+"**Nouvel exemple de code MLCube** "
+"([#779](https://github.com/adap/flower/pull/779), "
+"[#1034](https://github.com/adap/flower/pull/1034), "
+"[#1065](https://github.com/adap/flower/pull/1065), "
+"[#1090](https://github.com/adap/flower/pull/1090))"
 
-#: ../../source/ref-example-projects.rst:37
-#, fuzzy
+#: ../../source/ref-changelog.md:823
 msgid ""
-"`PyTorch: From Centralized To Federated (Code) "
-"`_"
+"A new code example (`quickstart_mlcube`) demonstrates usage of MLCube "
+"with Flower."
 msgstr ""
-"`PyTorch : De la centralisation à la fédération (Code) "
-"`_"
+"Un nouvel exemple de code (`quickstart_mlcube`) démontre l'utilisation de"
+" MLCube avec Flower."
-#: ../../source/ref-example-projects.rst:38
-#, fuzzy
+#: ../../source/ref-changelog.md:825
 msgid ""
-":doc:`PyTorch: From Centralized To Federated (Tutorial) `"
+"**SSL-enabled server and client** "
+"([#842](https://github.com/adap/flower/pull/842), "
+"[#844](https://github.com/adap/flower/pull/844), "
+"[#845](https://github.com/adap/flower/pull/845), "
+"[#847](https://github.com/adap/flower/pull/847), "
+"[#993](https://github.com/adap/flower/pull/993), "
+"[#994](https://github.com/adap/flower/pull/994))"
 msgstr ""
-"`PyTorch : De la centralisation à la fédération (Tutoriel) "
-"`_"
-
-#: ../../source/ref-example-projects.rst:42
-msgid "Federated Learning on Raspberry Pi and Nvidia Jetson"
-msgstr "Apprentissage fédéré sur Raspberry Pi et Nvidia Jetson"
+"**Serveur et client compatibles SSL** "
+"([#842](https://github.com/adap/flower/pull/842), "
+"[#844](https://github.com/adap/flower/pull/844), "
+"[#845](https://github.com/adap/flower/pull/845), "
+"[#847](https://github.com/adap/flower/pull/847), "
+"[#993](https://github.com/adap/flower/pull/993), "
+"[#994](https://github.com/adap/flower/pull/994))"
 
-#: ../../source/ref-example-projects.rst:44
+#: ../../source/ref-changelog.md:827
 msgid ""
-"This example shows how Flower can be used to build a federated learning "
-"system that run across Raspberry Pi and Nvidia Jetson:"
+"SSL enables secure encrypted connections between clients and servers. "
+"This release open-sources the Flower secure gRPC implementation to make "
+"encrypted communication channels accessible to all Flower users."
 msgstr ""
-"Cet exemple montre comment Flower peut être utilisé pour construire un "
-"système d'apprentissage fédéré qui fonctionne sur Raspberry Pi et Nvidia "
-"Jetson :"
+"SSL permet d'établir des connexions cryptées et sécurisées entre les "
+"clients et les serveurs. Cette version met en open-source "
+"l'implémentation gRPC sécurisée de Flower afin de rendre les canaux de "
+"communication cryptés accessibles à tous les utilisateurs de Flower."
 
-#: ../../source/ref-example-projects.rst:46
-#, fuzzy
+#: ../../source/ref-changelog.md:829
 msgid ""
-"`Federated Learning on Raspberry Pi and Nvidia Jetson (Code) "
-"`_"
+"**Updated** `FedAdam` **and** `FedYogi` **strategies** "
+"([#885](https://github.com/adap/flower/pull/885), "
+"[#895](https://github.com/adap/flower/pull/895))"
 msgstr ""
-"`L'apprentissage fédéré sur Raspberry Pi et Nvidia Jetson (Code) "
-"`_"
+"**Mise à jour** `FedAdam` **et** `FedYogi` **stratégies** "
+"([#885](https://github.com/adap/flower/pull/885), "
+"[#895](https://github.com/adap/flower/pull/895))"
 
-#: ../../source/ref-example-projects.rst:47
+#: ../../source/ref-changelog.md:831
 msgid ""
-"`Federated Learning on Raspberry Pi and Nvidia Jetson (Blog Post) "
-"`_"
+"`FedAdam` and `FedAdam` match the latest version of the Adaptive "
+"Federated Optimization paper."
 msgstr ""
-"`L'apprentissage fédéré sur Raspberry Pi et Nvidia Jetson (Blog Post) "
-"`_"
+"`FedAdam` et `FedYogi` correspondent à la dernière version de l'article "
+"sur l'optimisation fédérée adaptative."
 
-#: ../../source/ref-faq.rst:4
+#: ../../source/ref-changelog.md:833
 msgid ""
-"This page collects answers to commonly asked questions about Federated "
-"Learning with Flower."
+"**Initialize** `start_simulation` **with a list of client IDs** "
+"([#860](https://github.com/adap/flower/pull/860))"
 msgstr ""
-"Cette page rassemble les réponses aux questions les plus fréquemment "
-"posées sur l'apprentissage fédéré avec Flower."
+"**Initialise** `start_simulation` **avec une liste d'ID de clients** " +"([#860](https://github.com/adap/flower/pull/860))" -#: ../../source/ref-faq.rst -#, fuzzy -msgid ":fa:`eye,mr-1` Can Flower run on Jupyter Notebooks / Google Colab?" +#: ../../source/ref-changelog.md:835 +msgid "" +"`start_simulation` can now be called with a list of client IDs " +"(`clients_ids`, type: `List[str]`). Those IDs will be passed to the " +"`client_fn` whenever a client needs to be initialized, which can make it " +"easier to load data partitions that are not accessible through `int` " +"identifiers." msgstr "" -":fa:`eye,mr-1` Flower peut-il fonctionner sur les ordinateurs portables " -"Juptyter / Google Colab ?" +"`start_simulation` peut maintenant être appelé avec une liste " +"d'identifiants de clients (`clients_ids`, type : `List[str]`). Ces " +"identifiants seront passés à `client_fn` chaque fois qu'un client doit " +"être initialisé, ce qui peut faciliter le chargement de partitions de " +"données qui ne sont pas accessibles par des identifiants `int`." -#: ../../source/ref-faq.rst:8 +#: ../../source/ref-changelog.md:839 msgid "" -"Yes, it can! Flower even comes with a few under-the-hood optimizations to" -" make it work even better on Colab. Here's a quickstart example:" +"Update `num_examples` calculation in PyTorch code examples in " +"([#909](https://github.com/adap/flower/pull/909))" msgstr "" -"Oui, c'est possible ! Flower est même livré avec quelques optimisations " -"pour qu'il fonctionne encore mieux sur Colab. Voici un exemple de " -"démarrage rapide :" +"Mettre à jour le calcul de `num_examples` dans les exemples de code " +"PyTorch dans ([#909](https://github.com/adap/flower/pull/909))" -#: ../../source/ref-faq.rst:10 -#, fuzzy +#: ../../source/ref-changelog.md:840 msgid "" -"`Flower simulation PyTorch " -"`_" +"Expose Flower version through `flwr.__version__` " +"([#952](https://github.com/adap/flower/pull/952))" msgstr "" -"`Flower Quickstart (TensorFlow/Keras) " -"`_" +"Exposer la version de Flower à travers `flwr.__version__` " +"([#952](https://github.com/adap/flower/pull/952))" -#: ../../source/ref-faq.rst:11 -#, fuzzy +#: ../../source/ref-changelog.md:841 msgid "" -"`Flower simulation TensorFlow/Keras " -"`_" +"`start_server` in `app.py` now returns a `History` object containing " +"metrics from training ([#974](https://github.com/adap/flower/pull/974))" msgstr "" -"`Flower Quickstart (TensorFlow/Keras) " -"`_" +"`start_server` dans `app.py` renvoie maintenant un objet `History` " +"contenant les métriques de l'entraînement " +"([#974](https://github.com/adap/flower/pull/974))" -#: ../../source/ref-faq.rst -msgid ":fa:`eye,mr-1` How can I run Federated Learning on a Raspberry Pi?" +#: ../../source/ref-changelog.md:842 +msgid "" +"Make `max_workers` (used by `ThreadPoolExecutor`) configurable " +"([#978](https://github.com/adap/flower/pull/978))" msgstr "" -":fa:`eye,mr-1` Comment puis-je faire fonctionner l'apprentissage fédéré " -"sur un Raspberry Pi ?" +"Rendre `max_workers` (utilisé par `ThreadPoolExecutor`) configurable " +"([#978](https://github.com/adap/flower/pull/978))" -#: ../../source/ref-faq.rst:15 -#, fuzzy +#: ../../source/ref-changelog.md:843 msgid "" -"Find the `blog post about federated learning on embedded device here " -"`_" -" and the corresponding `GitHub code example " -"`_." 
+"Increase sleep time after server start to three seconds in all code " +"examples ([#1086](https://github.com/adap/flower/pull/1086))" msgstr "" -"Trouve le `blog post about federated learning on embedded device ici " -"`_" -" et l'exemple de code GitHub correspondant " -"`_." +"Augmente le temps de sommeil après le démarrage du serveur à trois " +"secondes dans tous les exemples de code " +"([#1086](https://github.com/adap/flower/pull/1086))" -#: ../../source/ref-faq.rst -msgid ":fa:`eye,mr-1` Does Flower support federated learning on Android devices?" +#: ../../source/ref-changelog.md:844 +msgid "" +"Added a new FAQ section to the documentation " +"([#948](https://github.com/adap/flower/pull/948))" msgstr "" -":fa:`eye,mr-1` Est-ce que Flower prend en charge l'apprentissage fédéré " -"sur les appareils Android ?" +"Ajout d'une nouvelle section FAQ à la documentation " +"([#948](https://github.com/adap/flower/pull/948))" -#: ../../source/ref-faq.rst:19 -#, fuzzy +#: ../../source/ref-changelog.md:845 msgid "" -"Yes, it does. Please take a look at our `blog post " -"`_ or check out the code examples:" +"And many more under-the-hood changes, library updates, documentation " +"changes, and tooling improvements!" msgstr "" -"Oui. Jetez un coup d'œil à notre `blog post " -"`_ ou consultez l'`exemple de code Android sur GitHub " -"`_." +"Et bien d'autres changements sous le capot, des mises à jour de la " +"bibliothèque, des modifications de la documentation et des améliorations " +"de l'outillage !" -#: ../../source/ref-faq.rst:21 +#: ../../source/ref-changelog.md:849 msgid "" -"`Android Kotlin example `_" +"**Removed** `flwr_example` **and** `flwr_experimental` **from release " +"build** ([#869](https://github.com/adap/flower/pull/869))" msgstr "" +"**Supprimé** `flwr_example` **et** `flwr_experimental` **de la version " +"release build** ([#869](https://github.com/adap/flower/pull/869))" -#: ../../source/ref-faq.rst:22 -msgid "`Android Java example `_" +#: ../../source/ref-changelog.md:851 +msgid "" +"The packages `flwr_example` and `flwr_experimental` have been deprecated " +"since Flower 0.12.0 and they are not longer included in Flower release " +"builds. The associated extras (`baseline`, `examples-pytorch`, `examples-" +"tensorflow`, `http-logger`, `ops`) are now no-op and will be removed in " +"an upcoming release." msgstr "" +"Les paquets `flwr_example` et `flwr_experimental` ont été dépréciés " +"depuis Flower 0.12.0 et ils ne sont plus inclus dans les builds de " +"Flower. Les extras associés (`baseline`, `examples-pytorch`, `examples-" +"tensorflow`, `http-logger`, `ops`) sont maintenant no-op et seront " +"supprimés dans une prochaine version." -#: ../../source/ref-faq.rst -msgid ":fa:`eye,mr-1` Can I combine federated learning with blockchain?" -msgstr "" -":fa:`eye,mr-1` Puis-je combiner l'apprentissage fédéré avec la blockchain" -" ?" +#: ../../source/ref-changelog.md:853 +msgid "v0.17.0 (2021-09-24)" +msgstr "v0.17.0 (2021-09-24)" -#: ../../source/ref-faq.rst:26 +#: ../../source/ref-changelog.md:857 msgid "" -"Yes, of course. 
A list of available examples using Flower within a " -"blockchain environment is available here:" +"**Experimental virtual client engine** " +"([#781](https://github.com/adap/flower/pull/781) " +"[#790](https://github.com/adap/flower/pull/790) " +"[#791](https://github.com/adap/flower/pull/791))" msgstr "" -"Oui, bien sûr, une liste d'exemples disponibles utilisant Flower dans un " -"environnement blockchain est disponible ici :" +"**Moteur expérimental de client virtuel** " +"([#781](https://github.com/adap/flower/pull/781) " +"[#790](https://github.com/adap/flower/pull/790) " +"[#791](https://github.com/adap/flower/pull/791))" -#: ../../source/ref-faq.rst:28 +#: ../../source/ref-changelog.md:859 msgid "" -"`Flower meets Nevermined GitHub Repository `_." +"One of Flower's goals is to enable research at scale. This release " +"enables a first (experimental) peek at a major new feature, codenamed the" +" virtual client engine. Virtual clients enable simulations that scale to " +"a (very) large number of clients on a single machine or compute cluster. " +"The easiest way to test the new functionality is to look at the two new " +"code examples called `quickstart_simulation` and `simulation_pytorch`." msgstr "" -"`Flower meets Nevermined GitHub Repository `_." +"L'un des objectifs de Flower est de permettre la recherche à grande " +"échelle. Cette version donne un premier aperçu (expérimental) d'une " +"nouvelle fonctionnalité majeure, connue sous le nom de code de moteur de " +"client virtuel. Les clients virtuels permettent des simulations qui " +"s'étendent à un (très) grand nombre de clients sur une seule machine ou " +"une grappe de calcul. La façon la plus simple de tester la nouvelle " +"fonctionnalité est de regarder les deux nouveaux exemples de code appelés" +" `quickstart_simulation` et `simulation_pytorch`." -#: ../../source/ref-faq.rst:29 +#: ../../source/ref-changelog.md:861 msgid "" -"`Flower meets Nevermined YouTube video " -"`_." +"The feature is still experimental, so there's no stability guarantee for " +"the API. It's also not quite ready for prime time and comes with a few " +"known caveats. However, those who are curious are encouraged to try it " +"out and share their thoughts." msgstr "" -"`Flower rencontre Nevermined vidéo YouTube " -"`_." +"La fonction est encore expérimentale, il n'y a donc aucune garantie de " +"stabilité pour l'API. Elle n'est pas non plus tout à fait prête pour le " +"prime time et s'accompagne de quelques mises en garde connues. Cependant," +" les personnes curieuses sont encouragées à l'essayer et à faire part de " +"leurs réflexions." -#: ../../source/ref-faq.rst:30 -#, fuzzy +#: ../../source/ref-changelog.md:863 msgid "" -"`Flower meets KOSMoS `_." +"**New built-in strategies** " +"([#828](https://github.com/adap/flower/pull/828) " +"[#822](https://github.com/adap/flower/pull/822))" msgstr "" -"`Flower rencontre KOSMoS `_." +"**Nouvelles stratégies intégrées** " +"([#828](https://github.com/adap/flower/pull/828) " +"[#822](https://github.com/adap/flower/pull/822))" -#: ../../source/ref-faq.rst:31 +#: ../../source/ref-changelog.md:865 msgid "" -"`Flower meets Talan blog post `_ ." +"FedYogi - Federated learning strategy using Yogi on server-side. " +"Implementation based on https://arxiv.org/abs/2003.00295" msgstr "" -"`Flower meets Talan blog post `_ ." +"FedYogi - Stratégie d'apprentissage fédéré utilisant Yogi côté serveur. 
" +"Mise en oeuvre basée sur https://arxiv.org/abs/2003.00295" -#: ../../source/ref-faq.rst:32 +#: ../../source/ref-changelog.md:866 msgid "" -"`Flower meets Talan GitHub Repository " -"`_ ." +"FedAdam - Federated learning strategy using Adam on server-side. " +"Implementation based on https://arxiv.org/abs/2003.00295" msgstr "" -"`Flower rencontre Talan Dépôt GitHub " -"`_ ." - -#: ../../source/ref-telemetry.md:1 -msgid "Telemetry" -msgstr "Télémétrie" +"FedAdam - Stratégie d'apprentissage fédéré utilisant Adam côté serveur. " +"Mise en œuvre basée sur https://arxiv.org/abs/2003.00295" -#: ../../source/ref-telemetry.md:3 +#: ../../source/ref-changelog.md:868 msgid "" -"The Flower open-source project collects **anonymous** usage metrics to " -"make well-informed decisions to improve Flower. Doing this enables the " -"Flower team to understand how Flower is used and what challenges users " -"might face." +"**New PyTorch Lightning code example** " +"([#617](https://github.com/adap/flower/pull/617))" msgstr "" -"Le projet open-source Flower recueille des mesures d'utilisation " -"**anonymes** afin de prendre des décisions éclairées pour améliorer " -"Flower. Cela permet à l'équipe de Flower de comprendre comment Flower est" -" utilisé et quels sont les défis auxquels les utilisateurs peuvent être " -"confrontés." +"**Nouvel exemple de code PyTorch Lightning** " +"([#617](https://github.com/adap/flower/pull/617))" -#: ../../source/ref-telemetry.md:5 +#: ../../source/ref-changelog.md:870 msgid "" -"**Flower is a friendly framework for collaborative AI and data science.**" -" Staying true to this statement, Flower makes it easy to disable " -"telemetry for users that do not want to share anonymous usage metrics." +"**New Variational Auto-Encoder code example** " +"([#752](https://github.com/adap/flower/pull/752))" msgstr "" -"**Flower est un cadre convivial pour l'IA collaborative et la science des" -" données.** En restant fidèle à cette déclaration, Flower permet de " -"désactiver facilement la télémétrie pour les utilisateurs qui ne " -"souhaitent pas partager des mesures d'utilisation anonymes." - -#: ../../source/ref-telemetry.md:7 -msgid "Principles" -msgstr "Principes" +"**Nouvel exemple de code d'autocodage variationnel** " +"([#752](https://github.com/adap/flower/pull/752))" -#: ../../source/ref-telemetry.md:9 -msgid "We follow strong principles guarding anonymous usage metrics collection:" +#: ../../source/ref-changelog.md:872 +msgid "" +"**New scikit-learn code example** " +"([#748](https://github.com/adap/flower/pull/748))" msgstr "" -"Nous suivons des principes stricts concernant la collecte de données " -"anonymes sur l'utilisation :" +"**Nouvel exemple de code scikit-learn** " +"([#748](https://github.com/adap/flower/pull/748))" -#: ../../source/ref-telemetry.md:11 +#: ../../source/ref-changelog.md:874 msgid "" -"**Optional:** You will always be able to disable telemetry; read on to " -"learn “[How to opt-out](#how-to-opt-out)”." +"**New experimental TensorBoard strategy** " +"([#789](https://github.com/adap/flower/pull/789))" msgstr "" -"**Optionnel:** Tu pourras toujours désactiver la télémétrie ; lis la " -"suite pour apprendre \"[Comment se désengager](#how-to-opt-out)\"." 
+"**Nouvelle stratégie expérimentale TensorBoard** " +"([#789](https://github.com/adap/flower/pull/789))" -#: ../../source/ref-telemetry.md:12 +#: ../../source/ref-changelog.md:878 msgid "" -"**Anonymous:** The reported usage metrics are anonymous and do not " -"contain any personally identifiable information (PII). See “[Collected " -"metrics](#collected-metrics)” to understand what metrics are being " -"reported." +"Improved advanced TensorFlow code example " +"([#769](https://github.com/adap/flower/pull/769))" msgstr "" -"**Anonyme:** Les mesures d'utilisation rapportées sont anonymes et ne " -"contiennent aucune information personnelle identifiable (PII). Voir " -"\"[Collected metrics](#collected-metrics)\" pour comprendre quelles " -"mesures sont rapportées." +"Amélioration de l'exemple de code TensorFlow avancé " +"([#769](https://github.com/adap/flower/pull/769))" -#: ../../source/ref-telemetry.md:13 +#: ../../source/ref-changelog.md:879 msgid "" -"**Transparent:** You can easily inspect what anonymous metrics are being " -"reported; see the section “[How to inspect what is being reported](#how-" -"to-inspect-what-is-being-reported)”" +"Warning when `min_available_clients` is misconfigured " +"([#830](https://github.com/adap/flower/pull/830))" msgstr "" -"**Transparent:** Tu peux facilement inspecter les métriques anonymes qui " -"sont rapportées ; voir la section \"[Comment inspecter ce qui est " -"rapporté](#how-to-inspect-what-is-being-reported)\"" +"Avertissement lorsque `min_available_clients` est mal configuré " +"([#830](https://github.com/adap/flower/pull/830))" -#: ../../source/ref-telemetry.md:14 -#, fuzzy +#: ../../source/ref-changelog.md:880 msgid "" -"**Open for feedback:** You can always reach out to us if you have " -"feedback; see the section “[How to contact us](#how-to-contact-us)” for " -"details." +"Improved gRPC server docs " +"([#841](https://github.com/adap/flower/pull/841))" msgstr "" -"**Ouvert pour les commentaires:** Tu peux toujours nous contacter si tu " -"as des commentaires ; voir la section \"[Comment nous contacter ](#how-" -"to-contact-us)\" pour plus de détails." - -#: ../../source/ref-telemetry.md:16 -msgid "How to opt-out" -msgstr "Comment se désinscrire" +"Amélioration de la documentation sur le serveur gRPC " +"([#841](https://github.com/adap/flower/pull/841))" -#: ../../source/ref-telemetry.md:18 +#: ../../source/ref-changelog.md:881 msgid "" -"When Flower starts, it will check for an environment variable called " -"`FLWR_TELEMETRY_ENABLED`. Telemetry can easily be disabled by setting " -"`FLWR_TELEMETRY_ENABLED=0`. Assuming you are starting a Flower server or " -"client, simply do so by prepending your command as in:" +"Improved error message in `NumPyClient` " +"([#851](https://github.com/adap/flower/pull/851))" msgstr "" -"Lorsque Flower démarre, il vérifie la présence d'une variable " -"d'environnement appelée `FLWR_TELEMETRY_ENABLED`. La télémétrie peut " -"facilement être désactivée en réglant `FLWR_TELEMETRY_ENABLED=0`. 
En " -"supposant que tu démarres un serveur ou un client Flower, fais-le " -"simplement en faisant précéder ta commande de la façon suivante :" +"Amélioration du message d'erreur dans `NumPyClient` " +"([#851](https://github.com/adap/flower/pull/851))" -#: ../../source/ref-telemetry.md:24 +#: ../../source/ref-changelog.md:882 msgid "" -"Alternatively, you can export `FLWR_TELEMETRY_ENABLED=0` in, for example," -" `.bashrc` (or whatever configuration file applies to your environment) " -"to disable Flower telemetry permanently." +"Improved PyTorch quickstart code example " +"([#852](https://github.com/adap/flower/pull/852))" msgstr "" -"Tu peux aussi exporter `FLWR_TELEMETRY_ENABLED=0` dans, par exemple, " -"`.bashrc` (ou tout autre fichier de configuration qui s'applique à ton " -"environnement) pour désactiver la télémétrie de la fleur de façon " -"permanente." - -#: ../../source/ref-telemetry.md:26 -msgid "Collected metrics" -msgstr "Mesures collectées" - -#: ../../source/ref-telemetry.md:28 -msgid "Flower telemetry collects the following metrics:" -msgstr "La télémétrie des fleurs recueille les métriques suivantes :" +"Exemple de code de démarrage rapide PyTorch amélioré " +"([#852](https://github.com/adap/flower/pull/852))" -#: ../../source/ref-telemetry.md:30 +#: ../../source/ref-changelog.md:886 msgid "" -"**Flower version.** Understand which versions of Flower are currently " -"being used. This helps us to decide whether we should invest effort into " -"releasing a patch version for an older version of Flower or instead use " -"the bandwidth to build new features." +"**Disabled final distributed evaluation** " +"([#800](https://github.com/adap/flower/pull/800))" msgstr "" -"**Cela nous aide à décider si nous devons investir des efforts dans la " -"publication d'une version corrective pour une version plus ancienne de " -"Flower ou si nous devons plutôt utiliser la bande passante pour " -"développer de nouvelles fonctionnalités." +"**Désactivé l'évaluation finale distribuée** " +"([#800](https://github.com/adap/flower/pull/800))" -#: ../../source/ref-telemetry.md:32 +#: ../../source/ref-changelog.md:888 msgid "" -"**Operating system.** Enables us to answer questions such as: *Should we " -"create more guides for Linux, macOS, or Windows?*" +"Prior behaviour was to perform a final round of distributed evaluation on" +" all connected clients, which is often not required (e.g., when using " +"server-side evaluation). The prior behaviour can be enabled by passing " +"`force_final_distributed_eval=True` to `start_server`." msgstr "" -"**Système d'exploitation.** Nous permet de répondre à des questions " -"telles que : *Faudrait-il créer plus de guides pour Linux, macOS ou " -"Windows ?" +"Le comportement précédent consistait à effectuer un dernier tour " +"d'évaluation distribuée sur tous les clients connectés, ce qui n'est " +"souvent pas nécessaire (par exemple, lors de l'utilisation de " +"l'évaluation côté serveur). Le comportement précédent peut être activé en" +" passant `force_final_distributed_eval=True` à `start_server`." -#: ../../source/ref-telemetry.md:34 +#: ../../source/ref-changelog.md:890 msgid "" -"**Python version.** Knowing the Python version helps us, for example, to " -"decide whether we should invest effort into supporting old versions of " -"Python or stop supporting them and start taking advantage of new Python " -"features." 
+"**Renamed q-FedAvg strategy** " +"([#802](https://github.com/adap/flower/pull/802))" msgstr "" -"**Version de Python.** Connaître la version de Python nous aide, par " -"exemple, à décider si nous devons investir des efforts dans la prise en " -"charge des anciennes versions de Python ou cesser de les prendre en " -"charge et commencer à tirer parti des nouvelles fonctionnalités de " -"Python." +"**Renommé stratégie q-FedAvg** " +"([#802](https://github.com/adap/flower/pull/802))" -#: ../../source/ref-telemetry.md:36 +#: ../../source/ref-changelog.md:892 msgid "" -"**Hardware properties.** Understanding the hardware environment that " -"Flower is being used in helps to decide whether we should, for example, " -"put more effort into supporting low-resource environments." +"The strategy named `QffedAvg` was renamed to `QFedAvg` to better reflect " +"the notation given in the original paper (q-FFL is the optimization " +"objective, q-FedAvg is the proposed solver). Note the original (now " +"deprecated) `QffedAvg` class is still available for compatibility reasons" +" (it will be removed in a future release)." msgstr "" -"**Comprendre l'environnement matériel dans lequel Flower est utilisé " -"permet de décider si nous devrions, par exemple, faire plus d'efforts " -"pour prendre en charge les environnements à faibles ressources." +"La stratégie nommée `QffedAvg` a été renommée en `QFedAvg` pour mieux " +"refléter la notation donnée dans l'article original (q-FFL est l'objectif" +" d'optimisation, q-FedAvg est le solveur proposé). Notez que la classe " +"`QffedAvg` originale (maintenant obsolète) est toujours disponible pour " +"des raisons de compatibilité (elle sera supprimée dans une prochaine " +"version)." -#: ../../source/ref-telemetry.md:38 +#: ../../source/ref-changelog.md:894 msgid "" -"**Execution mode.** Knowing what execution mode Flower starts in enables " -"us to understand how heavily certain features are being used and better " -"prioritize based on that." +"**Deprecated and renamed code example** `simulation_pytorch` **to** " +"`simulation_pytorch_legacy` " +"([#791](https://github.com/adap/flower/pull/791))" msgstr "" -"**Mode d'exécution** Connaître le mode d'exécution dans lequel Flower " -"démarre nous permet de comprendre à quel point certaines fonctionnalités " -"sont utilisées et de mieux établir les priorités en fonction de cela." +"**Exemple de code déprécié et renommé** `simulation_pytorch` **en** " +"`simulation_pytorch_legacy` " +"([#791](https://github.com/adap/flower/pull/791))" -#: ../../source/ref-telemetry.md:40 +#: ../../source/ref-changelog.md:896 msgid "" -"**Cluster.** Flower telemetry assigns a random in-memory cluster ID each " -"time a Flower workload starts. This allows us to understand which device " -"types not only start Flower workloads but also successfully complete " -"them." +"This example has been replaced by a new example. The new example is based" +" on the experimental virtual client engine, which will become the new " +"default way of doing most types of large-scale simulations in Flower. The" +" existing example was kept for reference purposes, but it might be " +"removed in the future." msgstr "" -"**Cluster.** La télémétrie Flower attribue un ID de cluster en mémoire " -"aléatoire à chaque fois qu'une charge de travail Flower démarre. Cela " -"nous permet de comprendre quels types d'appareils non seulement démarrent" -" les charges de travail Flower, mais aussi les terminent avec succès." 
+"Cet exemple a été remplacé par un nouvel exemple. Le nouvel exemple est " +"basé sur le moteur expérimental du client virtuel, qui deviendra la " +"nouvelle méthode par défaut pour effectuer la plupart des types de " +"simulations à grande échelle dans Flower. L'exemple existant a été " +"conservé à des fins de référence, mais il pourrait être supprimé à " +"l'avenir." -#: ../../source/ref-telemetry.md:42 -msgid "" -"**Source.** Flower telemetry tries to store a random source ID in " -"`~/.flwr/source` the first time a telemetry event is generated. The " -"source ID is important to identify whether an issue is recurring or " -"whether an issue is triggered by multiple clusters running concurrently " -"(which often happens in simulation). For example, if a device runs " -"multiple workloads at the same time, and this results in an issue, then, " -"in order to reproduce the issue, multiple workloads must be started at " -"the same time." -msgstr "" -"**Source.** La télémétrie de Flower essaie de stocker un ID de source " -"aléatoire dans `~/.flwr/source` la première fois qu'un événement de " -"télémétrie est généré. L'ID de source est important pour identifier si un" -" problème est récurrent ou si un problème est déclenché par plusieurs " -"clusters fonctionnant simultanément (ce qui arrive souvent en " -"simulation). Par exemple, si un périphérique exécute plusieurs charges de" -" travail en même temps, et que cela entraîne un problème, alors, afin de " -"reproduire le problème, plusieurs charges de travail doivent être " -"démarrées en même temps." +#: ../../source/ref-changelog.md:898 +msgid "v0.16.0 (2021-05-11)" +msgstr "v0.16.0 (2021-05-11)" -#: ../../source/ref-telemetry.md:44 +#: ../../source/ref-changelog.md:902 msgid "" -"You may delete the source ID at any time. If you wish for all events " -"logged under a specific source ID to be deleted, you can send a deletion " -"request mentioning the source ID to `telemetry@flower.ai`. All events " -"related to that source ID will then be permanently deleted." +"**New built-in strategies** " +"([#549](https://github.com/adap/flower/pull/549))" msgstr "" -"Tu peux supprimer l'identifiant de la source à tout moment. Si tu " -"souhaites que tous les événements enregistrés sous un identifiant de " -"source spécifique soient supprimés, tu peux envoyer une demande de " -"suppression mentionnant l'identifiant de source à `telemetry@flower.ai`. " -"Tous les événements liés à cet identifiant de source seront alors " -"définitivement supprimés." +"**Nouvelles stratégies intégrées** " +"([#549](https://github.com/adap/flower/pull/549))" -#: ../../source/ref-telemetry.md:46 -msgid "" -"We will not collect any personally identifiable information. If you think" -" any of the metrics collected could be misused in any way, please [get in" -" touch with us](#how-to-contact-us). We will update this page to reflect " -"any changes to the metrics collected and publish changes in the " -"changelog." -msgstr "" -"Nous ne collecterons aucune information personnelle identifiable. Si tu " -"penses que l'une des métriques collectées pourrait être utilisée à " -"mauvais escient de quelque manière que ce soit, merci de [nous " -"contacter](#commentnouscontacter). Nous mettrons à jour cette page pour " -"refléter toute modification des métriques collectées et nous publierons " -"les changements dans le journal des modifications (changelog)." 
+#: ../../source/ref-changelog.md:904 +msgid "(abstract) FedOpt" +msgstr "(résumé) FedOpt" -#: ../../source/ref-telemetry.md:48 +#: ../../source/ref-changelog.md:907 msgid "" -"If you think other metrics would be helpful for us to better guide our " -"decisions, please let us know! We will carefully review them; if we are " -"confident that they do not compromise user privacy, we may add them." +"**Custom metrics for server and strategies** " +"([#717](https://github.com/adap/flower/pull/717))" msgstr "" -"Si tu penses que d'autres mesures nous seraient utiles pour mieux " -"orienter nos décisions, fais-le nous savoir ! Nous les examinerons " -"attentivement ; si nous sommes convaincus qu'elles ne compromettent pas " -"la vie privée des utilisateurs, nous pourrons les ajouter." - -#: ../../source/ref-telemetry.md:50 -msgid "How to inspect what is being reported" -msgstr "Comment inspecter ce qui est rapporté" +"**Métriques personnalisées pour le serveur et les stratégies** " +"([#717](https://github.com/adap/flower/pull/717))" -#: ../../source/ref-telemetry.md:52 +#: ../../source/ref-changelog.md:909 msgid "" -"We wanted to make it very easy for you to inspect what anonymous usage " -"metrics are reported. You can view all the reported telemetry information" -" by setting the environment variable `FLWR_TELEMETRY_LOGGING=1`. Logging " -"is disabled by default. You may use logging independently from " -"`FLWR_TELEMETRY_ENABLED` so that you can inspect the telemetry feature " -"without sending any metrics." +"The Flower server is now fully task-agnostic, all remaining instances of " +"task-specific metrics (such as `accuracy`) have been replaced by custom " +"metrics dictionaries. Flower 0.15 introduced the capability to pass a " +"dictionary containing custom metrics from client to server. As of this " +"release, custom metrics replace task-specific metrics on the server." msgstr "" -"Nous avons voulu qu'il soit très facile pour toi d'inspecter les mesures " -"d'utilisation anonymes qui sont rapportées. Tu peux voir toutes les " -"informations de télémétrie rapportées en définissant la variable " -"d'environnement `FLWR_TELEMETRY_LOGGING=1`. La journalisation est " -"désactivée par défaut. Tu peux utiliser la journalisation indépendamment " -"de `FLWR_TELEMETRY_ENABLED` afin d'inspecter la fonction de télémétrie " -"sans envoyer de mesures." +"Le serveur Flower est maintenant totalement agnostique, toutes les " +"instances restantes de métriques spécifiques à une tâche (telles que " +"`accuracy`) ont été remplacées par des dictionnaires de métriques " +"personnalisées. Flower 0.15 a introduit la possibilité de passer un " +"dictionnaire contenant des métriques personnalisées du client au serveur." +" À partir de cette version, les métriques personnalisées remplacent les " +"métriques spécifiques à une tâche sur le serveur." -#: ../../source/ref-telemetry.md:58 +#: ../../source/ref-changelog.md:911 +#, fuzzy msgid "" -"The inspect Flower telemetry without sending any anonymous usage metrics," -" use both environment variables:" +"Custom metric dictionaries are now used in two user-facing APIs: they are" +" returned from Strategy methods `aggregate_fit`/`aggregate_evaluate` and " +"they enable evaluation functions passed to built-in strategies (via " +"`eval_fn`) to return more than two evaluation metrics. Strategies can " +"even return *aggregated* metrics dictionaries for the server to keep " +"track of." 
msgstr "" -"L'inspecteur Flower telemetry sans envoyer de métriques d'utilisation " -"anonymes, utilise les deux variables d'environnement :" - -#: ../../source/ref-telemetry.md:64 -msgid "How to contact us" -msgstr "Comment nous contacter" +"Les dictionnaires de métriques personnalisés sont maintenant utilisés " +"dans deux API orientées vers l'utilisateur : ils sont renvoyés par les " +"méthodes de stratégie `aggregate_fit`/`aggregate_evaluate` et ils " +"permettent aux fonctions d'évaluation passées aux stratégies intégrées " +"(via `eval_fn`) de renvoyer plus de deux métriques d'évaluation. Les " +"stratégies peuvent même renvoyer des dictionnaires de métriques " +"*agrégées* pour que le serveur puisse en garder la trace." -#: ../../source/ref-telemetry.md:66 +#: ../../source/ref-changelog.md:913 +#, fuzzy msgid "" -"We want to hear from you. If you have any feedback or ideas on how to " -"improve the way we handle anonymous usage metrics, reach out to us via " -"[Slack](https://flower.ai/join-slack/) (channel `#telemetry`) or email " -"(`telemetry@flower.ai`)." +"Strategy implementations should migrate their `aggregate_fit` and " +"`aggregate_evaluate` methods to the new return type (e.g., by simply " +"returning an empty `{}`), server-side evaluation functions should migrate" +" from `return loss, accuracy` to `return loss, {\"accuracy\": accuracy}`." msgstr "" -"Si tu as des commentaires ou des idées pour améliorer la façon dont nous " -"traitons les mesures d'utilisation anonymes, contacte-nous via " -"[Slack](https://flower.ai/join-slack/) (canal `#telemetry`) ou par " -"courriel (`telemetry@flower.ai`)." +"Les implémentations de Stratey doivent migrer leurs méthodes " +"`aggregate_fit` et `aggregate_evaluate` vers le nouveau type de retour " +"(par exemple, en renvoyant simplement un `{}` vide), les fonctions " +"d'évaluation côté serveur doivent migrer de `return loss, accuracy` à " +"`return loss, {\"accuracy\" : accuracy}`." -#: ../../source/tutorial-quickstart-android.rst:-1 +#: ../../source/ref-changelog.md:915 msgid "" -"Read this Federated Learning quickstart tutorial for creating an Android " -"app using Flower." +"Flower 0.15-style return types are deprecated (but still supported), " +"compatibility will be removed in a future release." msgstr "" +"Les types de retour du style Flower 0.15 sont dépréciés (mais toujours " +"pris en charge), la compatibilité sera supprimée dans une prochaine " +"version." -#: ../../source/tutorial-quickstart-android.rst:5 -#, fuzzy -msgid "Quickstart Android" -msgstr "Démarrage rapide des Pandas" - -#: ../../source/tutorial-quickstart-android.rst:10 -#, fuzzy +#: ../../source/ref-changelog.md:917 msgid "" -"Let's build a federated learning system using TFLite and Flower on " -"Android!" +"**Migration warnings for deprecated functionality** " +"([#690](https://github.com/adap/flower/pull/690))" msgstr "" -"Construisons un système d'apprentissage fédéré en utilisant fastai et " -"Flower !" +"**Avertissements de migration pour les fonctionnalités obsolètes** " +"([#690](https://github.com/adap/flower/pull/690))" -#: ../../source/tutorial-quickstart-android.rst:12 -#, fuzzy +#: ../../source/ref-changelog.md:919 msgid "" -"Please refer to the `full code example " -"`_ to learn " -"more." +"Earlier versions of Flower were often migrated to new APIs, while " +"maintaining compatibility with legacy APIs. This release introduces " +"detailed warning messages if usage of deprecated APIs is detected. 
The " +"new warning messages often provide details on how to migrate to more " +"recent APIs, thus easing the transition from one release to another." msgstr "" -"Réfère-toi à l'exemple de code complet " -"`_ " -"pour en savoir plus." +"Les versions antérieures de Flower ont souvent été migrées vers de " +"nouvelles API, tout en maintenant la compatibilité avec les anciennes " +"API. Cette version introduit des messages d'avertissement détaillés si " +"l'utilisation d'API obsolètes est détectée. Les nouveaux messages " +"d'avertissement fournissent souvent des détails sur la façon de migrer " +"vers des API plus récentes, facilitant ainsi la transition d'une version " +"à l'autre." -#: ../../source/tutorial-quickstart-fastai.rst:-1 +#: ../../source/ref-changelog.md:921 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with FastAI to train a vision model on CIFAR-10." +"Improved docs and docstrings " +"([#691](https://github.com/adap/flower/pull/691) " +"[#692](https://github.com/adap/flower/pull/692) " +"[#713](https://github.com/adap/flower/pull/713))" msgstr "" +"Amélioration des docs et des docstrings " +"([#691](https://github.com/adap/flower/pull/691) " +"[#692](https://github.com/adap/flower/pull/692) " +"[#713](https://github.com/adap/flower/pull/713))" -#: ../../source/tutorial-quickstart-fastai.rst:5 -msgid "Quickstart fastai" -msgstr "Démarrage rapide fastai" - -#: ../../source/tutorial-quickstart-fastai.rst:10 -msgid "Let's build a federated learning system using fastai and Flower!" -msgstr "" -"Construisons un système d'apprentissage fédéré en utilisant fastai et " -"Flower !" +#: ../../source/ref-changelog.md:923 +msgid "MXNet example and documentation" +msgstr "Exemple et documentation MXNet" -#: ../../source/tutorial-quickstart-fastai.rst:12 -#, fuzzy +#: ../../source/ref-changelog.md:925 msgid "" -"Please refer to the `full code example " -"`_ " -"to learn more." +"FedBN implementation in example PyTorch: From Centralized To Federated " +"([#696](https://github.com/adap/flower/pull/696) " +"[#702](https://github.com/adap/flower/pull/702) " +"[#705](https://github.com/adap/flower/pull/705))" msgstr "" -"Réfère-toi à l'exemple de code complet " -"`_ " -"pour en savoir plus." +"Mise en œuvre de FedBN dans l'exemple PyTorch : De la centralisation à la" +" fédération ([#696](https://github.com/adap/flower/pull/696) " +"[#702](https://github.com/adap/flower/pull/702) " +"[#705](https://github.com/adap/flower/pull/705))" -#: ../../source/tutorial-quickstart-huggingface.rst:-1 +#: ../../source/ref-changelog.md:929 msgid "" -"Check out this Federating Learning quickstart tutorial for using Flower " -"with HuggingFace Transformers in order to fine-tune an LLM." +"**Serialization-agnostic server** " +"([#721](https://github.com/adap/flower/pull/721))" msgstr "" +"**Serveur agnostique de sérialisation** " +"([#721](https://github.com/adap/flower/pull/721))" -#: ../../source/tutorial-quickstart-huggingface.rst:5 -msgid "Quickstart 🤗 Transformers" -msgstr "Démarrage rapide 🤗 Transformateurs" - -#: ../../source/tutorial-quickstart-huggingface.rst:10 +#: ../../source/ref-changelog.md:931 msgid "" -"Let's build a federated learning system using Hugging Face Transformers " -"and Flower!" +"The Flower server is now fully serialization-agnostic. Prior usage of " +"class `Weights` (which represents parameters as deserialized NumPy " +"ndarrays) was replaced by class `Parameters` (e.g., in `Strategy`). 
" +"`Parameters` objects are fully serialization-agnostic and represents " +"parameters as byte arrays, the `tensor_type` attributes indicates how " +"these byte arrays should be interpreted (e.g., for " +"serialization/deserialization)." msgstr "" -"Construisons un système d'apprentissage fédéré à l'aide des " -"transformateurs Hugging Face et de Flower !" +"Le serveur Flower est désormais totalement agnostique en matière de " +"sérialisation. L'utilisation antérieure de la classe `Weights` (qui " +"représente les paramètres sous forme de tableaux NumPy désérialisés) a " +"été remplacée par la classe `Parameters` (par exemple, dans `Strategy`). " +"Les objets `Parameters` sont totalement agnostiques en matière de " +"sérialisation et représentent les paramètres sous forme de tableaux " +"d'octets, les attributs `tensor_type` indiquent comment ces tableaux " +"d'octets doivent être interprétés (par exemple, pour la " +"sérialisation/désérialisation)." -#: ../../source/tutorial-quickstart-huggingface.rst:12 +#: ../../source/ref-changelog.md:933 msgid "" -"We will leverage Hugging Face to federate the training of language models" -" over multiple clients using Flower. More specifically, we will fine-tune" -" a pre-trained Transformer model (distilBERT) for sequence classification" -" over a dataset of IMDB ratings. The end goal is to detect if a movie " -"rating is positive or negative." +"Built-in strategies implement this approach by handling serialization and" +" deserialization to/from `Weights` internally. Custom/3rd-party Strategy " +"implementations should update to the slightly changed Strategy method " +"definitions. Strategy authors can consult PR " +"[#721](https://github.com/adap/flower/pull/721) to see how strategies can" +" easily migrate to the new format." msgstr "" -"Nous nous appuierons sur Hugging Face pour fédérer l'entraînement de " -"modèles de langage sur plusieurs clients à l'aide de Flower. Plus " -"précisément, nous mettrons au point un modèle Transformer pré-entraîné " -"(distilBERT) pour la classification de séquences sur un ensemble de " -"données d'évaluations IMDB. L'objectif final est de détecter si " -"l'évaluation d'un film est positive ou négative." - -#: ../../source/tutorial-quickstart-huggingface.rst:18 -msgid "Dependencies" -msgstr "Dépendances" +"Les stratégies intégrées mettent en œuvre cette approche en gérant en " +"interne la sérialisation et la désérialisation de `Weights`. Les " +"implémentations de stratégies personnalisées ou tierces doivent être " +"mises à jour avec les définitions de méthodes de stratégie légèrement " +"modifiées. Les auteurs de stratégies peuvent consulter le PR " +"[#721](https://github.com/adap/flower/pull/721) pour voir comment les " +"stratégies peuvent facilement migrer vers le nouveau format." -#: ../../source/tutorial-quickstart-huggingface.rst:20 +#: ../../source/ref-changelog.md:935 msgid "" -"To follow along this tutorial you will need to install the following " -"packages: :code:`datasets`, :code:`evaluate`, :code:`flwr`, " -":code:`torch`, and :code:`transformers`. This can be done using " -":code:`pip`:" +"Deprecated `flwr.server.Server.evaluate`, use " +"`flwr.server.Server.evaluate_round` instead " +"([#717](https://github.com/adap/flower/pull/717))" msgstr "" -"Pour suivre ce tutoriel, tu devras installer les paquets suivants : " -":code:`datasets`, :code:`evaluate`, :code:`flwr`, :code:`torch`, et " -":code:`transformers`. 
Cela peut être fait en utilisant :code:`pip` :" - -#: ../../source/tutorial-quickstart-huggingface.rst:30 -msgid "Standard Hugging Face workflow" -msgstr "Flux de travail standard pour le visage" +"Déclassé `flwr.server.Server.evaluate`, utiliser " +"`flwr.server.Server.evaluate_round` à la place " +"([#717](https://github.com/adap/flower/pull/717))" -#: ../../source/tutorial-quickstart-huggingface.rst:33 -msgid "Handling the data" -msgstr "Traitement des données" +#: ../../source/ref-changelog.md:937 +msgid "v0.15.0 (2021-03-12)" +msgstr "v0.15.0 (2021-03-12)" -#: ../../source/tutorial-quickstart-huggingface.rst:35 +#: ../../source/ref-changelog.md:941 msgid "" -"To fetch the IMDB dataset, we will use Hugging Face's :code:`datasets` " -"library. We then need to tokenize the data and create :code:`PyTorch` " -"dataloaders, this is all done in the :code:`load_data` function:" +"**Server-side parameter initialization** " +"([#658](https://github.com/adap/flower/pull/658))" msgstr "" -"Pour récupérer le jeu de données IMDB, nous utiliserons la bibliothèque " -":code:`datasets` de Hugging Face. Nous devons ensuite tokeniser les " -"données et créer des :code:`PyTorch` dataloaders, ce qui est fait dans la" -" fonction :code:`load_data` :" - -#: ../../source/tutorial-quickstart-huggingface.rst:81 -msgid "Training and testing the model" -msgstr "Former et tester le modèle" +"**Initialisation des paramètres côté serveur** " +"([#658](https://github.com/adap/flower/pull/658))" -#: ../../source/tutorial-quickstart-huggingface.rst:83 +#: ../../source/ref-changelog.md:943 msgid "" -"Once we have a way of creating our trainloader and testloader, we can " -"take care of the training and testing. This is very similar to any " -":code:`PyTorch` training or testing loop:" +"Model parameters can now be initialized on the server-side. Server-side " +"parameter initialization works via a new `Strategy` method called " +"`initialize_parameters`." msgstr "" -"Une fois que nous avons trouvé un moyen de créer notre trainloader et " -"notre testloader, nous pouvons nous occuper de l'entraînement et du test." -" C'est très similaire à n'importe quelle boucle d'entraînement ou de test" -" :code:`PyTorch` :" - -#: ../../source/tutorial-quickstart-huggingface.rst:121 -msgid "Creating the model itself" -msgstr "Créer le modèle lui-même" +"Les paramètres du modèle peuvent maintenant être initialisés côté " +"serveur. L'initialisation des paramètres côté serveur fonctionne via une " +"nouvelle méthode `Strategy` appelée `initialize_parameters`." -#: ../../source/tutorial-quickstart-huggingface.rst:123 +#: ../../source/ref-changelog.md:945 msgid "" -"To create the model itself, we will just load the pre-trained distillBERT" -" model using Hugging Face’s :code:`AutoModelForSequenceClassification` :" +"Built-in strategies support a new constructor argument called " +"`initial_parameters` to set the initial parameters. Built-in strategies " +"will provide these initial parameters to the server on startup and then " +"delete them to free the memory afterwards." 
msgstr "" -"Pour créer le modèle lui-même, nous allons simplement charger le modèle " -"distillBERT pré-entraîné en utilisant le " -":code:`AutoModelForSequenceClassification` de Hugging Face :" - -#: ../../source/tutorial-quickstart-huggingface.rst:136 -msgid "Federating the example" -msgstr "Fédérer l'exemple" - -#: ../../source/tutorial-quickstart-huggingface.rst:139 -msgid "Creating the IMDBClient" -msgstr "Création du client IMDBC" +"Les stratégies intégrées prennent en charge un nouvel argument du " +"constructeur appelé `initial_parameters` pour définir les paramètres " +"initiaux. Les stratégies intégrées fourniront ces paramètres initiaux au " +"serveur au démarrage et les supprimeront ensuite pour libérer la mémoire." -#: ../../source/tutorial-quickstart-huggingface.rst:141 +#: ../../source/ref-changelog.md:964 msgid "" -"To federate our example to multiple clients, we first need to write our " -"Flower client class (inheriting from :code:`flwr.client.NumPyClient`). " -"This is very easy, as our model is a standard :code:`PyTorch` model:" +"If no initial parameters are provided to the strategy, the server will " +"continue to use the current behaviour (namely, it will ask one of the " +"connected clients for its parameters and use these as the initial global " +"parameters)." msgstr "" -"Pour fédérer notre exemple à plusieurs clients, nous devons d'abord " -"écrire notre classe de client Flower (héritant de " -":code:`flwr.client.NumPyClient`). C'est très facile, car notre modèle est" -" un modèle :code:`PyTorch` standard :" +"Si aucun paramètre initial n'est fourni à la stratégie, le serveur " +"continuera à utiliser le comportement actuel (à savoir qu'il demandera à " +"l'un des clients connectés ses paramètres et les utilisera comme " +"paramètres globaux initiaux)." -#: ../../source/tutorial-quickstart-huggingface.rst:169 +#: ../../source/ref-changelog.md:966 +msgid "Deprecations" +msgstr "Dépréciations" + +#: ../../source/ref-changelog.md:968 msgid "" -"The :code:`get_parameters` function lets the server get the client's " -"parameters. Inversely, the :code:`set_parameters` function allows the " -"server to send its parameters to the client. Finally, the :code:`fit` " -"function trains the model locally for the client, and the " -":code:`evaluate` function tests the model locally and returns the " -"relevant metrics." +"Deprecate `flwr.server.strategy.DefaultStrategy` (migrate to " +"`flwr.server.strategy.FedAvg`, which is equivalent)" msgstr "" -"La fonction :code:`get_parameters` permet au serveur d'obtenir les " -"paramètres du client. Inversement, la fonction :code:`set_parameters` " -"permet au serveur d'envoyer ses paramètres au client. Enfin, la fonction " -":code:`fit` forme le modèle localement pour le client, et la fonction " -":code:`evaluate` teste le modèle localement et renvoie les mesures " -"correspondantes." +"Déclasser `flwr.server.strategy.DefaultStrategy` (migrer vers " +"`flwr.server.strategy.FedAvg`, qui est équivalent)" -#: ../../source/tutorial-quickstart-huggingface.rst:175 -msgid "Starting the server" -msgstr "Démarrer le serveur" +#: ../../source/ref-changelog.md:970 +msgid "v0.14.0 (2021-02-18)" +msgstr "v0.14.0 (2021-02-18)" -#: ../../source/tutorial-quickstart-huggingface.rst:177 +#: ../../source/ref-changelog.md:974 msgid "" -"Now that we have a way to instantiate clients, we need to create our " -"server in order to aggregate the results. 
Using Flower, this can be done " -"very easily by first choosing a strategy (here, we are using " -":code:`FedAvg`, which will define the global weights as the average of " -"all the clients' weights at each round) and then using the " -":code:`flwr.server.start_server` function:" +"**Generalized** `Client.fit` **and** `Client.evaluate` **return values** " +"([#610](https://github.com/adap/flower/pull/610) " +"[#572](https://github.com/adap/flower/pull/572) " +"[#633](https://github.com/adap/flower/pull/633))" msgstr "" -"Maintenant que nous avons un moyen d'instancier les clients, nous devons " -"créer notre serveur afin d'agréger les résultats. Avec Flower, cela peut " -"être fait très facilement en choisissant d'abord une stratégie (ici, nous" -" utilisons :code:`FedAvg`, qui définira les poids globaux comme la " -"moyenne des poids de tous les clients à chaque tour) et en utilisant " -"ensuite la fonction :code:`flwr.server.start_server` :" +"**Généralisé** `Client.fit` **et** `Client.evaluate` **valeurs de " +"retour** ([#610](https://github.com/adap/flower/pull/610) " +"[#572](https://github.com/adap/flower/pull/572) " +"[#633](https://github.com/adap/flower/pull/633))" -#: ../../source/tutorial-quickstart-huggingface.rst:205 +#: ../../source/ref-changelog.md:976 msgid "" -"The :code:`weighted_average` function is there to provide a way to " -"aggregate the metrics distributed amongst the clients (basically this " -"allows us to display a nice average accuracy and loss for every round)." +"Clients can now return an additional dictionary mapping `str` keys to " +"values of the following types: `bool`, `bytes`, `float`, `int`, `str`. " +"This means one can return almost arbitrary values from `fit`/`evaluate` " +"and make use of them on the server side!" msgstr "" -"La fonction :code:`weighted_average` est là pour fournir un moyen " -"d'agréger les mesures réparties entre les clients (en gros, cela nous " -"permet d'afficher une belle moyenne de précision et de perte pour chaque " -"tour)." - -#: ../../source/tutorial-quickstart-huggingface.rst:209 -msgid "Putting everything together" -msgstr "Tout assembler" - -#: ../../source/tutorial-quickstart-huggingface.rst:211 -msgid "We can now start client instances using:" -msgstr "Nous pouvons maintenant démarrer des instances de clients en utilisant :" +"Les clients peuvent maintenant renvoyer un dictionnaire supplémentaire " +"associant les clés `str` aux valeurs des types suivants : `bool`, " +"`bytes`, `float`, `int`, `str`. Cela signifie que l'on peut renvoyer des " +"valeurs presque arbitraires de `fit`/`evaluate` et les utiliser du côté " +"du serveur !" -#: ../../source/tutorial-quickstart-huggingface.rst:221 +#: ../../source/ref-changelog.md:978 msgid "" -"And they will be able to connect to the server and start the federated " -"training." -msgstr "Et ils pourront se connecter au serveur et démarrer la formation fédérée." +"This improvement also allowed for more consistent return types between " +"`fit` and `evaluate`: `evaluate` should now return a tuple `(float, int, " +"dict)` representing the loss, number of examples, and a dictionary " +"holding arbitrary problem-specific values like accuracy." 
+msgstr "" +"Cette amélioration a également permis de rendre plus cohérents les types " +"de retour entre `fit` et `evaluate` : `evaluate` devrait maintenant " +"retourner un tuple `(float, int, dict)` représentant la perte, le nombre " +"d'exemples, et un dictionnaire contenant des valeurs arbitraires " +"spécifiques au problème comme la précision." -#: ../../source/tutorial-quickstart-huggingface.rst:223 -#, fuzzy +#: ../../source/ref-changelog.md:980 msgid "" -"If you want to check out everything put together, you should check out " -"the `full code example `_ ." +"In case you wondered: this feature is compatible with existing projects, " +"the additional dictionary return value is optional. New code should " +"however migrate to the new return types to be compatible with upcoming " +"Flower releases (`fit`: `List[np.ndarray], int, Dict[str, Scalar]`, " +"`evaluate`: `float, int, Dict[str, Scalar]`). See the example below for " +"details." msgstr "" -"Si tu veux voir tout ce qui est mis ensemble, tu devrais consulter " -"l'exemple de code complet : " -"[https://github.com/adap/flower/tree/main/examples/quickstart-" -"huggingface](https://github.com/adap/flower/tree/main/examples" -"/quickstart-huggingface)." +"Au cas où tu te poserais la question : cette fonctionnalité est " +"compatible avec les projets existants, la valeur de retour supplémentaire" +" du dictionnaire est facultative. Le nouveau code doit cependant migrer " +"vers les nouveaux types de retour pour être compatible avec les " +"prochaines versions de Flower (`fit` : `List[np.ndarray], int, Dict[str, " +"Scalar]`, `evaluate` : `float, int, Dict[str, Scalar]`). Voir l'exemple " +"ci-dessous pour plus de détails." -#: ../../source/tutorial-quickstart-huggingface.rst:226 +#: ../../source/ref-changelog.md:982 msgid "" -"Of course, this is a very basic example, and a lot can be added or " -"modified, it was just to showcase how simply we could federate a Hugging " -"Face workflow using Flower." +"*Code example:* note the additional dictionary return values in both " +"`FlwrClient.fit` and `FlwrClient.evaluate`:" msgstr "" -"Bien sûr, c'est un exemple très basique, et beaucoup de choses peuvent " -"être ajoutées ou modifiées, il s'agissait juste de montrer avec quelle " -"simplicité on pouvait fédérer un flux de travail Hugging Face à l'aide de" -" Flower." +"*Exemple de code:* note les valeurs de retour du dictionnaire " +"supplémentaires dans `FlwrClient.fit` et `FlwrClient.evaluate` :" -#: ../../source/tutorial-quickstart-huggingface.rst:229 +#: ../../source/ref-changelog.md:997 msgid "" -"Note that in this example we used :code:`PyTorch`, but we could have very" -" well used :code:`TensorFlow`." +"**Generalized** `config` **argument in** `Client.fit` **and** " +"`Client.evaluate` ([#595](https://github.com/adap/flower/pull/595))" msgstr "" -"Notez que dans cet exemple, nous avons utilisé :code:`PyTorch`, mais nous" -" aurions très bien pu utiliser :code:`TensorFlow`." +"**Généralisé** `config` **argument dans** `Client.fit` **et** " +"`Client.evaluate` ([#595](https://github.com/adap/flower/pull/595))" -#: ../../source/tutorial-quickstart-ios.rst:-1 +#: ../../source/ref-changelog.md:999 msgid "" -"Read this Federated Learning quickstart tutorial for creating an iOS app " -"using Flower to train a neural network on MNIST." +"The `config` argument used to be of type `Dict[str, str]`, which means " +"that dictionary values were expected to be strings. 
The new release " +"generalizes this to enable values of the following types: `bool`, " +"`bytes`, `float`, `int`, `str`." msgstr "" +"L'argument `config` était auparavant de type `Dict[str, str]`, ce qui " +"signifie que les valeurs du dictionnaire devaient être des chaînes. La " +"nouvelle version généralise cela pour permettre les valeurs des types " +"suivants : `bool`, `bytes`, `float`, `int`, `str`." -#: ../../source/tutorial-quickstart-ios.rst:5 -#, fuzzy -msgid "Quickstart iOS" -msgstr "Démarrage rapide XGBoost" - -#: ../../source/tutorial-quickstart-ios.rst:10 -#, fuzzy +#: ../../source/ref-changelog.md:1001 msgid "" -"In this tutorial we will learn how to train a Neural Network on MNIST " -"using Flower and CoreML on iOS devices." +"This means one can now pass almost arbitrary values to `fit`/`evaluate` " +"using the `config` dictionary. Yay, no more `str(epochs)` on the server-" +"side and `int(config[\"epochs\"])` on the client side!" msgstr "" -"Dans ce tutoriel, nous allons apprendre, comment former un réseau " -"neuronal convolutif sur MNIST en utilisant Flower et PyTorch." +"Cela signifie que l'on peut maintenant passer des valeurs presque " +"arbitraires à `fit`/`evaluate` en utilisant le dictionnaire `config`. " +"Yay, plus de `str(epochs)` du côté serveur et `int(config[\"epochs\"])` " +"du côté client !" -#: ../../source/tutorial-quickstart-ios.rst:12 -#, fuzzy +#: ../../source/ref-changelog.md:1003 msgid "" -"First of all, for running the Flower Python server, it is recommended to " -"create a virtual environment and run everything within a :doc:`virtualenv" -" `. For the Flower client " -"implementation in iOS, it is recommended to use Xcode as our IDE." +"*Code example:* note that the `config` dictionary now contains non-`str` " +"values in both `Client.fit` and `Client.evaluate`:" msgstr "" -"Tout d'abord, il est recommandé de créer un environnement virtuel et de " -"tout exécuter au sein d'un `virtualenv `_." +"*Exemple de code:* Notez que le dictionnaire `config` contient maintenant" +" des valeurs autres que `str` dans `Client.fit` et `Client.evaluate` :" -#: ../../source/tutorial-quickstart-ios.rst:15 -#, fuzzy -msgid "" -"Our example consists of one Python *server* and two iPhone *clients* that" -" all have the same model." -msgstr "" -"Notre exemple consiste en un *serveur* et deux *clients* ayant tous le " -"même modèle." +#: ../../source/ref-changelog.md:1020 +msgid "v0.13.0 (2021-01-08)" +msgstr "v0.13.0 (2021-01-08)" -#: ../../source/tutorial-quickstart-ios.rst:17 -#, fuzzy +#: ../../source/ref-changelog.md:1024 msgid "" -"*Clients* are responsible for generating individual weight updates for " -"the model based on their local datasets. These updates are then sent to " -"the *server* which will aggregate them to produce a better model. " -"Finally, the *server* sends this improved version of the model back to " -"each *client*. A complete cycle of weight updates is called a *round*." +"New example: PyTorch From Centralized To Federated " +"([#549](https://github.com/adap/flower/pull/549))" msgstr "" -"*Les clients* sont chargés de générer des mises à jour de poids " -"individuelles pour le modèle en fonction de leurs ensembles de données " -"locales. Ces mises à jour sont ensuite envoyées au *serveur* qui les " -"agrège pour produire un meilleur modèle. Enfin, le *serveur* renvoie " -"cette version améliorée du modèle à chaque *client*. Un cycle complet de " -"mises à jour de poids s'appelle un *round*." 
+"Nouvel exemple : PyTorch de centralisé à fédéré " +"([#549](https://github.com/adap/flower/pull/549))" -#: ../../source/tutorial-quickstart-ios.rst:21 -#, fuzzy -msgid "" -"Now that we have a rough idea of what is going on, let's get started to " -"setup our Flower server environment. We first need to install Flower. You" -" can do this by using pip:" -msgstr "" -"Maintenant que nous avons une idée générale de ce qui se passe, " -"commençons. Nous devons d'abord installer Flower. Tu peux le faire en " -"exécutant :" +#: ../../source/ref-changelog.md:1025 +msgid "Improved documentation" +msgstr "Amélioration de la documentation" -#: ../../source/tutorial-quickstart-ios.rst:27 -msgid "Or Poetry:" +#: ../../source/ref-changelog.md:1026 +msgid "New documentation theme ([#551](https://github.com/adap/flower/pull/551))" msgstr "" +"Nouveau thème de documentation " +"([#551](https://github.com/adap/flower/pull/551))" -#: ../../source/tutorial-quickstart-ios.rst:34 -#: ../../source/tutorial-quickstart-mxnet.rst:36 -#: ../../source/tutorial-quickstart-pytorch.rst:37 -#: ../../source/tutorial-quickstart-scikitlearn.rst:40 -#: ../../source/tutorial-quickstart-tensorflow.rst:29 -#: ../../source/tutorial-quickstart-xgboost.rst:55 -msgid "Flower Client" -msgstr "Client de la fleur" +#: ../../source/ref-changelog.md:1027 +msgid "New API reference ([#554](https://github.com/adap/flower/pull/554))" +msgstr "Nouvelle référence API ([#554](https://github.com/adap/flower/pull/554))" -#: ../../source/tutorial-quickstart-ios.rst:36 +#: ../../source/ref-changelog.md:1028 msgid "" -"Now that we have all our dependencies installed, let's run a simple " -"distributed training using CoreML as our local training pipeline and " -"MNIST as our dataset. For simplicity reasons we will use the complete " -"Flower client with CoreML, that has been implemented and stored inside " -"the Swift SDK. The client implementation can be seen below:" +"Updated examples documentation " +"([#549](https://github.com/adap/flower/pull/549))" msgstr "" +"Mise à jour de la documentation des exemples " +"([#549](https://github.com/adap/flower/pull/549))" -#: ../../source/tutorial-quickstart-ios.rst:72 +#: ../../source/ref-changelog.md:1029 msgid "" -"Let's create a new application project in Xcode and add :code:`flwr` as a" -" dependency in your project. For our application, we will store the logic" -" of our app in :code:`FLiOSModel.swift` and the UI elements in " -":code:`ContentView.swift`. We will focus more on :code:`FLiOSModel.swift`" -" in this quickstart. Please refer to the `full code example " -"`_ to learn more " -"about the app." +"Removed obsolete documentation " +"([#548](https://github.com/adap/flower/pull/548))" msgstr "" +"Suppression de la documentation obsolète " +"([#548](https://github.com/adap/flower/pull/548))" -#: ../../source/tutorial-quickstart-ios.rst:75 -msgid "Import Flower and CoreML related packages in :code:`FLiOSModel.swift`:" -msgstr "" +#: ../../source/ref-changelog.md:1031 +msgid "Bugfix:" +msgstr "Correction de bogues :" -#: ../../source/tutorial-quickstart-ios.rst:83 +#: ../../source/ref-changelog.md:1033 msgid "" -"Then add the mlmodel to the project simply by drag-and-drop, the mlmodel " -"will be bundled inside the application during deployment to your iOS " -"device. We need to pass the url to access mlmodel and run CoreML machine " -"learning processes, it can be retrieved by calling the function " -":code:`Bundle.main.url`. 
For the MNIST dataset, we need to preprocess it " -"into :code:`MLBatchProvider` object. The preprocessing is done inside " -":code:`DataLoader.swift`." +"`Server.fit` does not disconnect clients when finished, disconnecting the" +" clients is now handled in `flwr.server.start_server` " +"([#553](https://github.com/adap/flower/pull/553) " +"[#540](https://github.com/adap/flower/issues/540))." msgstr "" +"`Server.fit` ne déconnecte pas les clients lorsqu'il est terminé, la " +"déconnexion des clients est maintenant gérée dans " +"`flwr.server.start_server` " +"([#553](https://github.com/adap/flower/pull/553) " +"[#540](https://github.com/adap/flower/issues/540))." -#: ../../source/tutorial-quickstart-ios.rst:99 +#: ../../source/ref-changelog.md:1035 +msgid "v0.12.0 (2020-12-07)" +msgstr "v0.12.0 (2020-12-07)" + +#: ../../source/ref-changelog.md:1037 ../../source/ref-changelog.md:1053 +msgid "Important changes:" +msgstr "Changements importants :" + +#: ../../source/ref-changelog.md:1039 msgid "" -"Since CoreML does not allow the model parameters to be seen before " -"training, and accessing the model parameters during or after the training" -" can only be done by specifying the layer name, we need to know this " -"information beforehand, through looking at the model specification, which" -" are written as proto files. The implementation can be seen in " -":code:`MLModelInspect`." +"Added an example for embedded devices " +"([#507](https://github.com/adap/flower/pull/507))" msgstr "" +"Ajout d'un exemple pour les périphériques embarqués " +"([#507](https://github.com/adap/flower/pull/507))" -#: ../../source/tutorial-quickstart-ios.rst:102 +#: ../../source/ref-changelog.md:1040 msgid "" -"After we have all of the necessary information, let's create our Flower " -"client." +"Added a new NumPyClient (in addition to the existing KerasClient) " +"([#504](https://github.com/adap/flower/pull/504) " +"[#508](https://github.com/adap/flower/pull/508))" msgstr "" +"Ajout d'un nouveau NumPyClient (en plus du KerasClient existant) " +"([#504](https://github.com/adap/flower/pull/504) " +"[#508](https://github.com/adap/flower/pull/508))" -#: ../../source/tutorial-quickstart-ios.rst:117 +#: ../../source/ref-changelog.md:1041 msgid "" -"Then start the Flower gRPC client and start communicating to the server " -"by passing our Flower client to the function :code:`startFlwrGRPC`." +"Deprecated `flwr_example` package and started to migrate examples into " +"the top-level `examples` directory " +"([#494](https://github.com/adap/flower/pull/494) " +"[#512](https://github.com/adap/flower/pull/512))" msgstr "" +"Déclassement du paquet `flwr_example` et migration des exemples dans le " +"répertoire de premier niveau `examples` " +"([#494](https://github.com/adap/flower/pull/494) " +"[#512](https://github.com/adap/flower/pull/512))" -#: ../../source/tutorial-quickstart-ios.rst:124 +#: ../../source/ref-changelog.md:1043 +msgid "v0.11.0 (2020-11-30)" +msgstr "v0.11.0 (2020-11-30)" + +#: ../../source/ref-changelog.md:1045 +msgid "Incompatible changes:" +msgstr "Changements incompatibles :" + +#: ../../source/ref-changelog.md:1047 msgid "" -"That's it for the client. We only have to implement :code:`Client` or " -"call the provided :code:`MLFlwrClient` and call :code:`startFlwrGRPC()`. " -"The attribute :code:`hostname` and :code:`port` tells the client which " -"server to connect to. 
This can be done by entering the hostname and port " -"in the application before clicking the start button to start the " -"federated learning process." +"Renamed strategy methods " +"([#486](https://github.com/adap/flower/pull/486)) to unify the naming of " +"Flower's public APIs. Other public methods/functions (e.g., every method " +"in `Client`, but also `Strategy.evaluate`) do not use the `on_` prefix, " +"which is why we're removing it from the four methods in Strategy. To " +"migrate rename the following `Strategy` methods accordingly:" msgstr "" +"Renommé les méthodes de stratégie " +"([#486](https://github.com/adap/flower/pull/486)) pour unifier le nommage" +" des API publiques de Flower. D'autres méthodes/fonctions publiques (par " +"exemple, toutes les méthodes de `Client`, mais aussi `Strategy.evaluate`)" +" n'utilisent pas le préfixe `on_`, c'est pourquoi nous le supprimons des " +"quatre méthodes de Stratégie. Pour migrer, renommez les méthodes de " +"`Strategy` suivantes en conséquence :" -#: ../../source/tutorial-quickstart-ios.rst:129 -#: ../../source/tutorial-quickstart-mxnet.rst:226 -#: ../../source/tutorial-quickstart-pytorch.rst:203 -#: ../../source/tutorial-quickstart-scikitlearn.rst:157 -#: ../../source/tutorial-quickstart-tensorflow.rst:98 -#: ../../source/tutorial-quickstart-xgboost.rst:309 -msgid "Flower Server" -msgstr "Serveur de Flower" +#: ../../source/ref-changelog.md:1048 +msgid "`on_configure_evaluate` => `configure_evaluate`" +msgstr "`on_configure_evaluate` => `configure_evaluate`" -#: ../../source/tutorial-quickstart-ios.rst:131 -#: ../../source/tutorial-quickstart-mxnet.rst:228 -#: ../../source/tutorial-quickstart-pytorch.rst:205 -#: ../../source/tutorial-quickstart-tensorflow.rst:100 +#: ../../source/ref-changelog.md:1049 +msgid "`on_aggregate_evaluate` => `aggregate_evaluate`" +msgstr "`on_aggregate_evaluate` => `aggregate_evaluate`" + +#: ../../source/ref-changelog.md:1050 +msgid "`on_configure_fit` => `configure_fit`" +msgstr "`on_configure_fit` => `configure_fit`" + +#: ../../source/ref-changelog.md:1051 +msgid "`on_aggregate_fit` => `aggregate_fit`" +msgstr "`on_aggregate_fit` => `aggregate_fit`" + +#: ../../source/ref-changelog.md:1055 msgid "" -"For simple workloads we can start a Flower server and leave all the " -"configuration possibilities at their default values. In a file named " -":code:`server.py`, import Flower and start the server:" +"Deprecated `DefaultStrategy` " +"([#479](https://github.com/adap/flower/pull/479)). To migrate use " +"`FedAvg` instead." msgstr "" -"Pour les charges de travail simples, nous pouvons démarrer un serveur " -"Flower et laisser toutes les possibilités de configuration à leurs " -"valeurs par défaut. Dans un fichier nommé :code:`server.py`, importe " -"Flower et démarre le serveur :" - -#: ../../source/tutorial-quickstart-ios.rst:142 -#: ../../source/tutorial-quickstart-mxnet.rst:239 -#: ../../source/tutorial-quickstart-pytorch.rst:216 -#: ../../source/tutorial-quickstart-scikitlearn.rst:215 -#: ../../source/tutorial-quickstart-tensorflow.rst:112 -msgid "Train the model, federated!" -msgstr "Entraîne le modèle, fédéré !" +"Déclassé `DefaultStrategy` " +"([#479](https://github.com/adap/flower/pull/479)). Pour migrer, utilisez " +"`FedAvg` à la place." 
-#: ../../source/tutorial-quickstart-ios.rst:144 -#: ../../source/tutorial-quickstart-pytorch.rst:218 -#: ../../source/tutorial-quickstart-tensorflow.rst:114 -#: ../../source/tutorial-quickstart-xgboost.rst:525 +#: ../../source/ref-changelog.md:1056 msgid "" -"With both client and server ready, we can now run everything and see " -"federated learning in action. FL systems usually have a server and " -"multiple clients. We therefore have to start the server first:" +"Simplified examples and baselines " +"([#484](https://github.com/adap/flower/pull/484))." msgstr "" -"Le client et le serveur étant prêts, nous pouvons maintenant tout " -"exécuter et voir l'apprentissage fédéré en action. Les systèmes FL ont " -"généralement un serveur et plusieurs clients. Nous devons donc commencer " -"par démarrer le serveur :" +"Exemples simplifiés et lignes de base " +"([#484](https://github.com/adap/flower/pull/484))." -#: ../../source/tutorial-quickstart-ios.rst:152 +#: ../../source/ref-changelog.md:1057 msgid "" -"Once the server is running we can start the clients in different " -"terminals. Build and run the client through your Xcode, one through Xcode" -" Simulator and the other by deploying it to your iPhone. To see more " -"about how to deploy your app to iPhone or Simulator visit `here " -"`_." +"Removed presently unused `on_conclude_round` from strategy interface " +"([#483](https://github.com/adap/flower/pull/483))." msgstr "" +"Suppression de `on_conclude_round` actuellement inutilisé de l'interface " +"de stratégie ([#483](https://github.com/adap/flower/pull/483))." -#: ../../source/tutorial-quickstart-ios.rst:156 -#, fuzzy +#: ../../source/ref-changelog.md:1058 msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system in your ios device. The full `source code " -"`_ for this " -"example can be found in :code:`examples/ios`." +"Set minimal Python version to 3.6.1 instead of 3.6.9 " +"([#471](https://github.com/adap/flower/pull/471))." msgstr "" -"Félicitations ! Tu as réussi à construire et à faire fonctionner ton " -"premier système d'apprentissage fédéré. Le code source complet " -"`_ de cet exemple se trouve dans :code:`examples" -"/quickstart-mxnet`." +"Fixe la version minimale de Python à 3.6.1 au lieu de 3.6.9 " +"([#471](https://github.com/adap/flower/pull/471))." -#: ../../source/tutorial-quickstart-jax.rst:-1 +#: ../../source/ref-changelog.md:1059 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with Jax to train a linear regression model on a scikit-learn dataset." +"Improved `Strategy` docstrings " +"([#470](https://github.com/adap/flower/pull/470))." msgstr "" +"Amélioration des docstrings `Stratégie` " +"([#470](https://github.com/adap/flower/pull/470))." -#: ../../source/tutorial-quickstart-jax.rst:5 -msgid "Quickstart JAX" -msgstr "Démarrage rapide de JAX" +#: ../../source/ref-example-projects.rst:2 +#, fuzzy +msgid "Example projects" +msgstr "Exemples de PyTorch" -#: ../../source/tutorial-quickstart-mxnet.rst:-1 +#: ../../source/ref-example-projects.rst:4 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with MXNet to train a Sequential model on MNIST." +"Flower comes with a number of usage examples. The examples demonstrate " +"how Flower can be used to federate different kinds of existing machine " +"learning pipelines, usually leveraging popular machine learning " +"frameworks such as `PyTorch `_ or `TensorFlow " +"`_." 
msgstr "" +"Flower est livré avec un certain nombre d'exemples d'utilisation, qui " +"montrent comment Flower peut être utilisé pour fédérer différents types " +"de pipelines d'apprentissage automatique existants, qui s'appuient " +"généralement sur des frameworks d'apprentissage automatique populaires " +"tels que `PyTorch `_ ou `TensorFlow " +"`_." -#: ../../source/tutorial-quickstart-mxnet.rst:5 -msgid "Quickstart MXNet" -msgstr "Démarrage rapide de MXNet" - -#: ../../source/tutorial-quickstart-mxnet.rst:7 +#: ../../source/ref-example-projects.rst:10 +#, fuzzy msgid "" -"MXNet is no longer maintained and has been moved into `Attic " -"`_. As a result, we would " -"encourage you to use other ML frameworks alongside Flower, for example, " -"PyTorch. This tutorial might be removed in future versions of Flower." -msgstr "" +"The following examples are available as standalone projects. Quickstart " +"TensorFlow/Keras ---------------------------" +msgstr "Les exemples suivants sont disponibles sous forme de projets autonomes." -#: ../../source/tutorial-quickstart-mxnet.rst:12 +#: ../../source/ref-example-projects.rst:14 msgid "" -"In this tutorial, we will learn how to train a :code:`Sequential` model " -"on MNIST using Flower and MXNet." +"The TensorFlow/Keras quickstart example shows CIFAR-10 image " +"classification with MobileNetV2:" msgstr "" -"Dans ce tutoriel, nous allons apprendre à former un modèle " -":code:`Sequential` sur MNIST à l'aide de Flower et de MXNet." +"L'exemple de démarrage rapide TensorFlow/Keras montre la classification " +"d'images CIFAR-10 avec MobileNetV2 :" -#: ../../source/tutorial-quickstart-mxnet.rst:14 -#: ../../source/tutorial-quickstart-scikitlearn.rst:12 +#: ../../source/ref-example-projects.rst:17 #, fuzzy msgid "" -"It is recommended to create a virtual environment and run everything " -"within this :doc:`virtualenv `." -msgstr "" -"Il est recommandé de créer un environnement virtuel et de tout exécuter " -"dans ce `virtualenv `_." - -#: ../../source/tutorial-quickstart-mxnet.rst:16 -#: ../../source/tutorial-quickstart-pytorch.rst:17 -#: ../../source/tutorial-quickstart-scikitlearn.rst:14 -msgid "" -"Our example consists of one *server* and two *clients* all having the " -"same model." +"`Quickstart TensorFlow (Code) " +"`_" msgstr "" -"Notre exemple consiste en un *serveur* et deux *clients* ayant tous le " -"même modèle." +"`Quickstart TensorFlow (Code) " +"`_" -#: ../../source/tutorial-quickstart-mxnet.rst:18 -#: ../../source/tutorial-quickstart-scikitlearn.rst:16 -msgid "" -"*Clients* are responsible for generating individual model parameter " -"updates for the model based on their local datasets. These updates are " -"then sent to the *server* which will aggregate them to produce an updated" -" global model. Finally, the *server* sends this improved version of the " -"model back to each *client*. A complete cycle of parameters updates is " -"called a *round*." +#: ../../source/ref-example-projects.rst:18 +#, fuzzy +msgid ":doc:`Quickstart TensorFlow (Tutorial) `" msgstr "" -"*Les clients* sont chargés de générer des mises à jour individuelles des " -"paramètres du modèle en fonction de leurs ensembles de données locales. " -"Ces mises à jour sont ensuite envoyées au *serveur* qui les agrège pour " -"produire un modèle global mis à jour. Enfin, le *serveur* renvoie cette " -"version améliorée du modèle à chaque *client*. Un cycle complet de mises " -"à jour des paramètres s'appelle un *round*." 
+"`Quickstart TensorFlow (Tutorial) `_" -#: ../../source/tutorial-quickstart-mxnet.rst:22 -#: ../../source/tutorial-quickstart-scikitlearn.rst:20 +#: ../../source/ref-example-projects.rst:19 msgid "" -"Now that we have a rough idea of what is going on, let's get started. We " -"first need to install Flower. You can do this by running:" +"`Quickstart TensorFlow (Blog Post) `_" msgstr "" -"Maintenant que nous avons une idée approximative de ce qui se passe, " -"commençons. Nous devons d'abord installer Flower. Tu peux le faire en " -"lançant :" +"`Quickstart TensorFlow (Blog Post) `_" -#: ../../source/tutorial-quickstart-mxnet.rst:28 -msgid "Since we want to use MXNet, let's go ahead and install it:" -msgstr "Puisque nous voulons utiliser MXNet, allons-y et installons-le :" +#: ../../source/ref-example-projects.rst:23 +#: ../../source/tutorial-quickstart-pytorch.rst:5 +msgid "Quickstart PyTorch" +msgstr "Démarrage rapide de PyTorch" -#: ../../source/tutorial-quickstart-mxnet.rst:38 +#: ../../source/ref-example-projects.rst:25 msgid "" -"Now that we have all our dependencies installed, let's run a simple " -"distributed training with two clients and one server. Our training " -"procedure and network architecture are based on MXNet´s `Hand-written " -"Digit Recognition tutorial " -"`_." +"The PyTorch quickstart example shows CIFAR-10 image classification with a" +" simple Convolutional Neural Network:" msgstr "" -"Maintenant que toutes nos dépendances sont installées, lançons une " -"formation distribuée simple avec deux clients et un serveur. Notre " -"procédure de formation et l'architecture du réseau sont basées sur le " -"tutoriel de reconnaissance de chiffres écrits à la main du MXNet " -"`_." +"L'exemple de démarrage rapide PyTorch montre la classification d'images " +"CIFAR-10 avec un simple réseau neuronal convolutif :" -#: ../../source/tutorial-quickstart-mxnet.rst:40 +#: ../../source/ref-example-projects.rst:28 +#, fuzzy msgid "" -"In a file called :code:`client.py`, import Flower and MXNet related " -"packages:" +"`Quickstart PyTorch (Code) " +"`_" msgstr "" -"Dans un fichier appelé :code:`client.py`, importe Flower et les paquets " -"liés au MXNet :" - -#: ../../source/tutorial-quickstart-mxnet.rst:55 -msgid "In addition, define the device allocation in MXNet with:" -msgstr "En outre, définis l'attribution de l'appareil dans MXNet avec :" +"`Quickstart PyTorch (Code) " +"`_" -#: ../../source/tutorial-quickstart-mxnet.rst:61 -msgid "" -"We use MXNet to load MNIST, a popular image classification dataset of " -"handwritten digits for machine learning. The MXNet utility " -":code:`mx.test_utils.get_mnist()` downloads the training and test data." +#: ../../source/ref-example-projects.rst:29 +#, fuzzy +msgid ":doc:`Quickstart PyTorch (Tutorial) `" msgstr "" -"Nous utilisons MXNet pour charger MNIST, un ensemble de données de " -"classification d'images populaire de chiffres manuscrits pour " -"l'apprentissage automatique. L'utilitaire MXNet " -":code:`mx.test_utils.get_mnist()` télécharge les données d'entraînement " -"et de test." +"`Quickstart PyTorch (Tutorial) `_" + +#: ../../source/ref-example-projects.rst:33 +msgid "PyTorch: From Centralized To Federated" +msgstr "PyTorch : De la centralisation à la fédération" -#: ../../source/tutorial-quickstart-mxnet.rst:75 +#: ../../source/ref-example-projects.rst:35 msgid "" -"Define the training and loss with MXNet. We train the model by looping " -"over the dataset, measure the corresponding loss, and optimize it." 
+"This example shows how a regular PyTorch project can be federated using " +"Flower:" msgstr "" -"Définis l'entraînement et la perte avec MXNet. Nous entraînons le modèle " -"en parcourant en boucle l'ensemble des données, nous mesurons la perte " -"correspondante et nous l'optimisons." +"Cet exemple montre comment un projet PyTorch ordinaire peut être fédéré à" +" l'aide de Flower :" -#: ../../source/tutorial-quickstart-mxnet.rst:113 +#: ../../source/ref-example-projects.rst:37 +#, fuzzy msgid "" -"Next, we define the validation of our machine learning model. We loop " -"over the test set and measure both loss and accuracy on the test set." +"`PyTorch: From Centralized To Federated (Code) " +"`_" msgstr "" -"Ensuite, nous définissons la validation de notre modèle d'apprentissage " -"automatique. Nous effectuons une boucle sur l'ensemble de test et " -"mesurons à la fois la perte et la précision sur l'ensemble de test." +"`PyTorch : De la centralisation à la fédération (Code) " +"`_" -#: ../../source/tutorial-quickstart-mxnet.rst:137 +#: ../../source/ref-example-projects.rst:38 +#, fuzzy msgid "" -"After defining the training and testing of a MXNet machine learning " -"model, we use these functions to implement a Flower client." +":doc:`PyTorch: From Centralized To Federated (Tutorial) `" msgstr "" -"Après avoir défini la formation et le test d'un modèle d'apprentissage " -"automatique MXNet, nous utilisons ces fonctions pour mettre en œuvre un " -"client Flower." +"`PyTorch : De la centralisation à la fédération (Tutoriel) " +"`_" -#: ../../source/tutorial-quickstart-mxnet.rst:139 -msgid "Our Flower clients will use a simple :code:`Sequential` model:" -msgstr "Nos clients Flower utiliseront un modèle simple :code:`Sequential` :" +#: ../../source/ref-example-projects.rst:42 +msgid "Federated Learning on Raspberry Pi and Nvidia Jetson" +msgstr "Apprentissage fédéré sur Raspberry Pi et Nvidia Jetson" -#: ../../source/tutorial-quickstart-mxnet.rst:158 +#: ../../source/ref-example-projects.rst:44 msgid "" -"After loading the dataset with :code:`load_data()` we perform one forward" -" propagation to initialize the model and model parameters with " -":code:`model(init)`. Next, we implement a Flower client." +"This example shows how Flower can be used to build a federated learning " +"system that run across Raspberry Pi and Nvidia Jetson:" msgstr "" -"Après avoir chargé l'ensemble de données avec :code:`load_data()`, nous " -"effectuons une propagation vers l'avant pour initialiser le modèle et les" -" paramètres du modèle avec :code:`model(init)`. Ensuite, nous " -"implémentons un client Flower." +"Cet exemple montre comment Flower peut être utilisé pour construire un " +"système d'apprentissage fédéré qui fonctionne sur Raspberry Pi et Nvidia " +"Jetson :" -#: ../../source/tutorial-quickstart-mxnet.rst:160 -#: ../../source/tutorial-quickstart-pytorch.rst:144 -#: ../../source/tutorial-quickstart-tensorflow.rst:54 +#: ../../source/ref-example-projects.rst:46 +#, fuzzy msgid "" -"The Flower server interacts with clients through an interface called " -":code:`Client`. When the server selects a particular client for training," -" it sends training instructions over the network. The client receives " -"those instructions and calls one of the :code:`Client` methods to run " -"your code (i.e., to train the neural network we defined earlier)." 
+"`Federated Learning on Raspberry Pi and Nvidia Jetson (Code) " +"`_" msgstr "" -"Le serveur Flower interagit avec les clients par le biais d'une interface" -" appelée :code:`Client`. Lorsque le serveur sélectionne un client " -"particulier pour la formation, il envoie des instructions de formation " -"sur le réseau. Le client reçoit ces instructions et appelle l'une des " -"méthodes :code:`Client` pour exécuter ton code (c'est-à-dire pour former " -"le réseau neuronal que nous avons défini plus tôt)." +"`L'apprentissage fédéré sur Raspberry Pi et Nvidia Jetson (Code) " +"`_" -#: ../../source/tutorial-quickstart-mxnet.rst:166 +#: ../../source/ref-example-projects.rst:47 msgid "" -"Flower provides a convenience class called :code:`NumPyClient` which " -"makes it easier to implement the :code:`Client` interface when your " -"workload uses MXNet. Implementing :code:`NumPyClient` usually means " -"defining the following methods (:code:`set_parameters` is optional " -"though):" +"`Federated Learning on Raspberry Pi and Nvidia Jetson (Blog Post) " +"`_" msgstr "" -"Flower fournit une classe de commodité appelée :code:`NumPyClient` qui " -"facilite l'implémentation de l'interface :code:`Client` lorsque ta charge" -" de travail utilise MXNet. L'implémentation de :code:`NumPyClient` " -"signifie généralement la définition des méthodes suivantes " -"(:code:`set_parameters` est cependant facultatif) :" - -#: ../../source/tutorial-quickstart-mxnet.rst:172 -#: ../../source/tutorial-quickstart-pytorch.rst:156 -#: ../../source/tutorial-quickstart-scikitlearn.rst:109 -msgid "return the model weight as a list of NumPy ndarrays" -msgstr "renvoie le poids du modèle sous la forme d'une liste de ndarrays NumPy" - -#: ../../source/tutorial-quickstart-mxnet.rst:173 -#: ../../source/tutorial-quickstart-pytorch.rst:157 -#: ../../source/tutorial-quickstart-scikitlearn.rst:111 -msgid ":code:`set_parameters` (optional)" -msgstr ":code:`set_parameters` (optionnel)" +"`L'apprentissage fédéré sur Raspberry Pi et Nvidia Jetson (Blog Post) " +"`_" -#: ../../source/tutorial-quickstart-mxnet.rst:174 -#: ../../source/tutorial-quickstart-pytorch.rst:158 -#: ../../source/tutorial-quickstart-scikitlearn.rst:111 +#: ../../source/ref-faq.rst:4 msgid "" -"update the local model weights with the parameters received from the " -"server" +"This page collects answers to commonly asked questions about Federated " +"Learning with Flower." 
msgstr "" -"mettre à jour les poids du modèle local avec les paramètres reçus du " -"serveur" - -#: ../../source/tutorial-quickstart-mxnet.rst:176 -#: ../../source/tutorial-quickstart-pytorch.rst:160 -#: ../../source/tutorial-quickstart-scikitlearn.rst:114 -msgid "set the local model weights" -msgstr "fixe les poids du modèle local" - -#: ../../source/tutorial-quickstart-mxnet.rst:177 -#: ../../source/tutorial-quickstart-pytorch.rst:161 -#: ../../source/tutorial-quickstart-scikitlearn.rst:115 -msgid "train the local model" -msgstr "entraîne le modèle local" - -#: ../../source/tutorial-quickstart-mxnet.rst:178 -#: ../../source/tutorial-quickstart-pytorch.rst:162 -#: ../../source/tutorial-quickstart-scikitlearn.rst:116 -msgid "receive the updated local model weights" -msgstr "recevoir les poids du modèle local mis à jour" - -#: ../../source/tutorial-quickstart-mxnet.rst:180 -#: ../../source/tutorial-quickstart-pytorch.rst:164 -#: ../../source/tutorial-quickstart-scikitlearn.rst:118 -msgid "test the local model" -msgstr "teste le modèle local" +"Cette page rassemble les réponses aux questions les plus fréquemment " +"posées sur l'apprentissage fédéré avec Flower." -#: ../../source/tutorial-quickstart-mxnet.rst:182 -msgid "They can be implemented in the following way:" -msgstr "Ils peuvent être mis en œuvre de la manière suivante :" +#: ../../source/ref-faq.rst +#, fuzzy +msgid ":fa:`eye,mr-1` Can Flower run on Jupyter Notebooks / Google Colab?" +msgstr "" +":fa:`eye,mr-1` Flower peut-il fonctionner sur les ordinateurs portables " +"Juptyter / Google Colab ?" -#: ../../source/tutorial-quickstart-mxnet.rst:212 +#: ../../source/ref-faq.rst:8 msgid "" -"We can now create an instance of our class :code:`MNISTClient` and add " -"one line to actually run this client:" +"Yes, it can! Flower even comes with a few under-the-hood optimizations to" +" make it work even better on Colab. Here's a quickstart example:" msgstr "" -"Nous pouvons maintenant créer une instance de notre classe " -":code:`MNISTClient` et ajouter une ligne pour exécuter ce client :" +"Oui, c'est possible ! Flower est même livré avec quelques optimisations " +"pour qu'il fonctionne encore mieux sur Colab. Voici un exemple de " +"démarrage rapide :" -#: ../../source/tutorial-quickstart-mxnet.rst:219 +#: ../../source/ref-faq.rst:10 #, fuzzy msgid "" -"That's it for the client. We only have to implement :code:`Client` or " -":code:`NumPyClient` and call :code:`fl.client.start_client()` or " -":code:`fl.client.start_numpy_client()`. The string " -":code:`\"0.0.0.0:8080\"` tells the client which server to connect to. In " -"our case we can run the server and the client on the same machine, " -"therefore we use :code:`\"0.0.0.0:8080\"`. If we run a truly federated " -"workload with the server and clients running on different machines, all " -"that needs to change is the :code:`server_address` we pass to the client." +"`Flower simulation PyTorch " +"`_" msgstr "" -"C'est tout pour le client. Il nous suffit d'implémenter :code:`Client` ou" -" :code:`NumPyClient` et d'appeler :code:`fl.client.start_client()`. La " -"chaîne :code:`\"0.0.0:8080\"` indique au client à quel serveur se " -"connecter. Dans notre cas, nous pouvons exécuter le serveur et le client " -"sur la même machine, c'est pourquoi nous utilisons " -":code:`\"0.0.0:8080\"`. 
Si nous exécutons une charge de travail " -"véritablement fédérée avec le serveur et les clients s'exécutant sur des " -"machines différentes, tout ce qui doit changer est :code:`server_address`" -" que nous transmettons au client." +"`Flower Quickstart (TensorFlow/Keras) " +"`_" -#: ../../source/tutorial-quickstart-mxnet.rst:241 +#: ../../source/ref-faq.rst:11 +#, fuzzy msgid "" -"With both client and server ready, we can now run everything and see " -"federated learning in action. Federated learning systems usually have a " -"server and multiple clients. We therefore have to start the server first:" +"`Flower simulation TensorFlow/Keras " +"`_" msgstr "" -"Le client et le serveur étant prêts, nous pouvons maintenant tout " -"exécuter et voir l'apprentissage fédéré en action. Les systèmes " -"d'apprentissage fédéré ont généralement un serveur et plusieurs clients. " -"Nous devons donc commencer par démarrer le serveur :" +"`Flower Quickstart (TensorFlow/Keras) " +"`_" -#: ../../source/tutorial-quickstart-mxnet.rst:249 -#: ../../source/tutorial-quickstart-pytorch.rst:226 -#: ../../source/tutorial-quickstart-scikitlearn.rst:224 -#: ../../source/tutorial-quickstart-tensorflow.rst:122 -#: ../../source/tutorial-quickstart-xgboost.rst:533 -msgid "" -"Once the server is running we can start the clients in different " -"terminals. Open a new terminal and start the first client:" +#: ../../source/ref-faq.rst +msgid ":fa:`eye,mr-1` How can I run Federated Learning on a Raspberry Pi?" msgstr "" -"Une fois que le serveur fonctionne, nous pouvons démarrer les clients " -"dans différents terminaux. Ouvre un nouveau terminal et démarre le " -"premier client :" - -#: ../../source/tutorial-quickstart-mxnet.rst:256 -#: ../../source/tutorial-quickstart-pytorch.rst:233 -#: ../../source/tutorial-quickstart-scikitlearn.rst:231 -#: ../../source/tutorial-quickstart-tensorflow.rst:129 -#: ../../source/tutorial-quickstart-xgboost.rst:540 -msgid "Open another terminal and start the second client:" -msgstr "Ouvre un autre terminal et démarre le deuxième client :" +":fa:`eye,mr-1` Comment puis-je faire fonctionner l'apprentissage fédéré " +"sur un Raspberry Pi ?" -#: ../../source/tutorial-quickstart-mxnet.rst:262 -#: ../../source/tutorial-quickstart-pytorch.rst:239 -#: ../../source/tutorial-quickstart-scikitlearn.rst:237 -#: ../../source/tutorial-quickstart-xgboost.rst:546 +#: ../../source/ref-faq.rst:15 +#, fuzzy msgid "" -"Each client will have its own dataset. You should now see how the " -"training does in the very first terminal (the one that started the " -"server):" +"Find the `blog post about federated learning on embedded device here " +"`_" +" and the corresponding `GitHub code example " +"`_." msgstr "" -"Chaque client aura son propre ensemble de données. Tu devrais maintenant " -"voir comment la formation se déroule dans le tout premier terminal (celui" -" qui a démarré le serveur) :" +"Trouve le `blog post about federated learning on embedded device ici " +"`_" +" et l'exemple de code GitHub correspondant " +"`_." + +#: ../../source/ref-faq.rst +msgid ":fa:`eye,mr-1` Does Flower support federated learning on Android devices?" +msgstr "" +":fa:`eye,mr-1` Est-ce que Flower prend en charge l'apprentissage fédéré " +"sur les appareils Android ?" -#: ../../source/tutorial-quickstart-mxnet.rst:294 +#: ../../source/ref-faq.rst:19 #, fuzzy msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. 
The full `source code " -"`_ for this example can be found in :code:`examples" -"/quickstart-mxnet`." +"Yes, it does. Please take a look at our `blog post " +"`_ or check out the code examples:" msgstr "" -"Félicitations ! Tu as réussi à construire et à faire fonctionner ton " -"premier système d'apprentissage fédéré. Le code source complet " -"`_ de cet exemple se trouve dans :code:`examples" -"/quickstart-mxnet`." +"Oui. Jetez un coup d'œil à notre `blog post " +"`_ ou consultez l'`exemple de code Android sur GitHub " +"`_." -#: ../../source/tutorial-quickstart-pandas.rst:-1 +#: ../../source/ref-faq.rst:21 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with Pandas to perform Federated Analytics." +"`Android Kotlin example `_" msgstr "" -#: ../../source/tutorial-quickstart-pandas.rst:5 -msgid "Quickstart Pandas" -msgstr "Démarrage rapide des Pandas" +#: ../../source/ref-faq.rst:22 +msgid "`Android Java example `_" +msgstr "" -#: ../../source/tutorial-quickstart-pandas.rst:10 -msgid "Let's build a federated analytics system using Pandas and Flower!" -msgstr "Construisons un système d'analyse fédéré à l'aide de Pandas et de Flower !" +#: ../../source/ref-faq.rst +msgid ":fa:`eye,mr-1` Can I combine federated learning with blockchain?" +msgstr "" +":fa:`eye,mr-1` Puis-je combiner l'apprentissage fédéré avec la blockchain" +" ?" -#: ../../source/tutorial-quickstart-pandas.rst:12 -#, fuzzy +#: ../../source/ref-faq.rst:26 msgid "" -"Please refer to the `full code example " -"`_ " -"to learn more." +"Yes, of course. A list of available examples using Flower within a " +"blockchain environment is available here:" msgstr "" -"Réfère-toi à l'exemple de code complet " -"`_ " -"pour en savoir plus." +"Oui, bien sûr, une liste d'exemples disponibles utilisant Flower dans un " +"environnement blockchain est disponible ici :" -#: ../../source/tutorial-quickstart-pytorch.rst:-1 +#: ../../source/ref-faq.rst:28 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with PyTorch to train a CNN model on MNIST." +"`Flower meets Nevermined GitHub Repository `_." msgstr "" +"`Flower meets Nevermined GitHub Repository `_." -#: ../../source/tutorial-quickstart-pytorch.rst:13 +#: ../../source/ref-faq.rst:29 msgid "" -"In this tutorial we will learn how to train a Convolutional Neural " -"Network on CIFAR10 using Flower and PyTorch." +"`Flower meets Nevermined YouTube video " +"`_." msgstr "" -"Dans ce tutoriel, nous allons apprendre à entraîner un réseau neuronal " -"convolutif sur CIFAR10 à l'aide de Flower et PyTorch." +"`Flower rencontre Nevermined vidéo YouTube " +"`_." -#: ../../source/tutorial-quickstart-pytorch.rst:15 -#: ../../source/tutorial-quickstart-xgboost.rst:39 +#: ../../source/ref-faq.rst:30 #, fuzzy msgid "" -"First of all, it is recommended to create a virtual environment and run " -"everything within a :doc:`virtualenv `." +"`Flower meets KOSMoS `_." msgstr "" -"Tout d'abord, il est recommandé de créer un environnement virtuel et de " -"tout exécuter au sein d'un `virtualenv `_." +"`Flower rencontre KOSMoS `_." -#: ../../source/tutorial-quickstart-pytorch.rst:19 +#: ../../source/ref-faq.rst:31 msgid "" -"*Clients* are responsible for generating individual weight-updates for " -"the model based on their local datasets. These updates are then sent to " -"the *server* which will aggregate them to produce a better model. " -"Finally, the *server* sends this improved version of the model back to " -"each *client*. 
A complete cycle of weight updates is called a *round*." +"`Flower meets Talan blog post `_ ." msgstr "" -"*Les clients* sont chargés de générer des mises à jour de poids " -"individuelles pour le modèle en fonction de leurs ensembles de données " -"locales. Ces mises à jour sont ensuite envoyées au *serveur* qui les " -"agrège pour produire un meilleur modèle. Enfin, le *serveur* renvoie " -"cette version améliorée du modèle à chaque *client*. Un cycle complet de " -"mises à jour de poids s'appelle un *round*." +"`Flower meets Talan blog post `_ ." -#: ../../source/tutorial-quickstart-pytorch.rst:23 +#: ../../source/ref-faq.rst:32 msgid "" -"Now that we have a rough idea of what is going on, let's get started. We " -"first need to install Flower. You can do this by running :" +"`Flower meets Talan GitHub Repository " +"`_ ." msgstr "" -"Maintenant que nous avons une idée générale de ce qui se passe, " -"commençons. Nous devons d'abord installer Flower. Tu peux le faire en " -"exécutant :" +"`Flower rencontre Talan Dépôt GitHub " +"`_ ." -#: ../../source/tutorial-quickstart-pytorch.rst:29 -msgid "" -"Since we want to use PyTorch to solve a computer vision task, let's go " -"ahead and install PyTorch and the **torchvision** library:" -msgstr "" -"Puisque nous voulons utiliser PyTorch pour résoudre une tâche de vision " -"par ordinateur, allons-y et installons PyTorch et la bibliothèque " -"**torchvision** :" +#: ../../source/ref-telemetry.md:1 +msgid "Telemetry" +msgstr "Télémétrie" -#: ../../source/tutorial-quickstart-pytorch.rst:39 +#: ../../source/ref-telemetry.md:3 msgid "" -"Now that we have all our dependencies installed, let's run a simple " -"distributed training with two clients and one server. Our training " -"procedure and network architecture are based on PyTorch's `Deep Learning " -"with PyTorch " -"`_." +"The Flower open-source project collects **anonymous** usage metrics to " +"make well-informed decisions to improve Flower. Doing this enables the " +"Flower team to understand how Flower is used and what challenges users " +"might face." msgstr "" -"Maintenant que nous avons installé toutes nos dépendances, lançons une " -"formation distribuée simple avec deux clients et un serveur. Notre " -"procédure de formation et l'architecture de notre réseau sont basées sur " -"`Deep Learning with PyTorch " -"`_ de" -" PyTorch." +"Le projet open-source Flower recueille des mesures d'utilisation " +"**anonymes** afin de prendre des décisions éclairées pour améliorer " +"Flower. Cela permet à l'équipe de Flower de comprendre comment Flower est" +" utilisé et quels sont les défis auxquels les utilisateurs peuvent être " +"confrontés." -#: ../../source/tutorial-quickstart-pytorch.rst:41 +#: ../../source/ref-telemetry.md:5 msgid "" -"In a file called :code:`client.py`, import Flower and PyTorch related " -"packages:" +"**Flower is a friendly framework for collaborative AI and data science.**" +" Staying true to this statement, Flower makes it easy to disable " +"telemetry for users that do not want to share anonymous usage metrics." msgstr "" -"Dans un fichier appelé :code:`client.py`, importe Flower et les paquets " -"liés à PyTorch :" +"**Flower est un cadre convivial pour l'IA collaborative et la science des" +" données.** En restant fidèle à cette déclaration, Flower permet de " +"désactiver facilement la télémétrie pour les utilisateurs qui ne " +"souhaitent pas partager des mesures d'utilisation anonymes." 
-#: ../../source/tutorial-quickstart-pytorch.rst:56 -msgid "In addition, we define the device allocation in PyTorch with:" -msgstr "En outre, nous définissons l'attribution des appareils dans PyTorch avec :" +#: ../../source/ref-telemetry.md:7 +msgid "Principles" +msgstr "Principes" -#: ../../source/tutorial-quickstart-pytorch.rst:62 -msgid "" -"We use PyTorch to load CIFAR10, a popular colored image classification " -"dataset for machine learning. The PyTorch :code:`DataLoader()` downloads " -"the training and test data that are then normalized." +#: ../../source/ref-telemetry.md:9 +msgid "We follow strong principles guarding anonymous usage metrics collection:" msgstr "" -"Nous utilisons PyTorch pour charger CIFAR10, un ensemble de données de " -"classification d'images colorées populaire pour l'apprentissage " -"automatique. Le :code:`DataLoader()` de PyTorch télécharge les données " -"d'entraînement et de test qui sont ensuite normalisées." +"Nous suivons des principes stricts concernant la collecte de données " +"anonymes sur l'utilisation :" -#: ../../source/tutorial-quickstart-pytorch.rst:78 +#: ../../source/ref-telemetry.md:11 msgid "" -"Define the loss and optimizer with PyTorch. The training of the dataset " -"is done by looping over the dataset, measure the corresponding loss and " -"optimize it." +"**Optional:** You will always be able to disable telemetry; read on to " +"learn “[How to opt-out](#how-to-opt-out)”." msgstr "" -"Définis la perte et l'optimiseur avec PyTorch L'entraînement de " -"l'ensemble de données se fait en bouclant sur l'ensemble de données, en " -"mesurant la perte correspondante et en l'optimisant." +"**Optionnel:** Tu pourras toujours désactiver la télémétrie ; lis la " +"suite pour apprendre \"[Comment se désengager](#how-to-opt-out)\"." -#: ../../source/tutorial-quickstart-pytorch.rst:94 +#: ../../source/ref-telemetry.md:12 msgid "" -"Define then the validation of the machine learning network. We loop over" -" the test set and measure the loss and accuracy of the test set." +"**Anonymous:** The reported usage metrics are anonymous and do not " +"contain any personally identifiable information (PII). See “[Collected " +"metrics](#collected-metrics)” to understand what metrics are being " +"reported." msgstr "" -"Définis ensuite la validation du réseau d'apprentissage automatique. Nous" -" passons en boucle sur l'ensemble de test et mesurons la perte et la " -"précision de l'ensemble de test." +"**Anonyme:** Les mesures d'utilisation rapportées sont anonymes et ne " +"contiennent aucune information personnelle identifiable (PII). Voir " +"\"[Collected metrics](#collected-metrics)\" pour comprendre quelles " +"mesures sont rapportées." -#: ../../source/tutorial-quickstart-pytorch.rst:113 +#: ../../source/ref-telemetry.md:13 msgid "" -"After defining the training and testing of a PyTorch machine learning " -"model, we use the functions for the Flower clients." +"**Transparent:** You can easily inspect what anonymous metrics are being " +"reported; see the section “[How to inspect what is being reported](#how-" +"to-inspect-what-is-being-reported)”" msgstr "" -"Après avoir défini l'entraînement et le test d'un modèle d'apprentissage " -"automatique PyTorch, nous utilisons les fonctions pour les clients " -"Flower." 
+"**Transparent:** Tu peux facilement inspecter les métriques anonymes qui " +"sont rapportées ; voir la section \"[Comment inspecter ce qui est " +"rapporté](#how-to-inspect-what-is-being-reported)\"" -#: ../../source/tutorial-quickstart-pytorch.rst:115 +#: ../../source/ref-telemetry.md:14 +#, fuzzy msgid "" -"The Flower clients will use a simple CNN adapted from 'PyTorch: A 60 " -"Minute Blitz':" +"**Open for feedback:** You can always reach out to us if you have " +"feedback; see the section “[How to contact us](#how-to-contact-us)” for " +"details." msgstr "" -"Les clients de Flower utiliseront un CNN simple adapté de \"PyTorch : A " -"60 Minute Blitz\" :" +"**Ouvert pour les commentaires:** Tu peux toujours nous contacter si tu " +"as des commentaires ; voir la section \"[Comment nous contacter ](#how-" +"to-contact-us)\" pour plus de détails." -#: ../../source/tutorial-quickstart-pytorch.rst:142 +#: ../../source/ref-telemetry.md:16 +msgid "How to opt-out" +msgstr "Comment se désinscrire" + +#: ../../source/ref-telemetry.md:18 msgid "" -"After loading the data set with :code:`load_data()` we define the Flower " -"interface." +"When Flower starts, it will check for an environment variable called " +"`FLWR_TELEMETRY_ENABLED`. Telemetry can easily be disabled by setting " +"`FLWR_TELEMETRY_ENABLED=0`. Assuming you are starting a Flower server or " +"client, simply do so by prepending your command as in:" msgstr "" -"Après avoir chargé l'ensemble des données avec :code:`load_data()`, nous " -"définissons l'interface Flower." +"Lorsque Flower démarre, il vérifie la présence d'une variable " +"d'environnement appelée `FLWR_TELEMETRY_ENABLED`. La télémétrie peut " +"facilement être désactivée en réglant `FLWR_TELEMETRY_ENABLED=0`. En " +"supposant que tu démarres un serveur ou un client Flower, fais-le " +"simplement en faisant précéder ta commande de la façon suivante :" -#: ../../source/tutorial-quickstart-pytorch.rst:150 +#: ../../source/ref-telemetry.md:24 msgid "" -"Flower provides a convenience class called :code:`NumPyClient` which " -"makes it easier to implement the :code:`Client` interface when your " -"workload uses PyTorch. Implementing :code:`NumPyClient` usually means " -"defining the following methods (:code:`set_parameters` is optional " -"though):" +"Alternatively, you can export `FLWR_TELEMETRY_ENABLED=0` in, for example," +" `.bashrc` (or whatever configuration file applies to your environment) " +"to disable Flower telemetry permanently." msgstr "" -"Flower fournit une classe de commodité appelée :code:`NumPyClient` qui " -"facilite la mise en œuvre de l'interface :code:`Client` lorsque ta charge" -" de travail utilise PyTorch. Mettre en œuvre :code:`NumPyClient` signifie" -" généralement définir les méthodes suivantes (:code:`set_parameters` est " -"cependant facultatif) :" +"Tu peux aussi exporter `FLWR_TELEMETRY_ENABLED=0` dans, par exemple, " +"`.bashrc` (ou tout autre fichier de configuration qui s'applique à ton " +"environnement) pour désactiver la télémétrie de la fleur de façon " +"permanente." 
-#: ../../source/tutorial-quickstart-pytorch.rst:166 -msgid "which can be implemented in the following way:" -msgstr "qui peut être mis en œuvre de la manière suivante :" +#: ../../source/ref-telemetry.md:26 +msgid "Collected metrics" +msgstr "Mesures collectées" -#: ../../source/tutorial-quickstart-pytorch.rst:189 -#: ../../source/tutorial-quickstart-tensorflow.rst:82 +#: ../../source/ref-telemetry.md:28 +msgid "Flower telemetry collects the following metrics:" +msgstr "La télémétrie des fleurs recueille les métriques suivantes :" + +#: ../../source/ref-telemetry.md:30 msgid "" -"We can now create an instance of our class :code:`CifarClient` and add " -"one line to actually run this client:" +"**Flower version.** Understand which versions of Flower are currently " +"being used. This helps us to decide whether we should invest effort into " +"releasing a patch version for an older version of Flower or instead use " +"the bandwidth to build new features." msgstr "" -"Nous pouvons maintenant créer une instance de notre classe " -":code:`CifarClient` et ajouter une ligne pour exécuter ce client :" +"**Cela nous aide à décider si nous devons investir des efforts dans la " +"publication d'une version corrective pour une version plus ancienne de " +"Flower ou si nous devons plutôt utiliser la bande passante pour " +"développer de nouvelles fonctionnalités." -#: ../../source/tutorial-quickstart-pytorch.rst:196 -#: ../../source/tutorial-quickstart-tensorflow.rst:90 -#, fuzzy +#: ../../source/ref-telemetry.md:32 msgid "" -"That's it for the client. We only have to implement :code:`Client` or " -":code:`NumPyClient` and call :code:`fl.client.start_client()`. If you " -"implement a client of type :code:`NumPyClient` you'll need to first call " -"its :code:`to_client()` method. The string :code:`\"[::]:8080\"` tells " -"the client which server to connect to. In our case we can run the server " -"and the client on the same machine, therefore we use " -":code:`\"[::]:8080\"`. If we run a truly federated workload with the " -"server and clients running on different machines, all that needs to " -"change is the :code:`server_address` we point the client at." +"**Operating system.** Enables us to answer questions such as: *Should we " +"create more guides for Linux, macOS, or Windows?*" msgstr "" -"C'est tout pour le client. Il nous suffit d'implémenter :code:`Client` ou" -" :code:`NumPyClient` et d'appeler :code:`fl.client.start_client()`. La " -"chaîne :code:`\"[: :]:8080\"` indique au client à quel serveur se " -"connecter. Dans notre cas, nous pouvons exécuter le serveur et le client " -"sur la même machine, c'est pourquoi nous utilisons :code:`\"[: " -":]:8080\"`. Si nous exécutons une charge de travail véritablement fédérée" -" avec le serveur et les clients fonctionnant sur des machines " -"différentes, tout ce qui doit changer est l'adresse " -":code:`server_address` vers laquelle nous dirigeons le client." +"**Système d'exploitation.** Nous permet de répondre à des questions " +"telles que : *Faudrait-il créer plus de guides pour Linux, macOS ou " +"Windows ?" -#: ../../source/tutorial-quickstart-pytorch.rst:271 -#, fuzzy +#: ../../source/ref-telemetry.md:34 msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. The full `source code " -"`_ for this example can be found in :code:`examples" -"/quickstart-pytorch`." 
+"**Python version.** Knowing the Python version helps us, for example, to " +"decide whether we should invest effort into supporting old versions of " +"Python or stop supporting them and start taking advantage of new Python " +"features." msgstr "" -"Félicitations ! Tu as réussi à construire et à faire fonctionner ton " -"premier système d'apprentissage fédéré. Le code source complet " -"`_ de cet exemple se trouve dans :code:`examples" -"/quickstart-pytorch`." +"**Version de Python.** Connaître la version de Python nous aide, par " +"exemple, à décider si nous devons investir des efforts dans la prise en " +"charge des anciennes versions de Python ou cesser de les prendre en " +"charge et commencer à tirer parti des nouvelles fonctionnalités de " +"Python." -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:-1 +#: ../../source/ref-telemetry.md:36 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with PyTorch Lightning to train an Auto Encoder model on MNIST." +"**Hardware properties.** Understanding the hardware environment that " +"Flower is being used in helps to decide whether we should, for example, " +"put more effort into supporting low-resource environments." msgstr "" +"**Comprendre l'environnement matériel dans lequel Flower est utilisé " +"permet de décider si nous devrions, par exemple, faire plus d'efforts " +"pour prendre en charge les environnements à faibles ressources." -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:5 -msgid "Quickstart PyTorch Lightning" -msgstr "Démarrage rapide de PyTorch Lightning" - -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:10 -#, fuzzy +#: ../../source/ref-telemetry.md:38 msgid "" -"Let's build a horizontal federated learning system using PyTorch " -"Lightning and Flower!" +"**Execution mode.** Knowing what execution mode Flower starts in enables " +"us to understand how heavily certain features are being used and better " +"prioritize based on that." msgstr "" -"Construisons un système d'apprentissage fédéré en utilisant PyTorch " -"Lightning et Flower !" +"**Mode d'exécution** Connaître le mode d'exécution dans lequel Flower " +"démarre nous permet de comprendre à quel point certaines fonctionnalités " +"sont utilisées et de mieux établir les priorités en fonction de cela." -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:12 -#, fuzzy +#: ../../source/ref-telemetry.md:40 msgid "" -"Please refer to the `full code example " -"`_ to learn more." +"**Cluster.** Flower telemetry assigns a random in-memory cluster ID each " +"time a Flower workload starts. This allows us to understand which device " +"types not only start Flower workloads but also successfully complete " +"them." msgstr "" -"Réfère-toi à l'exemple de code complet " -"`_ pour en savoir plus." +"**Cluster.** La télémétrie Flower attribue un ID de cluster en mémoire " +"aléatoire à chaque fois qu'une charge de travail Flower démarre. Cela " +"nous permet de comprendre quels types d'appareils non seulement démarrent" +" les charges de travail Flower, mais aussi les terminent avec succès." -#: ../../source/tutorial-quickstart-scikitlearn.rst:-1 +#: ../../source/ref-telemetry.md:42 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with scikit-learn to train a linear regression model." +"**Source.** Flower telemetry tries to store a random source ID in " +"`~/.flwr/source` the first time a telemetry event is generated. 
The " +"source ID is important to identify whether an issue is recurring or " +"whether an issue is triggered by multiple clusters running concurrently " +"(which often happens in simulation). For example, if a device runs " +"multiple workloads at the same time, and this results in an issue, then, " +"in order to reproduce the issue, multiple workloads must be started at " +"the same time." msgstr "" +"**Source.** La télémétrie de Flower essaie de stocker un ID de source " +"aléatoire dans `~/.flwr/source` la première fois qu'un événement de " +"télémétrie est généré. L'ID de source est important pour identifier si un" +" problème est récurrent ou si un problème est déclenché par plusieurs " +"clusters fonctionnant simultanément (ce qui arrive souvent en " +"simulation). Par exemple, si un périphérique exécute plusieurs charges de" +" travail en même temps, et que cela entraîne un problème, alors, afin de " +"reproduire le problème, plusieurs charges de travail doivent être " +"démarrées en même temps." -#: ../../source/tutorial-quickstart-scikitlearn.rst:5 -msgid "Quickstart scikit-learn" -msgstr "Démarrage rapide de scikit-learn" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:10 +#: ../../source/ref-telemetry.md:44 msgid "" -"In this tutorial, we will learn how to train a :code:`Logistic " -"Regression` model on MNIST using Flower and scikit-learn." +"You may delete the source ID at any time. If you wish for all events " +"logged under a specific source ID to be deleted, you can send a deletion " +"request mentioning the source ID to `telemetry@flower.ai`. All events " +"related to that source ID will then be permanently deleted." msgstr "" -"Dans ce tutoriel, nous allons apprendre à former un :code:`modèle de " -"régression logistique` sur MNIST en utilisant Flower et scikit-learn." - -#: ../../source/tutorial-quickstart-scikitlearn.rst:26 -#, fuzzy -msgid "Since we want to use scikit-learn, let's go ahead and install it:" -msgstr "Puisque nous voulons utiliser scikt-learn, allons-y et installons-le :" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:32 -msgid "Or simply install all dependencies using Poetry:" -msgstr "Ou installe simplement toutes les dépendances à l'aide de Poetry :" +"Tu peux supprimer l'identifiant de la source à tout moment. Si tu " +"souhaites que tous les événements enregistrés sous un identifiant de " +"source spécifique soient supprimés, tu peux envoyer une demande de " +"suppression mentionnant l'identifiant de source à `telemetry@flower.ai`. " +"Tous les événements liés à cet identifiant de source seront alors " +"définitivement supprimés." -#: ../../source/tutorial-quickstart-scikitlearn.rst:42 +#: ../../source/ref-telemetry.md:46 msgid "" -"Now that we have all our dependencies installed, let's run a simple " -"distributed training with two clients and one server. However, before " -"setting up the client and server, we will define all functionalities that" -" we need for our federated learning setup within :code:`utils.py`. The " -":code:`utils.py` contains different functions defining all the machine " -"learning basics:" -msgstr "" -"Maintenant que toutes nos dépendances sont installées, exécutons une " -"formation distribuée simple avec deux clients et un serveur. Cependant, " -"avant de configurer le client et le serveur, nous allons définir toutes " -"les fonctionnalités dont nous avons besoin pour notre configuration " -"d'apprentissage fédéré dans :code:`utils.py`. 
Le :code:`utils.py` " -"contient différentes fonctions définissant toutes les bases de " -"l'apprentissage automatique :" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:45 -msgid ":code:`get_model_parameters()`" -msgstr ":code:`get_model_parameters()`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:46 -msgid "Returns the parameters of a :code:`sklearn` LogisticRegression model" +"We will not collect any personally identifiable information. If you think" +" any of the metrics collected could be misused in any way, please [get in" +" touch with us](#how-to-contact-us). We will update this page to reflect " +"any changes to the metrics collected and publish changes in the " +"changelog." msgstr "" -"Renvoie les paramètres d'un modèle de régression logistique " -":code:`sklearn`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:47 -msgid ":code:`set_model_params()`" -msgstr ":code:`set_model_params()`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:48 -msgid "Sets the parameters of a :code:`sklean` LogisticRegression model" -msgstr "Définit les paramètres d'un modèle de régression logistique :code:`sklean`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:49 -msgid ":code:`set_initial_params()`" -msgstr ":code:`set_initial_params()`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:50 -msgid "Initializes the model parameters that the Flower server will ask for" -msgstr "Initialise les paramètres du modèle que le serveur de Flower demandera" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:51 -msgid ":code:`load_mnist()`" -msgstr ":code:`load_mnist()`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:52 -msgid "Loads the MNIST dataset using OpenML" -msgstr "Charge l'ensemble de données MNIST à l'aide d'OpenML" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:53 -msgid ":code:`shuffle()`" -msgstr ":code:`shuffle()`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:54 -msgid "Shuffles data and its label" -msgstr "Mélange les données et leur étiquette" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:56 -msgid ":code:`partition()`" -msgstr ":code:`partition()`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:56 -msgid "Splits datasets into a number of partitions" -msgstr "Divise les ensembles de données en un certain nombre de partitions" +"Nous ne collecterons aucune information personnelle identifiable. Si tu " +"penses que l'une des métriques collectées pourrait être utilisée à " +"mauvais escient de quelque manière que ce soit, merci de [nous " +"contacter](#commentnouscontacter). Nous mettrons à jour cette page pour " +"refléter toute modification des métriques collectées et nous publierons " +"les changements dans le journal des modifications (changelog)." -#: ../../source/tutorial-quickstart-scikitlearn.rst:58 +#: ../../source/ref-telemetry.md:48 msgid "" -"Please check out :code:`utils.py` `here " -"`_ for more details. The pre-defined functions are used in" -" the :code:`client.py` and imported. The :code:`client.py` also requires " -"to import several packages such as Flower and scikit-learn:" +"If you think other metrics would be helpful for us to better guide our " +"decisions, please let us know! We will carefully review them; if we are " +"confident that they do not compromise user privacy, we may add them." msgstr "" -"Tu peux consulter :code:`utils.py` `ici " -"`_ pour plus de détails. Les fonctions prédéfinies sont " -"utilisées dans :code:`client.py` et importées. 
:code:`client.py` " -"nécessite également d'importer plusieurs paquets tels que Flower et " -"scikit-learn :" +"Si tu penses que d'autres mesures nous seraient utiles pour mieux " +"orienter nos décisions, fais-le nous savoir ! Nous les examinerons " +"attentivement ; si nous sommes convaincus qu'elles ne compromettent pas " +"la vie privée des utilisateurs, nous pourrons les ajouter." -#: ../../source/tutorial-quickstart-scikitlearn.rst:73 -#, fuzzy +#: ../../source/ref-telemetry.md:50 +msgid "How to inspect what is being reported" +msgstr "Comment inspecter ce qui est rapporté" + +#: ../../source/ref-telemetry.md:52 msgid "" -"We load the MNIST dataset from `OpenML " -"`_, a popular " -"image classification dataset of handwritten digits for machine learning. " -"The utility :code:`utils.load_mnist()` downloads the training and test " -"data. The training set is split afterwards into 10 partitions with " -":code:`utils.partition()`." +"We wanted to make it very easy for you to inspect what anonymous usage " +"metrics are reported. You can view all the reported telemetry information" +" by setting the environment variable `FLWR_TELEMETRY_LOGGING=1`. Logging " +"is disabled by default. You may use logging independently from " +"`FLWR_TELEMETRY_ENABLED` so that you can inspect the telemetry feature " +"without sending any metrics." msgstr "" -"Nous chargeons l'ensemble de données MNIST de `OpenML " -"`_, un ensemble de données de " -"classification d'images populaires de chiffres manuscrits pour " -"l'apprentissage automatique. L'utilitaire :code:`utils.load_mnist()` " -"télécharge les données d'entraînement et de test. L'ensemble " -"d'entraînement est ensuite divisé en 10 partitions avec " -":code:`utils.partition()`." +"Nous avons voulu qu'il soit très facile pour toi d'inspecter les mesures " +"d'utilisation anonymes qui sont rapportées. Tu peux voir toutes les " +"informations de télémétrie rapportées en définissant la variable " +"d'environnement `FLWR_TELEMETRY_LOGGING=1`. La journalisation est " +"désactivée par défaut. Tu peux utiliser la journalisation indépendamment " +"de `FLWR_TELEMETRY_ENABLED` afin d'inspecter la fonction de télémétrie " +"sans envoyer de mesures." -#: ../../source/tutorial-quickstart-scikitlearn.rst:85 +#: ../../source/ref-telemetry.md:58 msgid "" -"Next, the logistic regression model is defined and initialized with " -":code:`utils.set_initial_params()`." +"The inspect Flower telemetry without sending any anonymous usage metrics," +" use both environment variables:" msgstr "" -"Ensuite, le modèle de régression logistique est défini et initialisé avec" -" :code:`utils.set_initial_params()`." +"L'inspecteur Flower telemetry sans envoyer de métriques d'utilisation " +"anonymes, utilise les deux variables d'environnement :" + +#: ../../source/ref-telemetry.md:64 +msgid "How to contact us" +msgstr "Comment nous contacter" -#: ../../source/tutorial-quickstart-scikitlearn.rst:97 +#: ../../source/ref-telemetry.md:66 msgid "" -"The Flower server interacts with clients through an interface called " -":code:`Client`. When the server selects a particular client for training," -" it sends training instructions over the network. The client receives " -"those instructions and calls one of the :code:`Client` methods to run " -"your code (i.e., to fit the logistic regression we defined earlier)." +"We want to hear from you. 
If you have any feedback or ideas on how to " +"improve the way we handle anonymous usage metrics, reach out to us via " +"[Slack](https://flower.ai/join-slack/) (channel `#telemetry`) or email " +"(`telemetry@flower.ai`)." msgstr "" -"Le serveur Flower interagit avec les clients par le biais d'une interface" -" appelée :code:`Client`. Lorsque le serveur sélectionne un client " -"particulier pour la formation, il envoie des instructions de formation " -"sur le réseau. Le client reçoit ces instructions et appelle l'une des " -"méthodes :code:`Client` pour exécuter ton code (c'est-à-dire pour ajuster" -" la régression logistique que nous avons définie plus tôt)." +"Si tu as des commentaires ou des idées pour améliorer la façon dont nous " +"traitons les mesures d'utilisation anonymes, contacte-nous via " +"[Slack](https://flower.ai/join-slack/) (canal `#telemetry`) ou par " +"courriel (`telemetry@flower.ai`)." -#: ../../source/tutorial-quickstart-scikitlearn.rst:103 +#: ../../source/tutorial-quickstart-android.rst:-1 msgid "" -"Flower provides a convenience class called :code:`NumPyClient` which " -"makes it easier to implement the :code:`Client` interface when your " -"workload uses scikit-learn. Implementing :code:`NumPyClient` usually " -"means defining the following methods (:code:`set_parameters` is optional " -"though):" +"Read this Federated Learning quickstart tutorial for creating an Android " +"app using Flower." msgstr "" -"Flower fournit une classe de commodité appelée :code:`NumPyClient` qui " -"facilite la mise en œuvre de l'interface :code:`Client` lorsque ta charge" -" de travail utilise scikit-learn. Mettre en œuvre :code:`NumPyClient` " -"signifie généralement définir les méthodes suivantes " -"(:code:`set_parameters` est cependant facultatif) :" -#: ../../source/tutorial-quickstart-scikitlearn.rst:112 -msgid "is directly imported with :code:`utils.set_model_params()`" -msgstr "est directement importé avec :code:`utils.set_model_params()`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:120 -msgid "The methods can be implemented in the following way:" -msgstr "Les méthodes peuvent être mises en œuvre de la manière suivante :" +#: ../../source/tutorial-quickstart-android.rst:5 +#, fuzzy +msgid "Quickstart Android" +msgstr "Démarrage rapide des Pandas" -#: ../../source/tutorial-quickstart-scikitlearn.rst:143 +#: ../../source/tutorial-quickstart-android.rst:10 +#, fuzzy msgid "" -"We can now create an instance of our class :code:`MnistClient` and add " -"one line to actually run this client:" +"Let's build a federated learning system using TFLite and Flower on " +"Android!" msgstr "" -"Nous pouvons maintenant créer une instance de notre classe " -":code:`MnistClient` et ajouter une ligne pour exécuter ce client :" +"Construisons un système d'apprentissage fédéré en utilisant fastai et " +"Flower !" -#: ../../source/tutorial-quickstart-scikitlearn.rst:150 +#: ../../source/tutorial-quickstart-android.rst:12 #, fuzzy msgid "" -"That's it for the client. We only have to implement :code:`Client` or " -":code:`NumPyClient` and call :code:`fl.client.start_client()`. If you " -"implement a client of type :code:`NumPyClient` you'll need to first call " -"its :code:`to_client()` method. The string :code:`\"0.0.0.0:8080\"` tells" -" the client which server to connect to. In our case we can run the server" -" and the client on the same machine, therefore we use " -":code:`\"0.0.0.0:8080\"`. 
If we run a truly federated workload with the " -"server and clients running on different machines, all that needs to " -"change is the :code:`server_address` we pass to the client." +"Please refer to the `full code example " +"`_ to learn " +"more." msgstr "" -"C'est tout pour le client. Il nous suffit d'implémenter :code:`Client` ou" -" :code:`NumPyClient` et d'appeler :code:`fl.client.start_client()`. La " -"chaîne :code:`\"0.0.0:8080\"` indique au client à quel serveur se " -"connecter. Dans notre cas, nous pouvons exécuter le serveur et le client " -"sur la même machine, c'est pourquoi nous utilisons " -":code:`\"0.0.0:8080\"`. Si nous exécutons une charge de travail " -"véritablement fédérée avec le serveur et les clients s'exécutant sur des " -"machines différentes, tout ce qui doit changer est :code:`server_address`" -" que nous transmettons au client." +"Réfère-toi à l'exemple de code complet " +"`_ " +"pour en savoir plus." -#: ../../source/tutorial-quickstart-scikitlearn.rst:159 +#: ../../source/tutorial-quickstart-fastai.rst:-1 msgid "" -"The following Flower server is a little bit more advanced and returns an " -"evaluation function for the server-side evaluation. First, we import " -"again all required libraries such as Flower and scikit-learn." +"Check out this Federated Learning quickstart tutorial for using Flower " +"with FastAI to train a vision model on CIFAR-10." msgstr "" -"Le serveur Flower suivant est un peu plus avancé et renvoie une fonction " -"d'évaluation pour l'évaluation côté serveur. Tout d'abord, nous importons" -" à nouveau toutes les bibliothèques requises telles que Flower et scikit-" -"learn." -#: ../../source/tutorial-quickstart-scikitlearn.rst:162 -msgid ":code:`server.py`, import Flower and start the server:" -msgstr ":code:`server.py`, importe Flower et démarre le serveur :" +#: ../../source/tutorial-quickstart-fastai.rst:5 +msgid "Quickstart fastai" +msgstr "Démarrage rapide fastai" -#: ../../source/tutorial-quickstart-scikitlearn.rst:173 -msgid "" -"The number of federated learning rounds is set in :code:`fit_round()` and" -" the evaluation is defined in :code:`get_evaluate_fn()`. The evaluation " -"function is called after each federated learning round and gives you " -"information about loss and accuracy." +#: ../../source/tutorial-quickstart-fastai.rst:10 +msgid "Let's build a federated learning system using fastai and Flower!" msgstr "" -"Le nombre de tours d'apprentissage fédéré est défini dans " -":code:`fit_round()` et l'évaluation est définie dans " -":code:`get_evaluate_fn()`. La fonction d'évaluation est appelée après " -"chaque tour d'apprentissage fédéré et te donne des informations sur la " -"perte et la précision." +"Construisons un système d'apprentissage fédéré en utilisant fastai et " +"Flower !" -#: ../../source/tutorial-quickstart-scikitlearn.rst:198 +#: ../../source/tutorial-quickstart-fastai.rst:12 +#, fuzzy msgid "" -"The :code:`main` contains the server-side parameter initialization " -":code:`utils.set_initial_params()` as well as the aggregation strategy " -":code:`fl.server.strategy:FedAvg()`. The strategy is the default one, " -"federated averaging (or FedAvg), with two clients and evaluation after " -"each federated learning round. The server can be started with the command" -" :code:`fl.server.start_server(server_address=\"0.0.0.0:8080\", " -"strategy=strategy, config=fl.server.ServerConfig(num_rounds=3))`." +"Please refer to the `full code example " +"`_ " +"to learn more." 
msgstr "" -"Le :code:`main` contient l'initialisation des paramètres côté serveur " -":code:`utils.set_initial_params()` ainsi que la stratégie d'agrégation " -":code:`fl.server.strategy:FedAvg()`. La stratégie est celle par défaut, " -"la moyenne fédérée (ou FedAvg), avec deux clients et une évaluation après" -" chaque tour d'apprentissage fédéré. Le serveur peut être démarré avec la" -" commande :code:`fl.server.start_server(server_address=\"0.0.0.0:8080\", " -"strategy=strategy, config=fl.server.ServerConfig(num_rounds=3))`." +"Réfère-toi à l'exemple de code complet " +"`_ " +"pour en savoir plus." -#: ../../source/tutorial-quickstart-scikitlearn.rst:217 +#: ../../source/tutorial-quickstart-huggingface.rst:-1 msgid "" -"With both client and server ready, we can now run everything and see " -"federated learning in action. Federated learning systems usually have a " -"server and multiple clients. We, therefore, have to start the server " -"first:" +"Check out this Federating Learning quickstart tutorial for using Flower " +"with HuggingFace Transformers in order to fine-tune an LLM." msgstr "" -"Le client et le serveur étant prêts, nous pouvons maintenant tout lancer " -"et voir l'apprentissage fédéré en action. Les systèmes d'apprentissage " -"fédéré ont généralement un serveur et plusieurs clients. Nous devons donc" -" commencer par lancer le serveur :" -#: ../../source/tutorial-quickstart-scikitlearn.rst:271 +#: ../../source/tutorial-quickstart-huggingface.rst:5 +msgid "Quickstart 🤗 Transformers" +msgstr "Démarrage rapide 🤗 Transformateurs" + +#: ../../source/tutorial-quickstart-huggingface.rst:10 msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. The full `source code " -"`_ for this example can be found in :code:`examples/sklearn-logreg-" -"mnist`." +"Let's build a federated learning system using Hugging Face Transformers " +"and Flower!" msgstr "" -"Félicitations ! Tu as réussi à construire et à faire fonctionner ton " -"premier système d'apprentissage fédéré. Le code source complet " -"`_ de cet exemple se trouve dans :code:`examples/sklearn-logreg-" -"mnist`." +"Construisons un système d'apprentissage fédéré à l'aide des " +"transformateurs Hugging Face et de Flower !" -#: ../../source/tutorial-quickstart-tensorflow.rst:-1 +#: ../../source/tutorial-quickstart-huggingface.rst:12 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with TensorFlow to train a MobilNetV2 model on CIFAR-10." +"We will leverage Hugging Face to federate the training of language models" +" over multiple clients using Flower. More specifically, we will fine-tune" +" a pre-trained Transformer model (distilBERT) for sequence classification" +" over a dataset of IMDB ratings. The end goal is to detect if a movie " +"rating is positive or negative." msgstr "" +"Nous nous appuierons sur Hugging Face pour fédérer l'entraînement de " +"modèles de langage sur plusieurs clients à l'aide de Flower. Plus " +"précisément, nous mettrons au point un modèle Transformer pré-entraîné " +"(distilBERT) pour la classification de séquences sur un ensemble de " +"données d'évaluations IMDB. L'objectif final est de détecter si " +"l'évaluation d'un film est positive ou négative." 
-#: ../../source/tutorial-quickstart-tensorflow.rst:5 -msgid "Quickstart TensorFlow" -msgstr "Démarrage rapide de TensorFlow" +#: ../../source/tutorial-quickstart-huggingface.rst:18 +msgid "Dependencies" +msgstr "Dépendances" -#: ../../source/tutorial-quickstart-tensorflow.rst:13 -msgid "Let's build a federated learning system in less than 20 lines of code!" +#: ../../source/tutorial-quickstart-huggingface.rst:20 +msgid "" +"To follow along this tutorial you will need to install the following " +"packages: :code:`datasets`, :code:`evaluate`, :code:`flwr`, " +":code:`torch`, and :code:`transformers`. This can be done using " +":code:`pip`:" msgstr "" -"Construisons un système d'apprentissage fédéré en moins de 20 lignes de " -"code !" +"Pour suivre ce tutoriel, tu devras installer les paquets suivants : " +":code:`datasets`, :code:`evaluate`, :code:`flwr`, :code:`torch`, et " +":code:`transformers`. Cela peut être fait en utilisant :code:`pip` :" -#: ../../source/tutorial-quickstart-tensorflow.rst:15 -msgid "Before Flower can be imported we have to install it:" -msgstr "Avant de pouvoir importer une fleur, nous devons l'installer :" +#: ../../source/tutorial-quickstart-huggingface.rst:30 +msgid "Standard Hugging Face workflow" +msgstr "Flux de travail standard pour le visage" -#: ../../source/tutorial-quickstart-tensorflow.rst:21 +#: ../../source/tutorial-quickstart-huggingface.rst:33 +msgid "Handling the data" +msgstr "Traitement des données" + +#: ../../source/tutorial-quickstart-huggingface.rst:35 msgid "" -"Since we want to use the Keras API of TensorFlow (TF), we have to install" -" TF as well:" +"To fetch the IMDB dataset, we will use Hugging Face's :code:`datasets` " +"library. We then need to tokenize the data and create :code:`PyTorch` " +"dataloaders, this is all done in the :code:`load_data` function:" msgstr "" -"Comme nous voulons utiliser l'API Keras de TensorFlow (TF), nous devons " -"également installer TF :" +"Pour récupérer le jeu de données IMDB, nous utiliserons la bibliothèque " +":code:`datasets` de Hugging Face. Nous devons ensuite tokeniser les " +"données et créer des :code:`PyTorch` dataloaders, ce qui est fait dans la" +" fonction :code:`load_data` :" -#: ../../source/tutorial-quickstart-tensorflow.rst:31 -msgid "Next, in a file called :code:`client.py`, import Flower and TensorFlow:" -msgstr "" -"Ensuite, dans un fichier appelé :code:`client.py`, importe Flower et " -"TensorFlow :" +#: ../../source/tutorial-quickstart-huggingface.rst:81 +msgid "Training and testing the model" +msgstr "Former et tester le modèle" -#: ../../source/tutorial-quickstart-tensorflow.rst:38 +#: ../../source/tutorial-quickstart-huggingface.rst:83 msgid "" -"We use the Keras utilities of TF to load CIFAR10, a popular colored image" -" classification dataset for machine learning. The call to " -":code:`tf.keras.datasets.cifar10.load_data()` downloads CIFAR10, caches " -"it locally, and then returns the entire training and test set as NumPy " -"ndarrays." +"Once we have a way of creating our trainloader and testloader, we can " +"take care of the training and testing. This is very similar to any " +":code:`PyTorch` training or testing loop:" msgstr "" -"Nous utilisons les utilitaires Keras de TF pour charger CIFAR10, un " -"ensemble de données de classification d'images colorées populaire pour " -"l'apprentissage automatique. 
L'appel à " -":code:`tf.keras.datasets.cifar10.load_data()` télécharge CIFAR10, le met " -"en cache localement, puis renvoie l'ensemble d'entraînement et de test " -"sous forme de NumPy ndarrays." +"Une fois que nous avons trouvé un moyen de créer notre trainloader et " +"notre testloader, nous pouvons nous occuper de l'entraînement et du test." +" C'est très similaire à n'importe quelle boucle d'entraînement ou de test" +" :code:`PyTorch` :" -#: ../../source/tutorial-quickstart-tensorflow.rst:47 +#: ../../source/tutorial-quickstart-huggingface.rst:121 +msgid "Creating the model itself" +msgstr "Créer le modèle lui-même" + +#: ../../source/tutorial-quickstart-huggingface.rst:123 msgid "" -"Next, we need a model. For the purpose of this tutorial, we use " -"MobilNetV2 with 10 output classes:" +"To create the model itself, we will just load the pre-trained distillBERT" +" model using Hugging Face’s :code:`AutoModelForSequenceClassification` :" msgstr "" -"Ensuite, nous avons besoin d'un modèle. Pour les besoins de ce tutoriel, " -"nous utilisons MobilNetV2 avec 10 classes de sortie :" +"Pour créer le modèle lui-même, nous allons simplement charger le modèle " +"distillBERT pré-entraîné en utilisant le " +":code:`AutoModelForSequenceClassification` de Hugging Face :" -#: ../../source/tutorial-quickstart-tensorflow.rst:60 +#: ../../source/tutorial-quickstart-huggingface.rst:136 +msgid "Federating the example" +msgstr "Fédérer l'exemple" + +#: ../../source/tutorial-quickstart-huggingface.rst:139 +msgid "Creating the IMDBClient" +msgstr "Création du client IMDBC" + +#: ../../source/tutorial-quickstart-huggingface.rst:141 msgid "" -"Flower provides a convenience class called :code:`NumPyClient` which " -"makes it easier to implement the :code:`Client` interface when your " -"workload uses Keras. The :code:`NumPyClient` interface defines three " -"methods which can be implemented in the following way:" +"To federate our example to multiple clients, we first need to write our " +"Flower client class (inheriting from :code:`flwr.client.NumPyClient`). " +"This is very easy, as our model is a standard :code:`PyTorch` model:" msgstr "" -"Flower fournit une classe de commodité appelée :code:`NumPyClient` qui " -"facilite la mise en œuvre de l'interface :code:`Client` lorsque ta charge" -" de travail utilise Keras. L'interface :code:`NumPyClient` définit trois " -"méthodes qui peuvent être mises en œuvre de la manière suivante :" - -#: ../../source/tutorial-quickstart-tensorflow.rst:135 -msgid "Each client will have its own dataset." -msgstr "Chaque client aura son propre ensemble de données." +"Pour fédérer notre exemple à plusieurs clients, nous devons d'abord " +"écrire notre classe de client Flower (héritant de " +":code:`flwr.client.NumPyClient`). C'est très facile, car notre modèle est" +" un modèle :code:`PyTorch` standard :" -#: ../../source/tutorial-quickstart-tensorflow.rst:137 +#: ../../source/tutorial-quickstart-huggingface.rst:169 msgid "" -"You should now see how the training does in the very first terminal (the " -"one that started the server):" +"The :code:`get_parameters` function lets the server get the client's " +"parameters. Inversely, the :code:`set_parameters` function allows the " +"server to send its parameters to the client. Finally, the :code:`fit` " +"function trains the model locally for the client, and the " +":code:`evaluate` function tests the model locally and returns the " +"relevant metrics." 
msgstr "" -"Tu devrais maintenant voir comment la formation se déroule dans le tout " -"premier terminal (celui qui a démarré le serveur) :" +"La fonction :code:`get_parameters` permet au serveur d'obtenir les " +"paramètres du client. Inversement, la fonction :code:`set_parameters` " +"permet au serveur d'envoyer ses paramètres au client. Enfin, la fonction " +":code:`fit` forme le modèle localement pour le client, et la fonction " +":code:`evaluate` teste le modèle localement et renvoie les mesures " +"correspondantes." -#: ../../source/tutorial-quickstart-tensorflow.rst:169 -#, fuzzy +#: ../../source/tutorial-quickstart-huggingface.rst:175 +msgid "Starting the server" +msgstr "Démarrer le serveur" + +#: ../../source/tutorial-quickstart-huggingface.rst:177 msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. The full `source code " -"`_ for this can be found in :code:`examples" -"/quickstart-tensorflow/client.py`." +"Now that we have a way to instantiate clients, we need to create our " +"server in order to aggregate the results. Using Flower, this can be done " +"very easily by first choosing a strategy (here, we are using " +":code:`FedAvg`, which will define the global weights as the average of " +"all the clients' weights at each round) and then using the " +":code:`flwr.server.start_server` function:" msgstr "" -"Félicitations ! Tu as réussi à construire et à faire fonctionner ton " -"premier système d'apprentissage fédéré. Le `code source complet " -"`_ pour cela se trouve dans :code:`examples" -"/quickstart-tensorflow/client.py`." +"Maintenant que nous avons un moyen d'instancier les clients, nous devons " +"créer notre serveur afin d'agréger les résultats. Avec Flower, cela peut " +"être fait très facilement en choisissant d'abord une stratégie (ici, nous" +" utilisons :code:`FedAvg`, qui définira les poids globaux comme la " +"moyenne des poids de tous les clients à chaque tour) et en utilisant " +"ensuite la fonction :code:`flwr.server.start_server` :" -#: ../../source/tutorial-quickstart-xgboost.rst:-1 +#: ../../source/tutorial-quickstart-huggingface.rst:205 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with XGBoost to train classification models on trees." +"The :code:`weighted_average` function is there to provide a way to " +"aggregate the metrics distributed amongst the clients (basically this " +"allows us to display a nice average accuracy and loss for every round)." msgstr "" +"La fonction :code:`weighted_average` est là pour fournir un moyen " +"d'agréger les mesures réparties entre les clients (en gros, cela nous " +"permet d'afficher une belle moyenne de précision et de perte pour chaque " +"tour)." 
-#: ../../source/tutorial-quickstart-xgboost.rst:5 -msgid "Quickstart XGBoost" -msgstr "Démarrage rapide XGBoost" +#: ../../source/tutorial-quickstart-huggingface.rst:209 +msgid "Putting everything together" +msgstr "Tout assembler" -#: ../../source/tutorial-quickstart-xgboost.rst:14 -#, fuzzy -msgid "Federated XGBoost" -msgstr "Formation fédérée" +#: ../../source/tutorial-quickstart-huggingface.rst:211 +msgid "We can now start client instances using:" +msgstr "Nous pouvons maintenant démarrer des instances de clients en utilisant :" -#: ../../source/tutorial-quickstart-xgboost.rst:16 +#: ../../source/tutorial-quickstart-huggingface.rst:221 msgid "" -"EXtreme Gradient Boosting (**XGBoost**) is a robust and efficient " -"implementation of gradient-boosted decision tree (**GBDT**), that " -"maximises the computational boundaries for boosted tree methods. It's " -"primarily designed to enhance both the performance and computational " -"speed of machine learning models. In XGBoost, trees are constructed " -"concurrently, unlike the sequential approach taken by GBDT." -msgstr "" +"And they will be able to connect to the server and start the federated " +"training." +msgstr "Et ils pourront se connecter au serveur et démarrer la formation fédérée." -#: ../../source/tutorial-quickstart-xgboost.rst:20 +#: ../../source/tutorial-quickstart-huggingface.rst:223 +#, fuzzy msgid "" -"Often, for tabular data on medium-sized datasets with fewer than 10k " -"training examples, XGBoost surpasses the results of deep learning " -"techniques." +"If you want to check out everything put together, you should check out " +"the `full code example `_ ." msgstr "" +"Si tu veux voir tout ce qui est mis ensemble, tu devrais consulter " +"l'exemple de code complet : " +"[https://github.com/adap/flower/tree/main/examples/quickstart-" +"huggingface](https://github.com/adap/flower/tree/main/examples" +"/quickstart-huggingface)." -#: ../../source/tutorial-quickstart-xgboost.rst:23 -#, fuzzy -msgid "Why federated XGBoost?" -msgstr "Qu'est-ce que l'apprentissage fédéré ?" - -#: ../../source/tutorial-quickstart-xgboost.rst:25 +#: ../../source/tutorial-quickstart-huggingface.rst:226 msgid "" -"Indeed, as the demand for data privacy and decentralized learning grows, " -"there's an increasing requirement to implement federated XGBoost systems " -"for specialised applications, like survival analysis and financial fraud " -"detection." +"Of course, this is a very basic example, and a lot can be added or " +"modified, it was just to showcase how simply we could federate a Hugging " +"Face workflow using Flower." msgstr "" +"Bien sûr, c'est un exemple très basique, et beaucoup de choses peuvent " +"être ajoutées ou modifiées, il s'agissait juste de montrer avec quelle " +"simplicité on pouvait fédérer un flux de travail Hugging Face à l'aide de" +" Flower." -#: ../../source/tutorial-quickstart-xgboost.rst:27 +#: ../../source/tutorial-quickstart-huggingface.rst:229 msgid "" -"Federated learning ensures that raw data remains on the local device, " -"making it an attractive approach for sensitive domains where data " -"security and privacy are paramount. Given the robustness and efficiency " -"of XGBoost, combining it with federated learning offers a promising " -"solution for these specific challenges." +"Note that in this example we used :code:`PyTorch`, but we could have very" +" well used :code:`TensorFlow`." 
msgstr "" +"Notez que dans cet exemple, nous avons utilisé :code:`PyTorch`, mais nous" +" aurions très bien pu utiliser :code:`TensorFlow`." -#: ../../source/tutorial-quickstart-xgboost.rst:30 +#: ../../source/tutorial-quickstart-ios.rst:-1 msgid "" -"In this tutorial we will learn how to train a federated XGBoost model on " -"HIGGS dataset using Flower and :code:`xgboost` package. We use a simple " -"example (`full code xgboost-quickstart " -"`_)" -" with two *clients* and one *server* to demonstrate how federated XGBoost" -" works, and then we dive into a more complex example (`full code xgboost-" -"comprehensive `_) to run various experiments." +"Read this Federated Learning quickstart tutorial for creating an iOS app " +"using Flower to train a neural network on MNIST." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:37 -msgid "Environment Setup" -msgstr "" +#: ../../source/tutorial-quickstart-ios.rst:5 +#, fuzzy +msgid "Quickstart iOS" +msgstr "Démarrage rapide XGBoost" -#: ../../source/tutorial-quickstart-xgboost.rst:41 +#: ../../source/tutorial-quickstart-ios.rst:10 +#, fuzzy msgid "" -"We first need to install Flower and Flower Datasets. You can do this by " -"running :" +"In this tutorial we will learn how to train a Neural Network on MNIST " +"using Flower and CoreML on iOS devices." msgstr "" +"Dans ce tutoriel, nous allons apprendre, comment former un réseau " +"neuronal convolutif sur MNIST en utilisant Flower et PyTorch." -#: ../../source/tutorial-quickstart-xgboost.rst:47 +#: ../../source/tutorial-quickstart-ios.rst:12 #, fuzzy msgid "" -"Since we want to use :code:`xgboost` package to build up XGBoost trees, " -"let's go ahead and install :code:`xgboost`:" -msgstr "Puisque nous voulons utiliser scikt-learn, allons-y et installons-le :" - -#: ../../source/tutorial-quickstart-xgboost.rst:57 -msgid "" -"*Clients* are responsible for generating individual weight-updates for " -"the model based on their local datasets. Now that we have all our " -"dependencies installed, let's run a simple distributed training with two " -"clients and one server." +"First of all, for running the Flower Python server, it is recommended to " +"create a virtual environment and run everything within a :doc:`virtualenv" +" `. For the Flower client " +"implementation in iOS, it is recommended to use Xcode as our IDE." msgstr "" +"Tout d'abord, il est recommandé de créer un environnement virtuel et de " +"tout exécuter au sein d'un `virtualenv `_." -#: ../../source/tutorial-quickstart-xgboost.rst:60 +#: ../../source/tutorial-quickstart-ios.rst:15 #, fuzzy msgid "" -"In a file called :code:`client.py`, import xgboost, Flower, Flower " -"Datasets and other related functions:" +"Our example consists of one Python *server* and two iPhone *clients* that" +" all have the same model." msgstr "" -"Dans un fichier appelé :code:`client.py`, importe Flower et les paquets " -"liés à PyTorch :" +"Notre exemple consiste en un *serveur* et deux *clients* ayant tous le " +"même modèle." -#: ../../source/tutorial-quickstart-xgboost.rst:87 -msgid "Dataset partition and hyper-parameter selection" +#: ../../source/tutorial-quickstart-ios.rst:17 +#, fuzzy +msgid "" +"*Clients* are responsible for generating individual weight updates for " +"the model based on their local datasets. These updates are then sent to " +"the *server* which will aggregate them to produce a better model. " +"Finally, the *server* sends this improved version of the model back to " +"each *client*. 
A complete cycle of weight updates is called a *round*." msgstr "" +"*Les clients* sont chargés de générer des mises à jour de poids " +"individuelles pour le modèle en fonction de leurs ensembles de données " +"locales. Ces mises à jour sont ensuite envoyées au *serveur* qui les " +"agrège pour produire un meilleur modèle. Enfin, le *serveur* renvoie " +"cette version améliorée du modèle à chaque *client*. Un cycle complet de " +"mises à jour de poids s'appelle un *round*." -#: ../../source/tutorial-quickstart-xgboost.rst:89 +#: ../../source/tutorial-quickstart-ios.rst:21 +#, fuzzy msgid "" -"Prior to local training, we require loading the HIGGS dataset from Flower" -" Datasets and conduct data partitioning for FL:" +"Now that we have a rough idea of what is going on, let's get started to " +"setup our Flower server environment. We first need to install Flower. You" +" can do this by using pip:" msgstr "" +"Maintenant que nous avons une idée générale de ce qui se passe, " +"commençons. Nous devons d'abord installer Flower. Tu peux le faire en " +"exécutant :" -#: ../../source/tutorial-quickstart-xgboost.rst:102 -msgid "" -"In this example, we split the dataset into two partitions with uniform " -"distribution (:code:`IidPartitioner(num_partitions=2)`). Then, we load " -"the partition for the given client based on :code:`node_id`:" +#: ../../source/tutorial-quickstart-ios.rst:27 +msgid "Or Poetry:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:121 +#: ../../source/tutorial-quickstart-ios.rst:34 +#: ../../source/tutorial-quickstart-pytorch.rst:37 +#: ../../source/tutorial-quickstart-scikitlearn.rst:40 +#: ../../source/tutorial-quickstart-tensorflow.rst:29 +#: ../../source/tutorial-quickstart-xgboost.rst:55 +msgid "Flower Client" +msgstr "Client de la fleur" + +#: ../../source/tutorial-quickstart-ios.rst:36 msgid "" -"After that, we do train/test splitting on the given partition (client's " -"local data), and transform data format for :code:`xgboost` package." +"Now that we have all our dependencies installed, let's run a simple " +"distributed training using CoreML as our local training pipeline and " +"MNIST as our dataset. For simplicity reasons we will use the complete " +"Flower client with CoreML, that has been implemented and stored inside " +"the Swift SDK. The client implementation can be seen below:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:134 +#: ../../source/tutorial-quickstart-ios.rst:72 msgid "" -"The functions of :code:`train_test_split` and " -":code:`transform_dataset_to_dmatrix` are defined as below:" +"Let's create a new application project in Xcode and add :code:`flwr` as a" +" dependency in your project. For our application, we will store the logic" +" of our app in :code:`FLiOSModel.swift` and the UI elements in " +":code:`ContentView.swift`. We will focus more on :code:`FLiOSModel.swift`" +" in this quickstart. Please refer to the `full code example " +"`_ to learn more " +"about the app." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:158 -msgid "Finally, we define the hyper-parameters used for XGBoost training." +#: ../../source/tutorial-quickstart-ios.rst:75 +msgid "Import Flower and CoreML related packages in :code:`FLiOSModel.swift`:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:174 +#: ../../source/tutorial-quickstart-ios.rst:83 msgid "" -"The :code:`num_local_round` represents the number of iterations for local" -" tree boost. We use CPU for the training in default. 
One can shift it to " -"GPU by setting :code:`tree_method` to :code:`gpu_hist`. We use AUC as " -"evaluation metric." -msgstr "" - -#: ../../source/tutorial-quickstart-xgboost.rst:181 -msgid "Flower client definition for XGBoost" +"Then add the mlmodel to the project simply by drag-and-drop, the mlmodel " +"will be bundled inside the application during deployment to your iOS " +"device. We need to pass the url to access mlmodel and run CoreML machine " +"learning processes, it can be retrieved by calling the function " +":code:`Bundle.main.url`. For the MNIST dataset, we need to preprocess it " +"into :code:`MLBatchProvider` object. The preprocessing is done inside " +":code:`DataLoader.swift`." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:183 +#: ../../source/tutorial-quickstart-ios.rst:99 msgid "" -"After loading the dataset we define the Flower client. We follow the " -"general rule to define :code:`XgbClient` class inherited from " -":code:`fl.client.Client`." +"Since CoreML does not allow the model parameters to be seen before " +"training, and accessing the model parameters during or after the training" +" can only be done by specifying the layer name, we need to know this " +"information beforehand, through looking at the model specification, which" +" are written as proto files. The implementation can be seen in " +":code:`MLModelInspect`." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:193 +#: ../../source/tutorial-quickstart-ios.rst:102 msgid "" -"The :code:`self.bst` is used to keep the Booster objects that remain " -"consistent across rounds, allowing them to store predictions from trees " -"integrated in earlier rounds and maintain other essential data structures" -" for training." +"After we have all of the necessary information, let's create our Flower " +"client." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:196 +#: ../../source/tutorial-quickstart-ios.rst:117 msgid "" -"Then, we override :code:`get_parameters`, :code:`fit` and " -":code:`evaluate` methods insides :code:`XgbClient` class as follows." +"Then start the Flower gRPC client and start communicating to the server " +"by passing our Flower client to the function :code:`startFlwrGRPC`." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:210 +#: ../../source/tutorial-quickstart-ios.rst:124 msgid "" -"Unlike neural network training, XGBoost trees are not started from a " -"specified random weights. In this case, we do not use " -":code:`get_parameters` and :code:`set_parameters` to initialise model " -"parameters for XGBoost. As a result, let's return an empty tensor in " -":code:`get_parameters` when it is called by the server at the first " -"round." +"That's it for the client. We only have to implement :code:`Client` or " +"call the provided :code:`MLFlwrClient` and call :code:`startFlwrGRPC()`. " +"The attribute :code:`hostname` and :code:`port` tells the client which " +"server to connect to. This can be done by entering the hostname and port " +"in the application before clicking the start button to start the " +"federated learning process." 
msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:251 +#: ../../source/tutorial-quickstart-ios.rst:129 +#: ../../source/tutorial-quickstart-pytorch.rst:203 +#: ../../source/tutorial-quickstart-scikitlearn.rst:167 +#: ../../source/tutorial-quickstart-tensorflow.rst:98 +#: ../../source/tutorial-quickstart-xgboost.rst:309 +msgid "Flower Server" +msgstr "Serveur de Flower" + +#: ../../source/tutorial-quickstart-ios.rst:131 +#: ../../source/tutorial-quickstart-pytorch.rst:205 +#: ../../source/tutorial-quickstart-tensorflow.rst:100 msgid "" -"In :code:`fit`, at the first round, we call :code:`xgb.train()` to build " -"up the first set of trees. the returned Booster object and config are " -"stored in :code:`self.bst` and :code:`self.config`, respectively. From " -"the second round, we load the global model sent from server to " -":code:`self.bst`, and then update model weights on local training data " -"with function :code:`local_boost` as follows:" +"For simple workloads we can start a Flower server and leave all the " +"configuration possibilities at their default values. In a file named " +":code:`server.py`, import Flower and start the server:" msgstr "" +"Pour les charges de travail simples, nous pouvons démarrer un serveur " +"Flower et laisser toutes les possibilités de configuration à leurs " +"valeurs par défaut. Dans un fichier nommé :code:`server.py`, importe " +"Flower et démarre le serveur :" -#: ../../source/tutorial-quickstart-xgboost.rst:269 +#: ../../source/tutorial-quickstart-ios.rst:142 +#: ../../source/tutorial-quickstart-pytorch.rst:216 +#: ../../source/tutorial-quickstart-scikitlearn.rst:230 +#: ../../source/tutorial-quickstart-tensorflow.rst:112 +msgid "Train the model, federated!" +msgstr "Entraîne le modèle, fédéré !" + +#: ../../source/tutorial-quickstart-ios.rst:144 +#: ../../source/tutorial-quickstart-pytorch.rst:218 +#: ../../source/tutorial-quickstart-tensorflow.rst:114 +#: ../../source/tutorial-quickstart-xgboost.rst:525 msgid "" -"Given :code:`num_local_round`, we update trees by calling " -":code:`self.bst.update` method. After training, the last " -":code:`N=num_local_round` trees will be extracted to send to the server." +"With both client and server ready, we can now run everything and see " +"federated learning in action. FL systems usually have a server and " +"multiple clients. We therefore have to start the server first:" msgstr "" +"Le client et le serveur étant prêts, nous pouvons maintenant tout " +"exécuter et voir l'apprentissage fédéré en action. Les systèmes FL ont " +"généralement un serveur et plusieurs clients. Nous devons donc commencer " +"par démarrer le serveur :" -#: ../../source/tutorial-quickstart-xgboost.rst:291 +#: ../../source/tutorial-quickstart-ios.rst:152 msgid "" -"In :code:`evaluate`, we call :code:`self.bst.eval_set` function to " -"conduct evaluation on valid set. The AUC value will be returned." +"Once the server is running we can start the clients in different " +"terminals. Build and run the client through your Xcode, one through Xcode" +" Simulator and the other by deploying it to your iPhone. To see more " +"about how to deploy your app to iPhone or Simulator visit `here " +"`_." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:294 +#: ../../source/tutorial-quickstart-ios.rst:156 #, fuzzy msgid "" -"Now, we can create an instance of our class :code:`XgbClient` and add one" -" line to actually run this client:" +"Congratulations! 
You've successfully built and run your first federated " +"learning system in your ios device. The full `source code " +"`_ for this " +"example can be found in :code:`examples/ios`." msgstr "" -"Nous pouvons maintenant créer une instance de notre classe " -":code:`MnistClient` et ajouter une ligne pour exécuter ce client :" +"Félicitations ! Tu as réussi à construire et à faire fonctionner ton " +"premier système d'apprentissage fédéré. Le code source complet " +"`_ de cet exemple se trouve dans :code:`examples" +"/quickstart-mxnet`." -#: ../../source/tutorial-quickstart-xgboost.rst:300 -#, fuzzy +#: ../../source/tutorial-quickstart-jax.rst:-1 msgid "" -"That's it for the client. We only have to implement :code:`Client`and " -"call :code:`fl.client.start_client()`. The string :code:`\"[::]:8080\"` " -"tells the client which server to connect to. In our case we can run the " -"server and the client on the same machine, therefore we use " -":code:`\"[::]:8080\"`. If we run a truly federated workload with the " -"server and clients running on different machines, all that needs to " -"change is the :code:`server_address` we point the client at." +"Check out this Federated Learning quickstart tutorial for using Flower " +"with Jax to train a linear regression model on a scikit-learn dataset." msgstr "" -"C'est tout pour le client. Il nous suffit d'implémenter :code:`Client` ou" -" :code:`NumPyClient` et d'appeler :code:`fl.client.start_client()`. La " -"chaîne :code:`\"[: :]:8080\"` indique au client à quel serveur se " -"connecter. Dans notre cas, nous pouvons exécuter le serveur et le client " -"sur la même machine, c'est pourquoi nous utilisons :code:`\"[: " -":]:8080\"`. Si nous exécutons une charge de travail véritablement fédérée" -" avec le serveur et les clients fonctionnant sur des machines " -"différentes, tout ce qui doit changer est l'adresse " -":code:`server_address` vers laquelle nous dirigeons le client." -#: ../../source/tutorial-quickstart-xgboost.rst:311 -#, fuzzy -msgid "" -"These updates are then sent to the *server* which will aggregate them to " -"produce a better model. Finally, the *server* sends this improved version" -" of the model back to each *client* to finish a complete FL round." -msgstr "" -"*Les clients* sont chargés de générer des mises à jour de poids " -"individuelles pour le modèle en fonction de leurs ensembles de données " -"locales. Ces mises à jour sont ensuite envoyées au *serveur* qui les " -"agrège pour produire un meilleur modèle. Enfin, le *serveur* renvoie " -"cette version améliorée du modèle à chaque *client*. Un cycle complet de " -"mises à jour de poids s'appelle un *round*." +#: ../../source/tutorial-quickstart-jax.rst:5 +msgid "Quickstart JAX" +msgstr "Démarrage rapide de JAX" -#: ../../source/tutorial-quickstart-xgboost.rst:314 -#, fuzzy +#: ../../source/tutorial-quickstart-pandas.rst:-1 msgid "" -"In a file named :code:`server.py`, import Flower and FedXgbBagging from " -":code:`flwr.server.strategy`." +"Check out this Federated Learning quickstart tutorial for using Flower " +"with Pandas to perform Federated Analytics." msgstr "" -"Dans un fichier appelé :code:`client.py`, importe Flower et les paquets " -"liés au MXNet :" -#: ../../source/tutorial-quickstart-xgboost.rst:316 -msgid "We first define a strategy for XGBoost bagging aggregation." 
-msgstr "" +#: ../../source/tutorial-quickstart-pandas.rst:5 +msgid "Quickstart Pandas" +msgstr "Démarrage rapide des Pandas" -#: ../../source/tutorial-quickstart-xgboost.rst:339 -msgid "" -"We use two clients for this example. An " -":code:`evaluate_metrics_aggregation` function is defined to collect and " -"wighted average the AUC values from clients." -msgstr "" +#: ../../source/tutorial-quickstart-pandas.rst:10 +msgid "Let's build a federated analytics system using Pandas and Flower!" +msgstr "Construisons un système d'analyse fédéré à l'aide de Pandas et de Flower !" -#: ../../source/tutorial-quickstart-xgboost.rst:342 +#: ../../source/tutorial-quickstart-pandas.rst:12 #, fuzzy -msgid "Then, we start the server:" -msgstr "Démarrer le serveur" - -#: ../../source/tutorial-quickstart-xgboost.rst:354 -msgid "Tree-based bagging aggregation" -msgstr "" - -#: ../../source/tutorial-quickstart-xgboost.rst:356 msgid "" -"You must be curious about how bagging aggregation works. Let's look into " -"the details." +"Please refer to the `full code example " +"`_ " +"to learn more." msgstr "" +"Réfère-toi à l'exemple de code complet " +"`_ " +"pour en savoir plus." -#: ../../source/tutorial-quickstart-xgboost.rst:358 +#: ../../source/tutorial-quickstart-pytorch.rst:-1 msgid "" -"In file :code:`flwr.server.strategy.fedxgb_bagging.py`, we define " -":code:`FedXgbBagging` inherited from :code:`flwr.server.strategy.FedAvg`." -" Then, we override the :code:`aggregate_fit`, :code:`aggregate_evaluate` " -"and :code:`evaluate` methods as follows:" +"Check out this Federated Learning quickstart tutorial for using Flower " +"with PyTorch to train a CNN model on MNIST." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:454 +#: ../../source/tutorial-quickstart-pytorch.rst:13 msgid "" -"In :code:`aggregate_fit`, we sequentially aggregate the clients' XGBoost " -"trees by calling :code:`aggregate()` function:" +"In this tutorial we will learn how to train a Convolutional Neural " +"Network on CIFAR10 using Flower and PyTorch." msgstr "" +"Dans ce tutoriel, nous allons apprendre à entraîner un réseau neuronal " +"convolutif sur CIFAR10 à l'aide de Flower et PyTorch." -#: ../../source/tutorial-quickstart-xgboost.rst:513 +#: ../../source/tutorial-quickstart-pytorch.rst:15 +#: ../../source/tutorial-quickstart-xgboost.rst:39 +#, fuzzy msgid "" -"In this function, we first fetch the number of trees and the number of " -"parallel trees for the current and previous model by calling " -":code:`_get_tree_nums`. Then, the fetched information will be aggregated." -" After that, the trees (containing model weights) are aggregated to " -"generate a new tree model." +"First of all, it is recommended to create a virtual environment and run " +"everything within a :doc:`virtualenv `." msgstr "" +"Tout d'abord, il est recommandé de créer un environnement virtuel et de " +"tout exécuter au sein d'un `virtualenv `_." -#: ../../source/tutorial-quickstart-xgboost.rst:518 +#: ../../source/tutorial-quickstart-pytorch.rst:17 +#: ../../source/tutorial-quickstart-scikitlearn.rst:14 msgid "" -"After traversal of all clients' models, a new global model is generated, " -"followed by the serialisation, and sending back to each client." +"Our example consists of one *server* and two *clients* all having the " +"same model." msgstr "" +"Notre exemple consiste en un *serveur* et deux *clients* ayant tous le " +"même modèle." -#: ../../source/tutorial-quickstart-xgboost.rst:523 -msgid "Launch Federated XGBoost!" 
+#: ../../source/tutorial-quickstart-pytorch.rst:19 +msgid "" +"*Clients* are responsible for generating individual weight-updates for " +"the model based on their local datasets. These updates are then sent to " +"the *server* which will aggregate them to produce a better model. " +"Finally, the *server* sends this improved version of the model back to " +"each *client*. A complete cycle of weight updates is called a *round*." msgstr "" +"*Les clients* sont chargés de générer des mises à jour de poids " +"individuelles pour le modèle en fonction de leurs ensembles de données " +"locales. Ces mises à jour sont ensuite envoyées au *serveur* qui les " +"agrège pour produire un meilleur modèle. Enfin, le *serveur* renvoie " +"cette version améliorée du modèle à chaque *client*. Un cycle complet de " +"mises à jour de poids s'appelle un *round*." -#: ../../source/tutorial-quickstart-xgboost.rst:585 +#: ../../source/tutorial-quickstart-pytorch.rst:23 msgid "" -"Congratulations! You've successfully built and run your first federated " -"XGBoost system. The AUC values can be checked in " -":code:`metrics_distributed`. One can see that the average AUC increases " -"over FL rounds." +"Now that we have a rough idea of what is going on, let's get started. We " +"first need to install Flower. You can do this by running :" msgstr "" +"Maintenant que nous avons une idée générale de ce qui se passe, " +"commençons. Nous devons d'abord installer Flower. Tu peux le faire en " +"exécutant :" -#: ../../source/tutorial-quickstart-xgboost.rst:590 -#, fuzzy +#: ../../source/tutorial-quickstart-pytorch.rst:29 msgid "" -"The full `source code `_ for this example can be found in :code:`examples" -"/xgboost-quickstart`." +"Since we want to use PyTorch to solve a computer vision task, let's go " +"ahead and install PyTorch and the **torchvision** library:" msgstr "" -"Félicitations ! Tu as réussi à construire et à faire fonctionner ton " -"premier système d'apprentissage fédéré. Le code source complet " -"`_ de cet exemple se trouve dans :code:`examples" -"/quickstart-mxnet`." +"Puisque nous voulons utiliser PyTorch pour résoudre une tâche de vision " +"par ordinateur, allons-y et installons PyTorch et la bibliothèque " +"**torchvision** :" -#: ../../source/tutorial-quickstart-xgboost.rst:594 -msgid "Comprehensive Federated XGBoost" +#: ../../source/tutorial-quickstart-pytorch.rst:39 +msgid "" +"Now that we have all our dependencies installed, let's run a simple " +"distributed training with two clients and one server. Our training " +"procedure and network architecture are based on PyTorch's `Deep Learning " +"with PyTorch " +"`_." msgstr "" +"Maintenant que nous avons installé toutes nos dépendances, lançons une " +"formation distribuée simple avec deux clients et un serveur. Notre " +"procédure de formation et l'architecture de notre réseau sont basées sur " +"`Deep Learning with PyTorch " +"`_ de" +" PyTorch." -#: ../../source/tutorial-quickstart-xgboost.rst:596 +#: ../../source/tutorial-quickstart-pytorch.rst:41 msgid "" -"Now that you have known how federated XGBoost work with Flower, it's time" -" to run some more comprehensive experiments by customising the " -"experimental settings. In the xgboost-comprehensive example (`full code " -"`_), we provide more options to define various experimental" -" setups, including aggregation strategies, data partitioning and " -"centralised/distributed evaluation. 
We also support :doc:`Flower " -"simulation ` making it easy to simulate large " -"client cohorts in a resource-aware manner. Let's take a look!" +"In a file called :code:`client.py`, import Flower and PyTorch related " +"packages:" msgstr "" +"Dans un fichier appelé :code:`client.py`, importe Flower et les paquets " +"liés à PyTorch :" -#: ../../source/tutorial-quickstart-xgboost.rst:603 -#, fuzzy -msgid "Cyclic training" -msgstr "Formation centralisée" +#: ../../source/tutorial-quickstart-pytorch.rst:56 +msgid "In addition, we define the device allocation in PyTorch with:" +msgstr "En outre, nous définissons l'attribution des appareils dans PyTorch avec :" -#: ../../source/tutorial-quickstart-xgboost.rst:605 +#: ../../source/tutorial-quickstart-pytorch.rst:62 msgid "" -"In addition to bagging aggregation, we offer a cyclic training scheme, " -"which performs FL in a client-by-client fashion. Instead of aggregating " -"multiple clients, there is only one single client participating in the " -"training per round in the cyclic training scenario. The trained local " -"XGBoost trees will be passed to the next client as an initialised model " -"for next round's boosting." +"We use PyTorch to load CIFAR10, a popular colored image classification " +"dataset for machine learning. The PyTorch :code:`DataLoader()` downloads " +"the training and test data that are then normalized." msgstr "" +"Nous utilisons PyTorch pour charger CIFAR10, un ensemble de données de " +"classification d'images colorées populaire pour l'apprentissage " +"automatique. Le :code:`DataLoader()` de PyTorch télécharge les données " +"d'entraînement et de test qui sont ensuite normalisées." -#: ../../source/tutorial-quickstart-xgboost.rst:609 +#: ../../source/tutorial-quickstart-pytorch.rst:78 msgid "" -"To do this, we first customise a :code:`ClientManager` in " -":code:`server_utils.py`:" +"Define the loss and optimizer with PyTorch. The training of the dataset " +"is done by looping over the dataset, measure the corresponding loss and " +"optimize it." msgstr "" +"Définis la perte et l'optimiseur avec PyTorch L'entraînement de " +"l'ensemble de données se fait en bouclant sur l'ensemble de données, en " +"mesurant la perte correspondante et en l'optimisant." -#: ../../source/tutorial-quickstart-xgboost.rst:649 +#: ../../source/tutorial-quickstart-pytorch.rst:94 msgid "" -"The customised :code:`ClientManager` samples all available clients in " -"each FL round based on the order of connection to the server. Then, we " -"define a new strategy :code:`FedXgbCyclic` in " -":code:`flwr.server.strategy.fedxgb_cyclic.py`, in order to sequentially " -"select only one client in given round and pass the received model to next" -" client." +"Define then the validation of the machine learning network. We loop over" +" the test set and measure the loss and accuracy of the test set." msgstr "" +"Définis ensuite la validation du réseau d'apprentissage automatique. Nous" +" passons en boucle sur l'ensemble de test et mesurons la perte et la " +"précision de l'ensemble de test." -#: ../../source/tutorial-quickstart-xgboost.rst:690 +#: ../../source/tutorial-quickstart-pytorch.rst:113 msgid "" -"Unlike the original :code:`FedAvg`, we don't perform aggregation here. " -"Instead, we just make a copy of the received client model as global model" -" by overriding :code:`aggregate_fit`." +"After defining the training and testing of a PyTorch machine learning " +"model, we use the functions for the Flower clients." 
msgstr "" +"Après avoir défini l'entraînement et le test d'un modèle d'apprentissage " +"automatique PyTorch, nous utilisons les fonctions pour les clients " +"Flower." -#: ../../source/tutorial-quickstart-xgboost.rst:693 +#: ../../source/tutorial-quickstart-pytorch.rst:115 msgid "" -"Also, the customised :code:`configure_fit` and :code:`configure_evaluate`" -" methods ensure the clients to be sequentially selected given FL round:" +"The Flower clients will use a simple CNN adapted from 'PyTorch: A 60 " +"Minute Blitz':" msgstr "" +"Les clients de Flower utiliseront un CNN simple adapté de \"PyTorch : A " +"60 Minute Blitz\" :" -#: ../../source/tutorial-quickstart-xgboost.rst:757 -msgid "Customised data partitioning" +#: ../../source/tutorial-quickstart-pytorch.rst:142 +msgid "" +"After loading the data set with :code:`load_data()` we define the Flower " +"interface." msgstr "" +"Après avoir chargé l'ensemble des données avec :code:`load_data()`, nous " +"définissons l'interface Flower." -#: ../../source/tutorial-quickstart-xgboost.rst:759 -msgid "" -"In :code:`dataset.py`, we have a function :code:`instantiate_partitioner`" -" to instantiate the data partitioner based on the given " -":code:`num_partitions` and :code:`partitioner_type`. Currently, we " -"provide four supported partitioner type to simulate the uniformity/non-" -"uniformity in data quantity (uniform, linear, square, exponential)." -msgstr "" - -#: ../../source/tutorial-quickstart-xgboost.rst:790 -#, fuzzy -msgid "Customised centralised/distributed evaluation" -msgstr "Évaluation centralisée" - -#: ../../source/tutorial-quickstart-xgboost.rst:792 +#: ../../source/tutorial-quickstart-pytorch.rst:144 +#: ../../source/tutorial-quickstart-tensorflow.rst:54 msgid "" -"To facilitate centralised evaluation, we define a function in " -":code:`server_utils.py`:" +"The Flower server interacts with clients through an interface called " +":code:`Client`. When the server selects a particular client for training," +" it sends training instructions over the network. The client receives " +"those instructions and calls one of the :code:`Client` methods to run " +"your code (i.e., to train the neural network we defined earlier)." msgstr "" +"Le serveur Flower interagit avec les clients par le biais d'une interface" +" appelée :code:`Client`. Lorsque le serveur sélectionne un client " +"particulier pour la formation, il envoie des instructions de formation " +"sur le réseau. Le client reçoit ces instructions et appelle l'une des " +"méthodes :code:`Client` pour exécuter ton code (c'est-à-dire pour former " +"le réseau neuronal que nous avons défini plus tôt)." -#: ../../source/tutorial-quickstart-xgboost.rst:824 +#: ../../source/tutorial-quickstart-pytorch.rst:150 msgid "" -"This function returns a evaluation function which instantiates a " -":code:`Booster` object and loads the global model weights to it. The " -"evaluation is conducted by calling :code:`eval_set()` method, and the " -"tested AUC value is reported." +"Flower provides a convenience class called :code:`NumPyClient` which " +"makes it easier to implement the :code:`Client` interface when your " +"workload uses PyTorch. Implementing :code:`NumPyClient` usually means " +"defining the following methods (:code:`set_parameters` is optional " +"though):" msgstr "" +"Flower fournit une classe de commodité appelée :code:`NumPyClient` qui " +"facilite la mise en œuvre de l'interface :code:`Client` lorsque ta charge" +" de travail utilise PyTorch. 
Mettre en œuvre :code:`NumPyClient` signifie" +" généralement définir les méthodes suivantes (:code:`set_parameters` est " +"cependant facultatif) :" -#: ../../source/tutorial-quickstart-xgboost.rst:827 -msgid "" -"As for distributed evaluation on the clients, it's same as the quick-" -"start example by overriding the :code:`evaluate()` method insides the " -":code:`XgbClient` class in :code:`client_utils.py`." -msgstr "" +#: ../../source/tutorial-quickstart-pytorch.rst:156 +#: ../../source/tutorial-quickstart-scikitlearn.rst:119 +msgid "return the model weight as a list of NumPy ndarrays" +msgstr "renvoie le poids du modèle sous la forme d'une liste de ndarrays NumPy" -#: ../../source/tutorial-quickstart-xgboost.rst:831 -#, fuzzy -msgid "Flower simulation" -msgstr "Simulation de moniteur" +#: ../../source/tutorial-quickstart-pytorch.rst:157 +#: ../../source/tutorial-quickstart-scikitlearn.rst:121 +msgid ":code:`set_parameters` (optional)" +msgstr ":code:`set_parameters` (optionnel)" -#: ../../source/tutorial-quickstart-xgboost.rst:832 +#: ../../source/tutorial-quickstart-pytorch.rst:158 +#: ../../source/tutorial-quickstart-scikitlearn.rst:121 msgid "" -"We also provide an example code (:code:`sim.py`) to use the simulation " -"capabilities of Flower to simulate federated XGBoost training on either a" -" single machine or a cluster of machines." +"update the local model weights with the parameters received from the " +"server" msgstr "" +"mettre à jour les poids du modèle local avec les paramètres reçus du " +"serveur" -#: ../../source/tutorial-quickstart-xgboost.rst:866 -msgid "" -"After importing all required packages, we define a :code:`main()` " -"function to perform the simulation process:" -msgstr "" +#: ../../source/tutorial-quickstart-pytorch.rst:160 +#: ../../source/tutorial-quickstart-scikitlearn.rst:124 +msgid "set the local model weights" +msgstr "fixe les poids du modèle local" -#: ../../source/tutorial-quickstart-xgboost.rst:921 -msgid "" -"We first load the dataset and perform data partitioning, and the pre-" -"processed data is stored in a :code:`list`. After the simulation begins, " -"the clients won't need to pre-process their partitions again." -msgstr "" +#: ../../source/tutorial-quickstart-pytorch.rst:161 +#: ../../source/tutorial-quickstart-scikitlearn.rst:125 +msgid "train the local model" +msgstr "entraîne le modèle local" -#: ../../source/tutorial-quickstart-xgboost.rst:924 -msgid "Then, we define the strategies and other hyper-parameters:" -msgstr "" +#: ../../source/tutorial-quickstart-pytorch.rst:162 +#: ../../source/tutorial-quickstart-scikitlearn.rst:126 +msgid "receive the updated local model weights" +msgstr "recevoir les poids du modèle local mis à jour" -#: ../../source/tutorial-quickstart-xgboost.rst:975 -msgid "" -"After that, we start the simulation by calling " -":code:`fl.simulation.start_simulation`:" -msgstr "" +#: ../../source/tutorial-quickstart-pytorch.rst:164 +#: ../../source/tutorial-quickstart-scikitlearn.rst:128 +msgid "test the local model" +msgstr "teste le modèle local" -#: ../../source/tutorial-quickstart-xgboost.rst:995 +#: ../../source/tutorial-quickstart-pytorch.rst:166 +msgid "which can be implemented in the following way:" +msgstr "qui peut être mis en œuvre de la manière suivante :" + +#: ../../source/tutorial-quickstart-pytorch.rst:189 +#: ../../source/tutorial-quickstart-tensorflow.rst:82 msgid "" -"One of key parameters for :code:`start_simulation` is :code:`client_fn` " -"which returns a function to construct a client. 
We define it as follows:" +"We can now create an instance of our class :code:`CifarClient` and add " +"one line to actually run this client:" msgstr "" +"Nous pouvons maintenant créer une instance de notre classe " +":code:`CifarClient` et ajouter une ligne pour exécuter ce client :" -#: ../../source/tutorial-quickstart-xgboost.rst:1038 -msgid "Arguments parser" +#: ../../source/tutorial-quickstart-pytorch.rst:196 +#: ../../source/tutorial-quickstart-tensorflow.rst:90 +#, fuzzy +msgid "" +"That's it for the client. We only have to implement :code:`Client` or " +":code:`NumPyClient` and call :code:`fl.client.start_client()`. If you " +"implement a client of type :code:`NumPyClient` you'll need to first call " +"its :code:`to_client()` method. The string :code:`\"[::]:8080\"` tells " +"the client which server to connect to. In our case we can run the server " +"and the client on the same machine, therefore we use " +":code:`\"[::]:8080\"`. If we run a truly federated workload with the " +"server and clients running on different machines, all that needs to " +"change is the :code:`server_address` we point the client at." msgstr "" +"C'est tout pour le client. Il nous suffit d'implémenter :code:`Client` ou" +" :code:`NumPyClient` et d'appeler :code:`fl.client.start_client()`. La " +"chaîne :code:`\"[: :]:8080\"` indique au client à quel serveur se " +"connecter. Dans notre cas, nous pouvons exécuter le serveur et le client " +"sur la même machine, c'est pourquoi nous utilisons :code:`\"[: " +":]:8080\"`. Si nous exécutons une charge de travail véritablement fédérée" +" avec le serveur et les clients fonctionnant sur des machines " +"différentes, tout ce qui doit changer est l'adresse " +":code:`server_address` vers laquelle nous dirigeons le client." -#: ../../source/tutorial-quickstart-xgboost.rst:1040 +#: ../../source/tutorial-quickstart-pytorch.rst:226 +#: ../../source/tutorial-quickstart-scikitlearn.rst:239 +#: ../../source/tutorial-quickstart-tensorflow.rst:122 +#: ../../source/tutorial-quickstart-xgboost.rst:533 msgid "" -"In :code:`utils.py`, we define the arguments parsers for clients, server " -"and simulation, allowing users to specify different experimental " -"settings. Let's first see the sever side:" +"Once the server is running we can start the clients in different " +"terminals. Open a new terminal and start the first client:" msgstr "" +"Une fois que le serveur fonctionne, nous pouvons démarrer les clients " +"dans différents terminaux. Ouvre un nouveau terminal et démarre le " +"premier client :" -#: ../../source/tutorial-quickstart-xgboost.rst:1086 +#: ../../source/tutorial-quickstart-pytorch.rst:233 +#: ../../source/tutorial-quickstart-scikitlearn.rst:246 +#: ../../source/tutorial-quickstart-tensorflow.rst:129 +#: ../../source/tutorial-quickstart-xgboost.rst:540 +msgid "Open another terminal and start the second client:" +msgstr "Ouvre un autre terminal et démarre le deuxième client :" + +#: ../../source/tutorial-quickstart-pytorch.rst:239 +#: ../../source/tutorial-quickstart-scikitlearn.rst:252 +#: ../../source/tutorial-quickstart-xgboost.rst:546 msgid "" -"This allows user to specify training strategies / the number of total " -"clients / FL rounds / participating clients / clients for evaluation, and" -" evaluation fashion. Note that with :code:`--centralised-eval`, the sever" -" will do centralised evaluation and all functionalities for client " -"evaluation will be disabled." +"Each client will have its own dataset. 
You should now see how the " +"training does in the very first terminal (the one that started the " +"server):" msgstr "" +"Chaque client aura son propre ensemble de données. Tu devrais maintenant " +"voir comment la formation se déroule dans le tout premier terminal (celui" +" qui a démarré le serveur) :" -#: ../../source/tutorial-quickstart-xgboost.rst:1090 -msgid "Then, the argument parser on client side:" +#: ../../source/tutorial-quickstart-pytorch.rst:271 +#, fuzzy +msgid "" +"Congratulations! You've successfully built and run your first federated " +"learning system. The full `source code " +"`_ for this example can be found in :code:`examples" +"/quickstart-pytorch`." msgstr "" +"Félicitations ! Tu as réussi à construire et à faire fonctionner ton " +"premier système d'apprentissage fédéré. Le code source complet " +"`_ de cet exemple se trouve dans :code:`examples" +"/quickstart-pytorch`." -#: ../../source/tutorial-quickstart-xgboost.rst:1144 +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:-1 msgid "" -"This defines various options for client data partitioning. Besides, " -"clients also have an option to conduct evaluation on centralised test set" -" by setting :code:`--centralised-eval`, as well as an option to perform " -"scaled learning rate based on the number of clients by setting :code" -":`--scaled-lr`." +"Check out this Federated Learning quickstart tutorial for using Flower " +"with PyTorch Lightning to train an Auto Encoder model on MNIST." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1148 -msgid "We also have an argument parser for simulation:" -msgstr "" +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:5 +msgid "Quickstart PyTorch Lightning" +msgstr "Démarrage rapide de PyTorch Lightning" -#: ../../source/tutorial-quickstart-xgboost.rst:1226 -msgid "This integrates all arguments for both client and server sides." +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:10 +#, fuzzy +msgid "" +"Let's build a horizontal federated learning system using PyTorch " +"Lightning and Flower!" msgstr "" +"Construisons un système d'apprentissage fédéré en utilisant PyTorch " +"Lightning et Flower !" -#: ../../source/tutorial-quickstart-xgboost.rst:1229 +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:12 #, fuzzy -msgid "Example commands" -msgstr "Exemples de PyTorch" +msgid "" +"Please refer to the `full code example " +"`_ to learn more." +msgstr "" +"Réfère-toi à l'exemple de code complet " +"`_ pour en savoir plus." -#: ../../source/tutorial-quickstart-xgboost.rst:1231 +#: ../../source/tutorial-quickstart-scikitlearn.rst:-1 msgid "" -"To run a centralised evaluated experiment with bagging strategy on 5 " -"clients with exponential distribution for 50 rounds, we first start the " -"server as below:" +"Check out this Federated Learning quickstart tutorial for using Flower " +"with scikit-learn to train a linear regression model." 
msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1238 -#, fuzzy -msgid "Then, on each client terminal, we start the clients:" -msgstr "Ouvre un autre terminal et démarre le deuxième client :" +#: ../../source/tutorial-quickstart-scikitlearn.rst:5 +msgid "Quickstart scikit-learn" +msgstr "Démarrage rapide de scikit-learn" -#: ../../source/tutorial-quickstart-xgboost.rst:1244 -msgid "To run the same experiment with Flower simulation:" +#: ../../source/tutorial-quickstart-scikitlearn.rst:10 +msgid "" +"In this tutorial, we will learn how to train a :code:`Logistic " +"Regression` model on MNIST using Flower and scikit-learn." msgstr "" +"Dans ce tutoriel, nous allons apprendre à former un :code:`modèle de " +"régression logistique` sur MNIST en utilisant Flower et scikit-learn." -#: ../../source/tutorial-quickstart-xgboost.rst:1250 +#: ../../source/tutorial-quickstart-scikitlearn.rst:12 #, fuzzy msgid "" -"The full `code `_ for this comprehensive example can be found in" -" :code:`examples/xgboost-comprehensive`." +"It is recommended to create a virtual environment and run everything " +"within this :doc:`virtualenv `." msgstr "" -"Félicitations ! Tu as réussi à construire et à faire fonctionner ton " -"premier système d'apprentissage fédéré. Le code source complet " -"`_ de cet exemple se trouve dans :code:`examples" -"/quickstart-mxnet`." - -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:9 -#, fuzzy -msgid "Build a strategy from scratch" -msgstr "Élaborer une stratégie à partir de zéro" +"Il est recommandé de créer un environnement virtuel et de tout exécuter " +"dans ce `virtualenv `_." -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:11 -#, fuzzy +#: ../../source/tutorial-quickstart-scikitlearn.rst:16 msgid "" -"Welcome to the third part of the Flower federated learning tutorial. In " -"previous parts of this tutorial, we introduced federated learning with " -"PyTorch and Flower (`part 1 `__) and we learned how strategies " -"can be used to customize the execution on both the server and the clients" -" (`part 2 `__)." +"*Clients* are responsible for generating individual model parameter " +"updates for the model based on their local datasets. These updates are " +"then sent to the *server* which will aggregate them to produce an updated" +" global model. Finally, the *server* sends this improved version of the " +"model back to each *client*. A complete cycle of parameters updates is " +"called a *round*." msgstr "" -"Bienvenue dans la troisième partie du tutoriel sur l'apprentissage fédéré" -" Flower. Dans les parties précédentes de ce tutoriel, nous avons présenté" -" l'apprentissage fédéré avec PyTorch et Flower (`partie 1 " -"`__) " -"et nous avons appris comment les stratégies peuvent être utilisées pour " -"personnaliser l'exécution à la fois sur le serveur et sur les clients " -"(`partie 2 `__)." +"*Les clients* sont chargés de générer des mises à jour individuelles des " +"paramètres du modèle en fonction de leurs ensembles de données locales. " +"Ces mises à jour sont ensuite envoyées au *serveur* qui les agrège pour " +"produire un modèle global mis à jour. Enfin, le *serveur* renvoie cette " +"version améliorée du modèle à chaque *client*. Un cycle complet de mises " +"à jour des paramètres s'appelle un *round*." 
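
The quickstart entries above describe the client-side API only in prose: implement :code:`Client` or :code:`NumPyClient`, convert it with :code:`to_client()`, and pass it to :code:`fl.client.start_client()` together with the server address. The minimal sketch below illustrates that flow; the toy NumPy "model" and its update rule are assumptions made purely for illustration and are not taken from the Flower examples::

    # Illustrative sketch only: a NumPyClient whose "model" is a single NumPy
    # array. Everything except the Flower API calls is an assumption.
    import numpy as np
    import flwr as fl


    class ToyClient(fl.client.NumPyClient):
        def __init__(self):
            self.weights = np.zeros(10)  # stand-in for real model parameters

        def get_parameters(self, config):
            # return the model weights as a list of NumPy ndarrays
            return [self.weights]

        def fit(self, parameters, config):
            # update the local "model" with the parameters received from the
            # server, then pretend one round of local training happened
            self.weights = parameters[0] + 0.1
            return [self.weights], 10, {}

        def evaluate(self, parameters, config):
            # report a dummy loss/accuracy computed from the received parameters
            loss = float(np.mean(parameters[0] ** 2))
            return loss, 10, {"accuracy": 0.0}


    if __name__ == "__main__":
        # A NumPyClient must be converted with .to_client() before being passed
        # to fl.client.start_client(); "[::]:8080" assumes the server runs on
        # the same machine, as in the quickstarts.
        fl.client.start_client(
            server_address="[::]:8080",
            client=ToyClient().to_client(),
        )

Swapping the toy array for a real model only changes the bodies of the three methods; the :code:`start_client()` call stays the same.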
-#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:13 +#: ../../source/tutorial-quickstart-scikitlearn.rst:20 msgid "" -"In this notebook, we'll continue to customize the federated learning " -"system we built previously by creating a custom version of FedAvg (again," -" using `Flower `__ and `PyTorch " -"`__)." +"Now that we have a rough idea of what is going on, let's get started. We " +"first need to install Flower. You can do this by running:" msgstr "" -"Dans ce carnet, nous allons continuer à personnaliser le système " -"d'apprentissage fédéré que nous avons construit précédemment en créant " -"une version personnalisée de FedAvg (encore une fois, en utilisant " -"`Flower `__ et `PyTorch `__)." +"Maintenant que nous avons une idée approximative de ce qui se passe, " +"commençons. Nous devons d'abord installer Flower. Tu peux le faire en " +"lançant :" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:15 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:16 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:15 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:15 +#: ../../source/tutorial-quickstart-scikitlearn.rst:26 +#, fuzzy +msgid "Since we want to use scikit-learn, let's go ahead and install it:" +msgstr "Puisque nous voulons utiliser scikt-learn, allons-y et installons-le :" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:32 +msgid "Or simply install all dependencies using Poetry:" +msgstr "Ou installe simplement toutes les dépendances à l'aide de Poetry :" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:42 msgid "" -"`Star Flower on GitHub `__ ⭐️ and join " -"the Flower community on Slack to connect, ask questions, and get help: " -"`Join Slack `__ 🌼 We'd love to hear from " -"you in the ``#introductions`` channel! And if anything is unclear, head " -"over to the ``#questions`` channel." +"Now that we have all our dependencies installed, let's run a simple " +"distributed training with two clients and one server. However, before " +"setting up the client and server, we will define all functionalities that" +" we need for our federated learning setup within :code:`utils.py`. The " +":code:`utils.py` contains different functions defining all the machine " +"learning basics:" msgstr "" -"`Star Flower on GitHub `__ ⭐️ et " -"rejoignez la communauté Flower sur Slack pour vous connecter, poser des " -"questions et obtenir de l'aide : `Join Slack `__ 🌼 Nous serions ravis d'avoir de vos nouvelles dans le canal " -"``#introductions`` ! Et si quelque chose n'est pas clair, rendez-vous sur" -" le canal ``#questions``." - -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:17 -msgid "Let's build a new ``Strategy`` from scratch!" -msgstr "Construisons une nouvelle ``Stratégie`` à partir de zéro !" +"Maintenant que toutes nos dépendances sont installées, exécutons une " +"formation distribuée simple avec deux clients et un serveur. Cependant, " +"avant de configurer le client et le serveur, nous allons définir toutes " +"les fonctionnalités dont nous avons besoin pour notre configuration " +"d'apprentissage fédéré dans :code:`utils.py`. 
Le :code:`utils.py` " +"contient différentes fonctions définissant toutes les bases de " +"l'apprentissage automatique :" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:29 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:29 -msgid "Preparation" -msgstr "Préparation" +#: ../../source/tutorial-quickstart-scikitlearn.rst:45 +msgid ":code:`get_model_parameters()`" +msgstr ":code:`get_model_parameters()`" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:31 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:32 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:31 -msgid "" -"Before we begin with the actual code, let's make sure that we have " -"everything we need." +#: ../../source/tutorial-quickstart-scikitlearn.rst:46 +msgid "Returns the parameters of a :code:`sklearn` LogisticRegression model" msgstr "" -"Avant de commencer le code proprement dit, assurons-nous que nous " -"disposons de tout ce dont nous avons besoin." +"Renvoie les paramètres d'un modèle de régression logistique " +":code:`sklearn`" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:43 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:44 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:43 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:43 -msgid "Installing dependencies" -msgstr "Installation des dépendances" +#: ../../source/tutorial-quickstart-scikitlearn.rst:47 +msgid ":code:`set_model_params()`" +msgstr ":code:`set_model_params()`" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:45 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:46 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:45 -msgid "First, we install the necessary packages:" -msgstr "Tout d'abord, nous installons les paquets nécessaires :" +#: ../../source/tutorial-quickstart-scikitlearn.rst:48 +#, fuzzy +msgid "Sets the parameters of a :code:`sklearn` LogisticRegression model" +msgstr "Définit les paramètres d'un modèle de régression logistique :code:`sklean`" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:65 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:66 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:65 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:65 +#: ../../source/tutorial-quickstart-scikitlearn.rst:50 +msgid ":code:`set_initial_params()`" +msgstr ":code:`set_initial_params()`" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:50 +msgid "Initializes the model parameters that the Flower server will ask for" +msgstr "Initialise les paramètres du modèle que le serveur de Flower demandera" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:52 msgid "" -"Now that we have all dependencies installed, we can import everything we " -"need for this tutorial:" +"Please check out :code:`utils.py` `here " +"`_ for more details. The pre-defined functions are used in" +" the :code:`client.py` and imported. The :code:`client.py` also requires " +"to import several packages such as Flower and scikit-learn:" msgstr "" -"Maintenant que toutes les dépendances sont installées, nous pouvons " -"importer tout ce dont nous avons besoin pour ce tutoriel :" +"Tu peux consulter :code:`utils.py` `ici " +"`_ pour plus de détails. 
Les fonctions prédéfinies sont " +"utilisées dans :code:`client.py` et importées. :code:`client.py` " +"nécessite également d'importer plusieurs paquets tels que Flower et " +"scikit-learn :" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:101 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:102 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:101 +#: ../../source/tutorial-quickstart-scikitlearn.rst:67 msgid "" -"It is possible to switch to a runtime that has GPU acceleration enabled " -"(on Google Colab: ``Runtime > Change runtime type > Hardware acclerator: " -"GPU > Save``). Note, however, that Google Colab is not always able to " -"offer GPU acceleration. If you see an error related to GPU availability " -"in one of the following sections, consider switching back to CPU-based " -"execution by setting ``DEVICE = torch.device(\"cpu\")``. If the runtime " -"has GPU acceleration enabled, you should see the output ``Training on " -"cuda``, otherwise it'll say ``Training on cpu``." +"Prior to local training, we need to load the MNIST dataset, a popular " +"image classification dataset of handwritten digits for machine learning, " +"and partition the dataset for FL. This can be conveniently achieved using" +" `Flower Datasets `_. The " +":code:`FederatedDataset.load_partition()` method loads the partitioned " +"training set for each partition ID defined in the :code:`--partition-id` " +"argument." msgstr "" -"Il est possible de passer à un runtime dont l'accélération GPU est " -"activée (sur Google Colab : ``Runtime > Change runtime type > Hardware " -"acclerator : GPU > Save``). Note cependant que Google Colab n'est pas " -"toujours en mesure de proposer l'accélération GPU. Si tu vois une erreur " -"liée à la disponibilité du GPU dans l'une des sections suivantes, " -"envisage de repasser à une exécution basée sur le CPU en définissant " -"``DEVICE = torch.device(\"cpu\")``. Si le runtime a activé l'accélération" -" GPU, tu devrais voir apparaître le résultat ``Training on cuda``, sinon " -"il dira ``Training on cpu``." - -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:114 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:115 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:114 -msgid "Data loading" -msgstr "Chargement des données" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:116 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:116 +#: ../../source/tutorial-quickstart-scikitlearn.rst:95 msgid "" -"Let's now load the CIFAR-10 training and test set, partition them into " -"ten smaller datasets (each split into training and validation set), and " -"wrap everything in their own ``DataLoader``. We introduce a new parameter" -" ``num_clients`` which allows us to call ``load_datasets`` with different" -" numbers of clients." +"Next, the logistic regression model is defined and initialized with " +":code:`utils.set_initial_params()`." msgstr "" -"Chargeons maintenant les ensembles d'entraînement et de test CIFAR-10, " -"divisons-les en dix ensembles de données plus petits (chacun divisé en " -"ensemble d'entraînement et de validation), et enveloppons le tout dans " -"leur propre ``DataLoader``. Nous introduisons un nouveau paramètre " -"``num_clients`` qui nous permet d'appeler ``load_datasets`` avec " -"différents nombres de clients." 
- -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:167 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:168 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:167 -msgid "Model training/evaluation" -msgstr "Formation/évaluation du modèle" +"Ensuite, le modèle de régression logistique est défini et initialisé avec" +" :code:`utils.set_initial_params()`." -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:169 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:170 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:169 +#: ../../source/tutorial-quickstart-scikitlearn.rst:107 msgid "" -"Let's continue with the usual model definition (including " -"``set_parameters`` and ``get_parameters``), training and test functions:" +"The Flower server interacts with clients through an interface called " +":code:`Client`. When the server selects a particular client for training," +" it sends training instructions over the network. The client receives " +"those instructions and calls one of the :code:`Client` methods to run " +"your code (i.e., to fit the logistic regression we defined earlier)." msgstr "" -"Continuons avec la définition habituelle du modèle (y compris " -"``set_parameters`` et ``get_parameters``), les fonctions d'entraînement " -"et de test :" - -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:258 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:258 -msgid "Flower client" -msgstr "Client de Flower" +"Le serveur Flower interagit avec les clients par le biais d'une interface" +" appelée :code:`Client`. Lorsque le serveur sélectionne un client " +"particulier pour la formation, il envoie des instructions de formation " +"sur le réseau. Le client reçoit ces instructions et appelle l'une des " +"méthodes :code:`Client` pour exécuter ton code (c'est-à-dire pour ajuster" +" la régression logistique que nous avons définie plus tôt)." -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:260 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:260 +#: ../../source/tutorial-quickstart-scikitlearn.rst:113 msgid "" -"To implement the Flower client, we (again) create a subclass of " -"``flwr.client.NumPyClient`` and implement the three methods " -"``get_parameters``, ``fit``, and ``evaluate``. Here, we also pass the " -"``cid`` to the client and use it log additional details:" +"Flower provides a convenience class called :code:`NumPyClient` which " +"makes it easier to implement the :code:`Client` interface when your " +"workload uses scikit-learn. Implementing :code:`NumPyClient` usually " +"means defining the following methods (:code:`set_parameters` is optional " +"though):" msgstr "" -"Pour mettre en œuvre le client Flower, nous créons (à nouveau) une sous-" -"classe de ``flwr.client.NumPyClient`` et mettons en œuvre les trois " -"méthodes ``get_parameters``, ``fit`` et ``evaluate``. Ici, nous " -"transmettons également le ``cid`` au client et l'utilisons pour consigner" -" des détails supplémentaires :" +"Flower fournit une classe de commodité appelée :code:`NumPyClient` qui " +"facilite la mise en œuvre de l'interface :code:`Client` lorsque ta charge" +" de travail utilise scikit-learn. 
Mettre en œuvre :code:`NumPyClient` " +"signifie généralement définir les méthodes suivantes " +"(:code:`set_parameters` est cependant facultatif) :" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:308 -msgid "Let's test what we have so far before we continue:" -msgstr "Testons ce que nous avons jusqu'à présent avant de continuer :" +#: ../../source/tutorial-quickstart-scikitlearn.rst:122 +msgid "is directly imported with :code:`utils.set_model_params()`" +msgstr "est directement importé avec :code:`utils.set_model_params()`" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:339 -msgid "Build a Strategy from scratch" -msgstr "Élaborer une stratégie à partir de zéro" +#: ../../source/tutorial-quickstart-scikitlearn.rst:130 +msgid "The methods can be implemented in the following way:" +msgstr "Les méthodes peuvent être mises en œuvre de la manière suivante :" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:341 +#: ../../source/tutorial-quickstart-scikitlearn.rst:153 msgid "" -"Let’s overwrite the ``configure_fit`` method such that it passes a higher" -" learning rate (potentially also other hyperparameters) to the optimizer " -"of a fraction of the clients. We will keep the sampling of the clients as" -" it is in ``FedAvg`` and then change the configuration dictionary (one of" -" the ``FitIns`` attributes)." +"We can now create an instance of our class :code:`MnistClient` and add " +"one line to actually run this client:" msgstr "" -"Remplaçons la méthode ``configure_fit`` de façon à ce qu'elle transmette " -"un taux d'apprentissage plus élevé (potentiellement aussi d'autres " -"hyperparamètres) à l'optimiseur d'une fraction des clients. Nous " -"garderons l'échantillonnage des clients tel qu'il est dans ``FedAvg`` et " -"changerons ensuite le dictionnaire de configuration (l'un des attributs " -"``FitIns``)." +"Nous pouvons maintenant créer une instance de notre classe " +":code:`MnistClient` et ajouter une ligne pour exécuter ce client :" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:507 +#: ../../source/tutorial-quickstart-scikitlearn.rst:160 +#, fuzzy msgid "" -"The only thing left is to use the newly created custom Strategy " -"``FedCustom`` when starting the experiment:" +"That's it for the client. We only have to implement :code:`Client` or " +":code:`NumPyClient` and call :code:`fl.client.start_client()`. If you " +"implement a client of type :code:`NumPyClient` you'll need to first call " +"its :code:`to_client()` method. The string :code:`\"0.0.0.0:8080\"` tells" +" the client which server to connect to. In our case we can run the server" +" and the client on the same machine, therefore we use " +":code:`\"0.0.0.0:8080\"`. If we run a truly federated workload with the " +"server and clients running on different machines, all that needs to " +"change is the :code:`server_address` we pass to the client." msgstr "" -"Il ne reste plus qu'à utiliser la stratégie personnalisée nouvellement " -"créée ``FedCustom`` lors du démarrage de l'expérience :" - -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:534 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:932 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:697 -msgid "Recap" -msgstr "Récapitulation" +"C'est tout pour le client. Il nous suffit d'implémenter :code:`Client` ou" +" :code:`NumPyClient` et d'appeler :code:`fl.client.start_client()`. 
La " +"chaîne :code:`\"0.0.0:8080\"` indique au client à quel serveur se " +"connecter. Dans notre cas, nous pouvons exécuter le serveur et le client " +"sur la même machine, c'est pourquoi nous utilisons " +":code:`\"0.0.0:8080\"`. Si nous exécutons une charge de travail " +"véritablement fédérée avec le serveur et les clients s'exécutant sur des " +"machines différentes, tout ce qui doit changer est :code:`server_address`" +" que nous transmettons au client." -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:536 +#: ../../source/tutorial-quickstart-scikitlearn.rst:169 msgid "" -"In this notebook, we’ve seen how to implement a custom strategy. A custom" -" strategy enables granular control over client node configuration, result" -" aggregation, and more. To define a custom strategy, you only have to " -"overwrite the abstract methods of the (abstract) base class ``Strategy``." -" To make custom strategies even more powerful, you can pass custom " -"functions to the constructor of your new class (``__init__``) and then " -"call these functions whenever needed." +"The following Flower server is a little bit more advanced and returns an " +"evaluation function for the server-side evaluation. First, we import " +"again all required libraries such as Flower and scikit-learn." msgstr "" -"Dans ce carnet, nous avons vu comment mettre en place une stratégie " -"personnalisée. Une stratégie personnalisée permet un contrôle granulaire " -"sur la configuration des nœuds clients, l'agrégation des résultats, et " -"bien plus encore. Pour définir une stratégie personnalisée, il te suffit " -"d'écraser les méthodes abstraites de la classe de base (abstraite) " -"``Strategy``. Pour rendre les stratégies personnalisées encore plus " -"puissantes, tu peux passer des fonctions personnalisées au constructeur " -"de ta nouvelle classe (``__init__``) et appeler ensuite ces fonctions à " -"chaque fois que c'est nécessaire." +"Le serveur Flower suivant est un peu plus avancé et renvoie une fonction " +"d'évaluation pour l'évaluation côté serveur. Tout d'abord, nous importons" +" à nouveau toutes les bibliothèques requises telles que Flower et scikit-" +"learn." 
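
The three :code:`utils.py` helpers listed a few entries above (:code:`get_model_parameters()`, :code:`set_model_params()`, :code:`set_initial_params()`) are only described by name. A hedged reconstruction of what such helpers can look like for a :code:`LogisticRegression` model is sketched below; the zero initialisation and the MNIST shapes (10 classes, 784 features) are assumptions, not code copied from the example::

    # Hedged reconstruction of utils.py-style helpers; not the example's code.
    import numpy as np
    from sklearn.linear_model import LogisticRegression


    def get_model_parameters(model: LogisticRegression):
        # Return the parameters of a sklearn LogisticRegression model
        if model.fit_intercept:
            return [model.coef_, model.intercept_]
        return [model.coef_]


    def set_model_params(model: LogisticRegression, params):
        # Update the local model weights with the parameters received from the server
        model.coef_ = params[0]
        if model.fit_intercept:
            model.intercept_ = params[1]
        return model


    def set_initial_params(model: LogisticRegression, n_classes=10, n_features=784):
        # Initialise the parameters the Flower server will ask for before any
        # training has happened (MNIST assumption: 10 digits, 28 * 28 = 784 pixels)
        model.classes_ = np.arange(n_classes)
        model.coef_ = np.zeros((n_classes, n_features))
        if model.fit_intercept:
            model.intercept_ = np.zeros(n_classes)
        return model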
-#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:550 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:948 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:729 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:715 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:369 -msgid "" -"Before you continue, make sure to join the Flower community on Slack: " -"`Join Slack `__" -msgstr "" -"Avant de continuer, n'oublie pas de rejoindre la communauté Flower sur " -"Slack : `Join Slack `__" +#: ../../source/tutorial-quickstart-scikitlearn.rst:172 +msgid ":code:`server.py`, import Flower and start the server:" +msgstr ":code:`server.py`, importe Flower et démarre le serveur :" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:552 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:950 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:731 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:717 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:371 +#: ../../source/tutorial-quickstart-scikitlearn.rst:185 +#, fuzzy msgid "" -"There's a dedicated ``#questions`` channel if you need help, but we'd " -"also love to hear who you are in ``#introductions``!" +"The number of federated learning rounds is set in :code:`fit_round()` and" +" the evaluation is defined in :code:`get_evaluate_fn()`. The evaluation " +"function is called after each federated learning round and gives you " +"information about loss and accuracy. Note that we also make use of Flower" +" Datasets here to load the test split of the MNIST dataset for server-" +"side evaluation." msgstr "" -"Il existe un canal dédié aux ``questions`` si vous avez besoin d'aide, " -"mais nous aimerions aussi savoir qui vous êtes dans ``#introductions`` !" +"Le nombre de tours d'apprentissage fédéré est défini dans " +":code:`fit_round()` et l'évaluation est définie dans " +":code:`get_evaluate_fn()`. La fonction d'évaluation est appelée après " +"chaque tour d'apprentissage fédéré et te donne des informations sur la " +"perte et la précision." -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:554 -#, fuzzy +#: ../../source/tutorial-quickstart-scikitlearn.rst:213 msgid "" -"The `Flower Federated Learning Tutorial - Part 4 " -"`__ introduces ``Client``, the flexible API underlying " -"``NumPyClient``." +"The :code:`main` contains the server-side parameter initialization " +":code:`utils.set_initial_params()` as well as the aggregation strategy " +":code:`fl.server.strategy:FedAvg()`. The strategy is the default one, " +"federated averaging (or FedAvg), with two clients and evaluation after " +"each federated learning round. The server can be started with the command" +" :code:`fl.server.start_server(server_address=\"0.0.0.0:8080\", " +"strategy=strategy, config=fl.server.ServerConfig(num_rounds=3))`." msgstr "" -"Le `Tutoriel d'apprentissage fédéré Flower - Partie 4 " -"`__ présente ``Client``, l'API flexible qui sous-tend " -"``NumPyClient``." - -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:9 -#, fuzzy -msgid "Customize the client" -msgstr "Création du client IMDBC" +"Le :code:`main` contient l'initialisation des paramètres côté serveur " +":code:`utils.set_initial_params()` ainsi que la stratégie d'agrégation " +":code:`fl.server.strategy:FedAvg()`. 
La stratégie est celle par défaut, " +"la moyenne fédérée (ou FedAvg), avec deux clients et une évaluation après" +" chaque tour d'apprentissage fédéré. Le serveur peut être démarré avec la" +" commande :code:`fl.server.start_server(server_address=\"0.0.0.0:8080\", " +"strategy=strategy, config=fl.server.ServerConfig(num_rounds=3))`." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:11 -#, fuzzy +#: ../../source/tutorial-quickstart-scikitlearn.rst:232 msgid "" -"Welcome to the fourth part of the Flower federated learning tutorial. In " -"the previous parts of this tutorial, we introduced federated learning " -"with PyTorch and Flower (`part 1 `__), we learned how " -"strategies can be used to customize the execution on both the server and " -"the clients (`part 2 `__), and we built our own " -"custom strategy from scratch (`part 3 `__)." +"With both client and server ready, we can now run everything and see " +"federated learning in action. Federated learning systems usually have a " +"server and multiple clients. We, therefore, have to start the server " +"first:" msgstr "" -"Bienvenue dans la quatrième partie du tutoriel sur l'apprentissage fédéré" -" Flower. Dans les parties précédentes de ce tutoriel, nous avons présenté" -" l'apprentissage fédéré avec PyTorch et Flower (`partie 1 " -"`__), " -"nous avons appris comment les stratégies peuvent être utilisées pour " -"personnaliser l'exécution à la fois sur le serveur et les clients " -"(`partie 2 `__), et nous avons construit notre propre stratégie " -"personnalisée à partir de zéro (`partie 3 - WIP " -"`__)." +"Le client et le serveur étant prêts, nous pouvons maintenant tout lancer " +"et voir l'apprentissage fédéré en action. Les systèmes d'apprentissage " +"fédéré ont généralement un serveur et plusieurs clients. Nous devons donc" +" commencer par lancer le serveur :" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:14 +#: ../../source/tutorial-quickstart-scikitlearn.rst:286 msgid "" -"In this notebook, we revisit ``NumPyClient`` and introduce a new " -"baseclass for building clients, simply named ``Client``. In previous " -"parts of this tutorial, we've based our client on ``NumPyClient``, a " -"convenience class which makes it easy to work with machine learning " -"libraries that have good NumPy interoperability. With ``Client``, we gain" -" a lot of flexibility that we didn't have before, but we'll also have to " -"do a few things the we didn't have to do before." +"Congratulations! You've successfully built and run your first federated " +"learning system. The full `source code " +"`_ for this example can be found in :code:`examples/sklearn-logreg-" +"mnist`." msgstr "" -"Dans ce carnet, nous revisitons `NumPyClient`` et introduisons une " -"nouvelle classe de base pour construire des clients, simplement appelée " -"`Client``. Dans les parties précédentes de ce tutoriel, nous avons basé " -"notre client sur ``NumPyClient``, une classe de commodité qui facilite le" -" travail avec les bibliothèques d'apprentissage automatique qui ont une " -"bonne interopérabilité NumPy. Avec ``Client``, nous gagnons beaucoup de " -"flexibilité que nous n'avions pas auparavant, mais nous devrons également" -" faire quelques choses que nous n'avions pas à faire auparavant." +"Félicitations ! Tu as réussi à construire et à faire fonctionner ton " +"premier système d'apprentissage fédéré. Le code source complet " +"`_ de cet exemple se trouve dans :code:`examples/sklearn-logreg-" +"mnist`." 
-#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:18
+#: ../../source/tutorial-quickstart-tensorflow.rst:-1
 msgid ""
-"Let's go deeper and see what it takes to move from ``NumPyClient`` to "
-"``Client``!"
+"Check out this Federated Learning quickstart tutorial for using Flower "
+"with TensorFlow to train a MobilNetV2 model on CIFAR-10."
 msgstr ""
-"Allons plus loin et voyons ce qu'il faut faire pour passer de "
-"``NumPyClient`` à ``Client`` !"

-#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:30
-#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:29
-msgid "Step 0: Preparation"
-msgstr "Étape 0 : Préparation"
+#: ../../source/tutorial-quickstart-tensorflow.rst:5
+msgid "Quickstart TensorFlow"
+msgstr "Démarrage rapide de TensorFlow"

-#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:117
-msgid ""
-"Let's now load the CIFAR-10 training and test set, partition them into "
-"ten smaller datasets (each split into training and validation set), and "
-"wrap everything in their own ``DataLoader``."
+#: ../../source/tutorial-quickstart-tensorflow.rst:13
+msgid "Let's build a federated learning system in less than 20 lines of code!"
 msgstr ""
-"Chargeons maintenant les ensembles d'entraînement et de test CIFAR-10, "
-"divisons-les en dix ensembles de données plus petits (chacun divisé en "
-"ensemble d'entraînement et de validation) et enveloppons le tout dans "
-"leur propre ``DataLoader``."
+"Construisons un système d'apprentissage fédéré en moins de 20 lignes de "
+"code !"

-#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:259
-msgid "Step 1: Revisiting NumPyClient"
-msgstr "Étape 1 : Revoir NumPyClient"
+#: ../../source/tutorial-quickstart-tensorflow.rst:15
+msgid "Before Flower can be imported we have to install it:"
+msgstr "Avant de pouvoir importer Flower, nous devons l'installer :"

-#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:261
+#: ../../source/tutorial-quickstart-tensorflow.rst:21
 msgid ""
-"So far, we've implemented our client by subclassing "
-"``flwr.client.NumPyClient``. The three methods we implemented are "
-"``get_parameters``, ``fit``, and ``evaluate``. Finally, we wrap the "
-"creation of instances of this class in a function called ``client_fn``:"
+"Since we want to use the Keras API of TensorFlow (TF), we have to install"
+" TF as well:"
 msgstr ""
-"Jusqu'à présent, nous avons implémenté notre client en sous-classant "
-"``flwr.client.NumPyClient``. Les trois méthodes que nous avons "
-"implémentées sont ``get_parameters``, ``fit`` et ``evaluate``. Enfin, "
-"nous enveloppons la création d'instances de cette classe dans une "
-"fonction appelée ``client_fn`` :"
+"Comme nous voulons utiliser l'API Keras de TensorFlow (TF), nous devons "
+"également installer TF :"

-#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:309
-msgid ""
-"We've seen this before, there's nothing new so far. The only *tiny* "
-"difference compared to the previous notebook is naming, we've changed "
-"``FlowerClient`` to ``FlowerNumPyClient`` and ``client_fn`` to "
-"``numpyclient_fn``. Let's run it to see the output we get:"
+#: ../../source/tutorial-quickstart-tensorflow.rst:31
+msgid "Next, in a file called :code:`client.py`, import Flower and TensorFlow:"
 msgstr ""
-"Nous avons déjà vu cela auparavant, il n'y a rien de nouveau jusqu'à "
-"présent. 
La seule *petite* différence par rapport au carnet précédent est" -" le nommage, nous avons changé ``FlowerClient`` en ``FlowerNumPyClient`` " -"et ``client_fn`` en ``numpyclient_fn``. Exécutons-le pour voir la sortie " -"que nous obtenons :" +"Ensuite, dans un fichier appelé :code:`client.py`, importe Flower et " +"TensorFlow :" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:339 +#: ../../source/tutorial-quickstart-tensorflow.rst:38 msgid "" -"This works as expected, two clients are training for three rounds of " -"federated learning." +"We use the Keras utilities of TF to load CIFAR10, a popular colored image" +" classification dataset for machine learning. The call to " +":code:`tf.keras.datasets.cifar10.load_data()` downloads CIFAR10, caches " +"it locally, and then returns the entire training and test set as NumPy " +"ndarrays." msgstr "" -"Cela fonctionne comme prévu, deux clients s'entraînent pour trois tours " -"d'apprentissage fédéré." +"Nous utilisons les utilitaires Keras de TF pour charger CIFAR10, un " +"ensemble de données de classification d'images colorées populaire pour " +"l'apprentissage automatique. L'appel à " +":code:`tf.keras.datasets.cifar10.load_data()` télécharge CIFAR10, le met " +"en cache localement, puis renvoie l'ensemble d'entraînement et de test " +"sous forme de NumPy ndarrays." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:341 +#: ../../source/tutorial-quickstart-tensorflow.rst:47 msgid "" -"Let's dive a little bit deeper and discuss how Flower executes this " -"simulation. Whenever a client is selected to do some work, " -"``start_simulation`` calls the function ``numpyclient_fn`` to create an " -"instance of our ``FlowerNumPyClient`` (along with loading the model and " -"the data)." +"Next, we need a model. For the purpose of this tutorial, we use " +"MobilNetV2 with 10 output classes:" msgstr "" -"Plongeons un peu plus profondément et discutons de la façon dont Flower " -"exécute cette simulation. Chaque fois qu'un client est sélectionné pour " -"effectuer un travail, ``start_simulation`` appelle la fonction " -"``numpyclient_fn`` pour créer une instance de notre ``FlowerNumPyClient``" -" (en même temps qu'il charge le modèle et les données)." +"Ensuite, nous avons besoin d'un modèle. Pour les besoins de ce tutoriel, " +"nous utilisons MobilNetV2 avec 10 classes de sortie :" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:343 +#: ../../source/tutorial-quickstart-tensorflow.rst:60 msgid "" -"But here's the perhaps surprising part: Flower doesn't actually use the " -"``FlowerNumPyClient`` object directly. Instead, it wraps the object to " -"makes it look like a subclass of ``flwr.client.Client``, not " -"``flwr.client.NumPyClient``. In fact, the Flower core framework doesn't " -"know how to handle ``NumPyClient``'s, it only knows how to handle " -"``Client``'s. ``NumPyClient`` is just a convenience abstraction built on " -"top of ``Client``." +"Flower provides a convenience class called :code:`NumPyClient` which " +"makes it easier to implement the :code:`Client` interface when your " +"workload uses Keras. The :code:`NumPyClient` interface defines three " +"methods which can be implemented in the following way:" msgstr "" -"Mais voici la partie la plus surprenante : Flower n'utilise pas " -"directement l'objet `FlowerNumPyClient`. Au lieu de cela, il enveloppe " -"l'objet pour le faire ressembler à une sous-classe de " -"`flwr.client.Client`, et non de `flwr.client.NumPyClient`. 
En fait, le " -"noyau de Flower ne sait pas comment gérer les `NumPyClient`, il sait " -"seulement comment gérer les `Client`. `NumPyClient` est juste une " -"abstraction de commodité construite au dessus de `Client`." +"Flower fournit une classe de commodité appelée :code:`NumPyClient` qui " +"facilite la mise en œuvre de l'interface :code:`Client` lorsque ta charge" +" de travail utilise Keras. L'interface :code:`NumPyClient` définit trois " +"méthodes qui peuvent être mises en œuvre de la manière suivante :" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:345 +#: ../../source/tutorial-quickstart-tensorflow.rst:135 +msgid "Each client will have its own dataset." +msgstr "Chaque client aura son propre ensemble de données." + +#: ../../source/tutorial-quickstart-tensorflow.rst:137 msgid "" -"Instead of building on top of ``NumPyClient``, we can directly build on " -"top of ``Client``." +"You should now see how the training does in the very first terminal (the " +"one that started the server):" msgstr "" -"Au lieu de construire par-dessus `NumPyClient``, nous pouvons construire " -"directement par-dessus `Client``." +"Tu devrais maintenant voir comment la formation se déroule dans le tout " +"premier terminal (celui qui a démarré le serveur) :" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:357 -msgid "Step 2: Moving from ``NumPyClient`` to ``Client``" -msgstr "Étape 2 : Passer de ``NumPyClient`` à ``Client``" +#: ../../source/tutorial-quickstart-tensorflow.rst:169 +#, fuzzy +msgid "" +"Congratulations! You've successfully built and run your first federated " +"learning system. The full `source code " +"`_ for this can be found in :code:`examples" +"/quickstart-tensorflow/client.py`." +msgstr "" +"Félicitations ! Tu as réussi à construire et à faire fonctionner ton " +"premier système d'apprentissage fédéré. Le `code source complet " +"`_ pour cela se trouve dans :code:`examples" +"/quickstart-tensorflow/client.py`." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:359 +#: ../../source/tutorial-quickstart-xgboost.rst:-1 msgid "" -"Let's try to do the same thing using ``Client`` instead of " -"``NumPyClient``." +"Check out this Federated Learning quickstart tutorial for using Flower " +"with XGBoost to train classification models on trees." msgstr "" -"Essayons de faire la même chose en utilisant ``Client`` au lieu de " -"``NumPyClient``." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:465 +#: ../../source/tutorial-quickstart-xgboost.rst:5 +msgid "Quickstart XGBoost" +msgstr "Démarrage rapide XGBoost" + +#: ../../source/tutorial-quickstart-xgboost.rst:14 +#, fuzzy +msgid "Federated XGBoost" +msgstr "Formation fédérée" + +#: ../../source/tutorial-quickstart-xgboost.rst:16 msgid "" -"Before we discuss the code in more detail, let's try to run it! Gotta " -"make sure our new ``Client``-based client works, right?" +"EXtreme Gradient Boosting (**XGBoost**) is a robust and efficient " +"implementation of gradient-boosted decision tree (**GBDT**), that " +"maximises the computational boundaries for boosted tree methods. It's " +"primarily designed to enhance both the performance and computational " +"speed of machine learning models. In XGBoost, trees are constructed " +"concurrently, unlike the sequential approach taken by GBDT." msgstr "" -"Avant de discuter du code plus en détail, essayons de l'exécuter ! Nous " -"devons nous assurer que notre nouveau client basé sur le ``Client`` " -"fonctionne, n'est-ce pas ?" 
-#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:490 +#: ../../source/tutorial-quickstart-xgboost.rst:20 msgid "" -"That's it, we're now using ``Client``. It probably looks similar to what " -"we've done with ``NumPyClient``. So what's the difference?" +"Often, for tabular data on medium-sized datasets with fewer than 10k " +"training examples, XGBoost surpasses the results of deep learning " +"techniques." msgstr "" -"Voilà, nous utilisons maintenant ``Client``. Cela ressemble probablement " -"à ce que nous avons fait avec ``NumPyClient``. Alors quelle est la " -"différence ?" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:492 +#: ../../source/tutorial-quickstart-xgboost.rst:23 +#, fuzzy +msgid "Why federated XGBoost?" +msgstr "Qu'est-ce que l'apprentissage fédéré ?" + +#: ../../source/tutorial-quickstart-xgboost.rst:25 msgid "" -"First of all, it's more code. But why? The difference comes from the fact" -" that ``Client`` expects us to take care of parameter serialization and " -"deserialization. For Flower to be able to send parameters over the " -"network, it eventually needs to turn these parameters into ``bytes``. " -"Turning parameters (e.g., NumPy ``ndarray``'s) into raw bytes is called " -"serialization. Turning raw bytes into something more useful (like NumPy " -"``ndarray``'s) is called deserialization. Flower needs to do both: it " -"needs to serialize parameters on the server-side and send them to the " -"client, the client needs to deserialize them to use them for local " -"training, and then serialize the updated parameters again to send them " -"back to the server, which (finally!) deserializes them again in order to " -"aggregate them with the updates received from other clients." +"Indeed, as the demand for data privacy and decentralized learning grows, " +"there's an increasing requirement to implement federated XGBoost systems " +"for specialised applications, like survival analysis and financial fraud " +"detection." msgstr "" -"First of all, it's more code. But why? The difference comes from the fact" -" that ``Client`` expects us to take care of parameter serialization and " -"deserialization. For Flower to be able to send parameters over the " -"network, it eventually needs to turn these parameters into ``bytes``. " -"Turning parameters (e.g., NumPy ``ndarray``'s) into raw bytes is called " -"serialization. Turning raw bytes into something more useful (like NumPy " -"``ndarray``'s) is called deserialization. Flower needs to do both: it " -"needs to serialize parameters on the server-side and send them to the " -"client, the client needs to deserialize them to use them for local " -"training, and then serialize the updated parameters again to send them " -"back to the server, which (finally!) deserializes them again in order to " -"aggregate them with the updates received from other clients." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:495 +#: ../../source/tutorial-quickstart-xgboost.rst:27 msgid "" -"The only *real* difference between Client and NumPyClient is that " -"NumPyClient takes care of serialization and deserialization for you. It " -"can do so because it expects you to return parameters as NumPy ndarray's," -" and it knows how to handle these. This makes working with machine " -"learning libraries that have good NumPy support (most of them) a breeze." 
+"Federated learning ensures that raw data remains on the local device, " +"making it an attractive approach for sensitive domains where data " +"security and privacy are paramount. Given the robustness and efficiency " +"of XGBoost, combining it with federated learning offers a promising " +"solution for these specific challenges." msgstr "" -"La seule *vraie* différence entre Client et NumPyClient est que " -"NumPyClient s'occupe de la sérialisation et de la désérialisation pour " -"toi. Il peut le faire parce qu'il s'attend à ce que tu renvoies des " -"paramètres sous forme de NumPy ndarray, et il sait comment les gérer. " -"Cela permet de travailler avec des bibliothèques d'apprentissage " -"automatique qui ont une bonne prise en charge de NumPy (la plupart " -"d'entre elles) en un clin d'œil." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:497 +#: ../../source/tutorial-quickstart-xgboost.rst:30 msgid "" -"In terms of API, there's one major difference: all methods in Client take" -" exactly one argument (e.g., ``FitIns`` in ``Client.fit``) and return " -"exactly one value (e.g., ``FitRes`` in ``Client.fit``). The methods in " -"``NumPyClient`` on the other hand have multiple arguments (e.g., " -"``parameters`` and ``config`` in ``NumPyClient.fit``) and multiple return" -" values (e.g., ``parameters``, ``num_example``, and ``metrics`` in " -"``NumPyClient.fit``) if there are multiple things to handle. These " -"``*Ins`` and ``*Res`` objects in ``Client`` wrap all the individual " -"values you're used to from ``NumPyClient``." +"In this tutorial we will learn how to train a federated XGBoost model on " +"HIGGS dataset using Flower and :code:`xgboost` package. We use a simple " +"example (`full code xgboost-quickstart " +"`_)" +" with two *clients* and one *server* to demonstrate how federated XGBoost" +" works, and then we dive into a more complex example (`full code xgboost-" +"comprehensive `_) to run various experiments." msgstr "" -"In terms of API, there's one major difference: all methods in Client take" -" exactly one argument (e.g., ``FitIns`` in ``Client.fit``) and return " -"exactly one value (e.g., ``FitRes`` in ``Client.fit``). The methods in " -"``NumPyClient`` on the other hand have multiple arguments (e.g., " -"``parameters`` and ``config`` in ``NumPyClient.fit``) and multiple return" -" values (e.g., ``parameters``, ``num_example``, and ``metrics`` in " -"``NumPyClient.fit``) if there are multiple things to handle. These " -"``*Ins`` and ``*Res`` objects in ``Client`` wrap all the individual " -"values you're used to from ``NumPyClient``." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:510 -msgid "Step 3: Custom serialization" -msgstr "Étape 3 : Sérialisation personnalisée" +#: ../../source/tutorial-quickstart-xgboost.rst:37 +msgid "Environment Setup" +msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:512 +#: ../../source/tutorial-quickstart-xgboost.rst:41 msgid "" -"Here we will explore how to implement custom serialization with a simple " -"example." +"We first need to install Flower and Flower Datasets. You can do this by " +"running :" msgstr "" -"Nous allons ici explorer comment mettre en œuvre une sérialisation " -"personnalisée à l'aide d'un exemple simple." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:514 +#: ../../source/tutorial-quickstart-xgboost.rst:47 +#, fuzzy msgid "" -"But first what is serialization? 
Serialization is just the process of " -"converting an object into raw bytes, and equally as important, " -"deserialization is the process of converting raw bytes back into an " -"object. This is very useful for network communication. Indeed, without " -"serialization, you could not just a Python object through the internet." -msgstr "" -"Mais d'abord, qu'est-ce que la sérialisation ? La sérialisation est " -"simplement le processus de conversion d'un objet en octets bruts, et tout" -" aussi important, la désérialisation est le processus de reconversion des" -" octets bruts en objet. Ceci est très utile pour la communication réseau." -" En effet, sans la sérialisation, tu ne pourrais pas faire passer un " -"objet Python par Internet." +"Since we want to use :code:`xgboost` package to build up XGBoost trees, " +"let's go ahead and install :code:`xgboost`:" +msgstr "Puisque nous voulons utiliser scikt-learn, allons-y et installons-le :" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:516 +#: ../../source/tutorial-quickstart-xgboost.rst:57 msgid "" -"Federated Learning relies heavily on internet communication for training " -"by sending Python objects back and forth between the clients and the " -"server. This means that serialization is an essential part of Federated " -"Learning." +"*Clients* are responsible for generating individual weight-updates for " +"the model based on their local datasets. Now that we have all our " +"dependencies installed, let's run a simple distributed training with two " +"clients and one server." msgstr "" -"L'apprentissage fédéré s'appuie fortement sur la communication Internet " -"pour la formation en envoyant des objets Python dans les deux sens entre " -"les clients et le serveur, ce qui signifie que la sérialisation est un " -"élément essentiel de l'apprentissage fédéré." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:518 +#: ../../source/tutorial-quickstart-xgboost.rst:60 +#, fuzzy msgid "" -"In the following section, we will write a basic example where instead of " -"sending a serialized version of our ``ndarray``\\ s containing our " -"parameters, we will first convert the ``ndarray`` into sparse matrices, " -"before sending them. This technique can be used to save bandwidth, as in " -"certain cases where the weights of a model are sparse (containing many 0 " -"entries), converting them to a sparse matrix can greatly improve their " -"bytesize." +"In a file called :code:`client.py`, import xgboost, Flower, Flower " +"Datasets and other related functions:" msgstr "" -"Dans la section suivante, nous allons écrire un exemple de base où, au " -"lieu d'envoyer une version sérialisée de nos ``ndarray`` contenant nos " -"paramètres, nous allons d'abord convertir les ``ndarray`` en matrices " -"éparses, avant de les envoyer. Cette technique peut être utilisée pour " -"économiser de la bande passante, car dans certains cas où les poids d'un " -"modèle sont épars (contenant de nombreuses entrées 0), les convertir en " -"une matrice éparse peut grandement améliorer leur taille en octets." 
+"Dans un fichier appelé :code:`client.py`, importe Flower et les paquets " +"liés à PyTorch :" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:521 -msgid "Our custom serialization/deserialization functions" -msgstr "Nos fonctions de sérialisation/désérialisation personnalisées" +#: ../../source/tutorial-quickstart-xgboost.rst:87 +msgid "Dataset partition and hyper-parameter selection" +msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:523 +#: ../../source/tutorial-quickstart-xgboost.rst:89 msgid "" -"This is where the real serialization/deserialization will happen, " -"especially in ``ndarray_to_sparse_bytes`` for serialization and " -"``sparse_bytes_to_ndarray`` for deserialization." +"Prior to local training, we require loading the HIGGS dataset from Flower" +" Datasets and conduct data partitioning for FL:" msgstr "" -"C'est là que la véritable sérialisation/désérialisation se produira, en " -"particulier dans ``ndarray_to_sparse_bytes`` pour la sérialisation et " -"``sparse_bytes_to_ndarray`` pour la désérialisation." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:525 +#: ../../source/tutorial-quickstart-xgboost.rst:102 msgid "" -"Note that we imported the ``scipy.sparse`` library in order to convert " -"our arrays." +"In this example, we split the dataset into two partitions with uniform " +"distribution (:code:`IidPartitioner(num_partitions=2)`). Then, we load " +"the partition for the given client based on :code:`node_id`:" msgstr "" -"Notez que nous avons importé la bibliothèque ``scipy.sparse`` afin de " -"convertir nos tableaux." - -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:613 -msgid "Client-side" -msgstr "Côté client" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:615 +#: ../../source/tutorial-quickstart-xgboost.rst:121 msgid "" -"To be able to serialize our ``ndarray``\\ s into sparse parameters, we " -"will just have to call our custom functions in our " -"``flwr.client.Client``." +"After that, we do train/test splitting on the given partition (client's " +"local data), and transform data format for :code:`xgboost` package." msgstr "" -"Pour pouvoir sérialiser nos ``ndarray`` en paramètres sparse, il nous " -"suffira d'appeler nos fonctions personnalisées dans notre " -"``flwr.client.Client``." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:617 +#: ../../source/tutorial-quickstart-xgboost.rst:134 msgid "" -"Indeed, in ``get_parameters`` we need to serialize the parameters we got " -"from our network using our custom ``ndarrays_to_sparse_parameters`` " -"defined above." +"The functions of :code:`train_test_split` and " +":code:`transform_dataset_to_dmatrix` are defined as below:" msgstr "" -"En effet, dans ``get_parameters`` nous devons sérialiser les paramètres " -"que nous avons obtenus de notre réseau en utilisant nos " -"``ndarrays_to_sparse_parameters`` personnalisés définis ci-dessus." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:619 -msgid "" -"In ``fit``, we first need to deserialize the parameters coming from the " -"server using our custom ``sparse_parameters_to_ndarrays`` and then we " -"need to serialize our local results with " -"``ndarrays_to_sparse_parameters``." +#: ../../source/tutorial-quickstart-xgboost.rst:158 +msgid "Finally, we define the hyper-parameters used for XGBoost training." 
msgstr "" -"Dans ``fit``, nous devons d'abord désérialiser les paramètres provenant " -"du serveur en utilisant notre ``sparse_parameters_to_ndarrays`` " -"personnalisé, puis nous devons sérialiser nos résultats locaux avec " -"``ndarrays_to_sparse_parameters``." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:621 +#: ../../source/tutorial-quickstart-xgboost.rst:174 msgid "" -"In ``evaluate``, we will only need to deserialize the global parameters " -"with our custom function." +"The :code:`num_local_round` represents the number of iterations for local" +" tree boost. We use CPU for the training in default. One can shift it to " +"GPU by setting :code:`tree_method` to :code:`gpu_hist`. We use AUC as " +"evaluation metric." msgstr "" -"Dans ``evaluate``, nous n'aurons besoin que de désérialiser les " -"paramètres globaux avec notre fonction personnalisée." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:725 -msgid "Server-side" -msgstr "Côté serveur" +#: ../../source/tutorial-quickstart-xgboost.rst:181 +msgid "Flower client definition for XGBoost" +msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:727 +#: ../../source/tutorial-quickstart-xgboost.rst:183 msgid "" -"For this example, we will just use ``FedAvg`` as a strategy. To change " -"the serialization and deserialization here, we only need to reimplement " -"the ``evaluate`` and ``aggregate_fit`` functions of ``FedAvg``. The other" -" functions of the strategy will be inherited from the super class " -"``FedAvg``." +"After loading the dataset we define the Flower client. We follow the " +"general rule to define :code:`XgbClient` class inherited from " +":code:`fl.client.Client`." msgstr "" -"Pour cet exemple, nous utiliserons simplement ``FedAvg`` comme stratégie." -" Pour modifier la sérialisation et la désérialisation ici, il suffit de " -"réimplémenter les fonctions ``evaluate`` et ``aggregate_fit`` de " -"``FedAvg``. Les autres fonctions de la stratégie seront héritées de la " -"super-classe ``FedAvg``." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:729 -msgid "As you can see only one line as change in ``evaluate``:" -msgstr "Comme tu peux le voir, seule une ligne a été modifiée dans ``evaluate`` :" - -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:735 +#: ../../source/tutorial-quickstart-xgboost.rst:193 msgid "" -"And for ``aggregate_fit``, we will first deserialize every result we " -"received:" +"The :code:`self.bst` is used to keep the Booster objects that remain " +"consistent across rounds, allowing them to store predictions from trees " +"integrated in earlier rounds and maintain other essential data structures" +" for training." msgstr "" -"Et pour ``aggregate_fit``, nous allons d'abord désérialiser chaque " -"résultat que nous avons reçu :" - -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:744 -msgid "And then serialize the aggregated result:" -msgstr "Puis sérialise le résultat agrégé :" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:903 -msgid "We can now run our custom serialization example!" +#: ../../source/tutorial-quickstart-xgboost.rst:196 +msgid "" +"Then, we override :code:`get_parameters`, :code:`fit` and " +":code:`evaluate` methods insides :code:`XgbClient` class as follows." msgstr "" -"Nous pouvons maintenant exécuter notre exemple de sérialisation " -"personnalisée !" 
-#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:934 +#: ../../source/tutorial-quickstart-xgboost.rst:210 msgid "" -"In this part of the tutorial, we've seen how we can build clients by " -"subclassing either ``NumPyClient`` or ``Client``. ``NumPyClient`` is a " -"convenience abstraction that makes it easier to work with machine " -"learning libraries that have good NumPy interoperability. ``Client`` is a" -" more flexible abstraction that allows us to do things that are not " -"possible in ``NumPyClient``. In order to do so, it requires us to handle " -"parameter serialization and deserialization ourselves." +"Unlike neural network training, XGBoost trees are not started from a " +"specified random weights. In this case, we do not use " +":code:`get_parameters` and :code:`set_parameters` to initialise model " +"parameters for XGBoost. As a result, let's return an empty tensor in " +":code:`get_parameters` when it is called by the server at the first " +"round." msgstr "" -"Dans cette partie du tutoriel, nous avons vu comment construire des " -"clients en sous-classant soit ``NumPyClient``, soit ``Client``. " -"``NumPyClient`` est une abstraction de commodité qui facilite le travail " -"avec les bibliothèques d'apprentissage automatique qui ont une bonne " -"interopérabilité NumPy. ``Client`` est une abstraction plus flexible qui " -"nous permet de faire des choses qui ne sont pas possibles dans " -"``NumPyClient``. Pour ce faire, elle nous oblige à gérer nous-mêmes la " -"sérialisation et la désérialisation des paramètres." -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:952 +#: ../../source/tutorial-quickstart-xgboost.rst:251 msgid "" -"This is the final part of the Flower tutorial (for now!), " -"congratulations! You're now well equipped to understand the rest of the " -"documentation. There are many topics we didn't cover in the tutorial, we " -"recommend the following resources:" +"In :code:`fit`, at the first round, we call :code:`xgb.train()` to build " +"up the first set of trees. the returned Booster object and config are " +"stored in :code:`self.bst` and :code:`self.config`, respectively. From " +"the second round, we load the global model sent from server to " +":code:`self.bst`, and then update model weights on local training data " +"with function :code:`local_boost` as follows:" msgstr "" -"C'est la dernière partie du tutoriel Flower (pour l'instant !), " -"félicitations ! Tu es maintenant bien équipé pour comprendre le reste de " -"la documentation. Il y a de nombreux sujets que nous n'avons pas abordés " -"dans le tutoriel, nous te recommandons les ressources suivantes :" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:954 -msgid "`Read Flower Docs `__" -msgstr "`Lire les docs sur les fleurs `__" +#: ../../source/tutorial-quickstart-xgboost.rst:269 +msgid "" +"Given :code:`num_local_round`, we update trees by calling " +":code:`self.bst.update` method. After training, the last " +":code:`N=num_local_round` trees will be extracted to send to the server." +msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:955 +#: ../../source/tutorial-quickstart-xgboost.rst:291 msgid "" -"`Check out Flower Code Examples " -"`__" +"In :code:`evaluate`, we call :code:`self.bst.eval_set` function to " +"conduct evaluation on valid set. The AUC value will be returned." 
msgstr "" -"`Check out Flower Code Examples " -"`__" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:956 +#: ../../source/tutorial-quickstart-xgboost.rst:294 #, fuzzy msgid "" -"`Use Flower Baselines for your research " -"`__" +"Now, we can create an instance of our class :code:`XgbClient` and add one" +" line to actually run this client:" msgstr "" -"`Utilise les lignes de base des fleurs pour ta recherche " -"`__" +"Nous pouvons maintenant créer une instance de notre classe " +":code:`MnistClient` et ajouter une ligne pour exécuter ce client :" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:957 +#: ../../source/tutorial-quickstart-xgboost.rst:300 #, fuzzy msgid "" -"`Watch Flower Summit 2023 videos `__" +"That's it for the client. We only have to implement :code:`Client`and " +"call :code:`fl.client.start_client()`. The string :code:`\"[::]:8080\"` " +"tells the client which server to connect to. In our case we can run the " +"server and the client on the same machine, therefore we use " +":code:`\"[::]:8080\"`. If we run a truly federated workload with the " +"server and clients running on different machines, all that needs to " +"change is the :code:`server_address` we point the client at." msgstr "" -"`Regardez les vidéos du Flower Summit 2022 `__" +"C'est tout pour le client. Il nous suffit d'implémenter :code:`Client` ou" +" :code:`NumPyClient` et d'appeler :code:`fl.client.start_client()`. La " +"chaîne :code:`\"[: :]:8080\"` indique au client à quel serveur se " +"connecter. Dans notre cas, nous pouvons exécuter le serveur et le client " +"sur la même machine, c'est pourquoi nous utilisons :code:`\"[: " +":]:8080\"`. Si nous exécutons une charge de travail véritablement fédérée" +" avec le serveur et les clients fonctionnant sur des machines " +"différentes, tout ce qui doit changer est l'adresse " +":code:`server_address` vers laquelle nous dirigeons le client." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:9 -msgid "Get started with Flower" +#: ../../source/tutorial-quickstart-xgboost.rst:311 +#, fuzzy +msgid "" +"These updates are then sent to the *server* which will aggregate them to " +"produce a better model. Finally, the *server* sends this improved version" +" of the model back to each *client* to finish a complete FL round." msgstr "" +"*Les clients* sont chargés de générer des mises à jour de poids " +"individuelles pour le modèle en fonction de leurs ensembles de données " +"locales. Ces mises à jour sont ensuite envoyées au *serveur* qui les " +"agrège pour produire un meilleur modèle. Enfin, le *serveur* renvoie " +"cette version améliorée du modèle à chaque *client*. Un cycle complet de " +"mises à jour de poids s'appelle un *round*." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:11 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:11 -msgid "Welcome to the Flower federated learning tutorial!" -msgstr "Bienvenue au tutoriel sur l'apprentissage fédéré de la fleur !" - -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:13 +#: ../../source/tutorial-quickstart-xgboost.rst:314 #, fuzzy msgid "" -"In this notebook, we'll build a federated learning system using Flower, " -"`Flower Datasets `__ and PyTorch. In " -"part 1, we use PyTorch for the model training pipeline and data loading. " -"In part 2, we continue to federate the PyTorch-based pipeline using " -"Flower." 
+"In a file named :code:`server.py`, import Flower and FedXgbBagging from " +":code:`flwr.server.strategy`." msgstr "" -"Dans ce carnet, nous allons construire un système d'apprentissage fédéré " -"en utilisant Flower et PyTorch. Dans la première partie, nous utilisons " -"PyTorch pour le pipeline d'entraînement des modèles et le chargement des " -"données. Dans la deuxième partie, nous continuons à fédérer le pipeline " -"basé sur PyTorch en utilisant Flower." +"Dans un fichier appelé :code:`client.py`, importe Flower et les paquets " +"liés au MXNet :" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:17 -msgid "Let's get stated!" -msgstr "Allons-y, déclarons-le !" +#: ../../source/tutorial-quickstart-xgboost.rst:316 +msgid "We first define a strategy for XGBoost bagging aggregation." +msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:31 +#: ../../source/tutorial-quickstart-xgboost.rst:339 msgid "" -"Before we begin with any actual code, let's make sure that we have " -"everything we need." +"We use two clients for this example. An " +":code:`evaluate_metrics_aggregation` function is defined to collect and " +"wighted average the AUC values from clients." msgstr "" -"Avant de commencer à coder, assurons-nous que nous disposons de tout ce " -"dont nous avons besoin." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:45 +#: ../../source/tutorial-quickstart-xgboost.rst:342 #, fuzzy -msgid "" -"Next, we install the necessary packages for PyTorch (``torch`` and " -"``torchvision``), Flower Datasets (``flwr-datasets``) and Flower " -"(``flwr``):" +msgid "Then, we start the server:" +msgstr "Démarrer le serveur" + +#: ../../source/tutorial-quickstart-xgboost.rst:354 +msgid "Tree-based bagging aggregation" msgstr "" -"Ensuite, nous installons les paquets nécessaires pour PyTorch (``torch`` " -"et ``torchvision``) et Flower (``flwr``) :" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:105 -#, fuzzy +#: ../../source/tutorial-quickstart-xgboost.rst:356 msgid "" -"It is possible to switch to a runtime that has GPU acceleration enabled " -"(on Google Colab: ``Runtime > Change runtime type > Hardware accelerator:" -" GPU > Save``). Note, however, that Google Colab is not always able to " -"offer GPU acceleration. If you see an error related to GPU availability " -"in one of the following sections, consider switching back to CPU-based " -"execution by setting ``DEVICE = torch.device(\"cpu\")``. If the runtime " -"has GPU acceleration enabled, you should see the output ``Training on " -"cuda``, otherwise it'll say ``Training on cpu``." +"You must be curious about how bagging aggregation works. Let's look into " +"the details." msgstr "" -"Il est possible de passer à un runtime dont l'accélération GPU est " -"activée (sur Google Colab : ``Runtime > Change runtime type > Hardware " -"acclerator : GPU > Save``). Note cependant que Google Colab n'est pas " -"toujours en mesure de proposer l'accélération GPU. Si tu vois une erreur " -"liée à la disponibilité du GPU dans l'une des sections suivantes, " -"envisage de repasser à une exécution basée sur le CPU en définissant " -"``DEVICE = torch.device(\"cpu\")``. Si le runtime a activé l'accélération" -" GPU, tu devrais voir apparaître le résultat ``Training on cuda``, sinon " -"il dira ``Training on cpu``." 
- -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:118 -msgid "Loading the data" -msgstr "Chargement des données" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:120 -#, fuzzy +#: ../../source/tutorial-quickstart-xgboost.rst:358 msgid "" -"Federated learning can be applied to many different types of tasks across" -" different domains. In this tutorial, we introduce federated learning by " -"training a simple convolutional neural network (CNN) on the popular " -"CIFAR-10 dataset. CIFAR-10 can be used to train image classifiers that " -"distinguish between images from ten different classes: 'airplane', " -"'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', and " -"'truck'." +"In file :code:`flwr.server.strategy.fedxgb_bagging.py`, we define " +":code:`FedXgbBagging` inherited from :code:`flwr.server.strategy.FedAvg`." +" Then, we override the :code:`aggregate_fit`, :code:`aggregate_evaluate` " +"and :code:`evaluate` methods as follows:" msgstr "" -"L'apprentissage fédéré peut être appliqué à de nombreux types de tâches " -"dans différents domaines. Dans ce tutoriel, nous présentons " -"l'apprentissage fédéré en formant un simple réseau neuronal " -"convolutionnel (CNN) sur l'ensemble de données populaire CIFAR-10. " -"CIFAR-10 peut être utilisé pour former des classificateurs d'images qui " -"font la distinction entre les images de dix classes différentes :" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:131 +#: ../../source/tutorial-quickstart-xgboost.rst:454 msgid "" -"We simulate having multiple datasets from multiple organizations (also " -"called the \"cross-silo\" setting in federated learning) by splitting the" -" original CIFAR-10 dataset into multiple partitions. Each partition will " -"represent the data from a single organization. We're doing this purely " -"for experimentation purposes, in the real world there's no need for data " -"splitting because each organization already has their own data (so the " -"data is naturally partitioned)." +"In :code:`aggregate_fit`, we sequentially aggregate the clients' XGBoost " +"trees by calling :code:`aggregate()` function:" msgstr "" -"Nous simulons le fait d'avoir plusieurs ensembles de données provenant de" -" plusieurs organisations (également appelé le paramètre \"cross-silo\" " -"dans l'apprentissage fédéré) en divisant l'ensemble de données CIFAR-10 " -"original en plusieurs partitions. Chaque partition représentera les " -"données d'une seule organisation. Nous faisons cela purement à des fins " -"d'expérimentation, dans le monde réel, il n'y a pas besoin de diviser les" -" données parce que chaque organisation a déjà ses propres données (les " -"données sont donc naturellement partitionnées)." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:133 -#, fuzzy +#: ../../source/tutorial-quickstart-xgboost.rst:513 msgid "" -"Each organization will act as a client in the federated learning system. " -"So having ten organizations participate in a federation means having ten " -"clients connected to the federated learning server." +"In this function, we first fetch the number of trees and the number of " +"parallel trees for the current and previous model by calling " +":code:`_get_tree_nums`. Then, the fetched information will be aggregated." +" After that, the trees (containing model weights) are aggregated to " +"generate a new tree model." 
msgstr "" -"Chaque organisation agira comme un client dans le système d'apprentissage" -" fédéré. Ainsi, le fait que dix organisations participent à une " -"fédération signifie que dix clients sont connectés au serveur " -"d'apprentissage fédéré :" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:144 +#: ../../source/tutorial-quickstart-xgboost.rst:518 msgid "" -"Let's now create the Federated Dataset abstraction that from ``flwr-" -"datasets`` that partitions the CIFAR-10. We will create small training " -"and test set for each edge device and wrap each of them into a PyTorch " -"``DataLoader``:" +"After traversal of all clients' models, a new global model is generated, " +"followed by the serialisation, and sending back to each client." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:198 -msgid "" -"We now have a list of ten training sets and ten validation sets " -"(``trainloaders`` and ``valloaders``) representing the data of ten " -"different organizations. Each ``trainloader``/``valloader`` pair contains" -" 4500 training examples and 500 validation examples. There's also a " -"single ``testloader`` (we did not split the test set). Again, this is " -"only necessary for building research or educational systems, actual " -"federated learning systems have their data naturally distributed across " -"multiple partitions." +#: ../../source/tutorial-quickstart-xgboost.rst:523 +msgid "Launch Federated XGBoost!" msgstr "" -"Nous avons maintenant une liste de dix ensembles de formation et dix " -"ensembles de validation (``trainloaders`` et ``valloaders``) représentant" -" les données de dix organisations différentes. Chaque paire " -"``trainloader`/``valloader`` contient 4500 exemples de formation et 500 " -"exemples de validation. Il y a également un seul ``testloader`` (nous " -"n'avons pas divisé l'ensemble de test). Encore une fois, cela n'est " -"nécessaire que pour construire des systèmes de recherche ou d'éducation, " -"les systèmes d'apprentissage fédérés actuels ont leurs données " -"naturellement distribuées à travers plusieurs partitions." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:201 +#: ../../source/tutorial-quickstart-xgboost.rst:585 msgid "" -"Let's take a look at the first batch of images and labels in the first " -"training set (i.e., ``trainloaders[0]``) before we move on:" +"Congratulations! You've successfully built and run your first federated " +"XGBoost system. The AUC values can be checked in " +":code:`metrics_distributed`. One can see that the average AUC increases " +"over FL rounds." msgstr "" -"Jetons un coup d'œil au premier lot d'images et d'étiquettes du premier " -"ensemble d'entraînement (c'est-à-dire ``trainloaders[0]``) avant de " -"poursuivre :" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:240 +#: ../../source/tutorial-quickstart-xgboost.rst:590 +#, fuzzy msgid "" -"The output above shows a random batch of images from the first " -"``trainloader`` in our list of ten ``trainloaders``. It also prints the " -"labels associated with each image (i.e., one of the ten possible labels " -"we've seen above). If you run the cell again, you should see another " -"batch of images." +"The full `source code `_ for this example can be found in :code:`examples" +"/xgboost-quickstart`." msgstr "" -"La sortie ci-dessus montre un lot aléatoire d'images provenant du premier" -" ``chargeur de formation`` de notre liste de dix ``chargeurs de " -"formation``. 
Elle imprime également les étiquettes associées à chaque " -"image (c'est-à-dire l'une des dix étiquettes possibles que nous avons " -"vues ci-dessus). Si tu exécutes à nouveau la cellule, tu devrais voir un " -"autre lot d'images." +"Félicitations ! Tu as réussi à construire et à faire fonctionner ton " +"premier système d'apprentissage fédéré. Le code source complet " +"`_ de cet exemple se trouve dans :code:`examples" +"/quickstart-mxnet`." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:252 -msgid "Step 1: Centralized Training with PyTorch" -msgstr "Étape 1 : Formation centralisée avec PyTorch" +#: ../../source/tutorial-quickstart-xgboost.rst:594 +msgid "Comprehensive Federated XGBoost" +msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:263 +#: ../../source/tutorial-quickstart-xgboost.rst:596 msgid "" -"Next, we're going to use PyTorch to define a simple convolutional neural " -"network. This introduction assumes basic familiarity with PyTorch, so it " -"doesn't cover the PyTorch-related aspects in full detail. If you want to " -"dive deeper into PyTorch, we recommend `DEEP LEARNING WITH PYTORCH: A 60 " -"MINUTE BLITZ " -"`__." +"Now that you have known how federated XGBoost work with Flower, it's time" +" to run some more comprehensive experiments by customising the " +"experimental settings. In the xgboost-comprehensive example (`full code " +"`_), we provide more options to define various experimental" +" setups, including aggregation strategies, data partitioning and " +"centralised/distributed evaluation. We also support :doc:`Flower " +"simulation ` making it easy to simulate large " +"client cohorts in a resource-aware manner. Let's take a look!" msgstr "" -"Ensuite, nous allons utiliser PyTorch pour définir un simple réseau " -"neuronal convolutif. Cette introduction suppose une familiarité de base " -"avec PyTorch, elle ne couvre donc pas en détail les aspects liés à " -"PyTorch. Si tu veux plonger plus profondément dans PyTorch, nous te " -"recommandons `DEEP LEARNING WITH PYTORCH : A 60 MINUTE BLITZ " -"`__." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:275 -msgid "Defining the model" -msgstr "Définir le modèle" +#: ../../source/tutorial-quickstart-xgboost.rst:603 +#, fuzzy +msgid "Cyclic training" +msgstr "Formation centralisée" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:277 +#: ../../source/tutorial-quickstart-xgboost.rst:605 msgid "" -"We use the simple CNN described in the `PyTorch tutorial " -"`__:" +"In addition to bagging aggregation, we offer a cyclic training scheme, " +"which performs FL in a client-by-client fashion. Instead of aggregating " +"multiple clients, there is only one single client participating in the " +"training per round in the cyclic training scenario. The trained local " +"XGBoost trees will be passed to the next client as an initialised model " +"for next round's boosting." 
msgstr "" -"Nous utilisons le CNN simple décrit dans le tutoriel `PyTorch " -"`__ :" - -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:314 -msgid "Let's continue with the usual training and test functions:" -msgstr "Poursuivons avec les fonctions habituelles de formation et de test :" - -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:374 -msgid "Training the model" -msgstr "Entraîne le modèle" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:376 +#: ../../source/tutorial-quickstart-xgboost.rst:609 msgid "" -"We now have all the basic building blocks we need: a dataset, a model, a " -"training function, and a test function. Let's put them together to train " -"the model on the dataset of one of our organizations " -"(``trainloaders[0]``). This simulates the reality of most machine " -"learning projects today: each organization has their own data and trains " -"models only on this internal data:" +"To do this, we first customise a :code:`ClientManager` in " +":code:`server_utils.py`:" msgstr "" -"Nous avons maintenant tous les éléments de base dont nous avons besoin : " -"un ensemble de données, un modèle, une fonction d'entraînement et une " -"fonction de test. Assemblons-les pour entraîner le modèle sur l'ensemble " -"de données de l'une de nos organisations (``trainloaders[0]``). Cela " -"simule la réalité de la plupart des projets d'apprentissage automatique " -"aujourd'hui : chaque organisation possède ses propres données et entraîne" -" les modèles uniquement sur ces données internes :" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:406 +#: ../../source/tutorial-quickstart-xgboost.rst:649 msgid "" -"Training the simple CNN on our CIFAR-10 split for 5 epochs should result " -"in a test set accuracy of about 41%, which is not good, but at the same " -"time, it doesn't really matter for the purposes of this tutorial. The " -"intent was just to show a simplistic centralized training pipeline that " -"sets the stage for what comes next - federated learning!" -msgstr "" -"L'entraînement du CNN simple sur notre fractionnement CIFAR-10 pendant 5 " -"époques devrait se traduire par une précision de l'ensemble de test " -"d'environ 41 %, ce qui n'est pas bon, mais en même temps, cela n'a pas " -"vraiment d'importance pour les besoins de ce tutoriel. L'intention était " -"juste de montrer un pipeline d'entraînement centralisé simpliste qui " -"prépare le terrain pour ce qui vient ensuite - l'apprentissage fédéré !" - -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:418 -msgid "Step 2: Federated Learning with Flower" -msgstr "Étape 2 : Apprentissage fédéré avec Flower" +"The customised :code:`ClientManager` samples all available clients in " +"each FL round based on the order of connection to the server. Then, we " +"define a new strategy :code:`FedXgbCyclic` in " +":code:`flwr.server.strategy.fedxgb_cyclic.py`, in order to sequentially " +"select only one client in given round and pass the received model to next" +" client." +msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:420 +#: ../../source/tutorial-quickstart-xgboost.rst:690 msgid "" -"Step 1 demonstrated a simple centralized training pipeline. All data was " -"in one place (i.e., a single ``trainloader`` and a single ``valloader``)." 
-" Next, we'll simulate a situation where we have multiple datasets in " -"multiple organizations and where we train a model over these " -"organizations using federated learning." +"Unlike the original :code:`FedAvg`, we don't perform aggregation here. " +"Instead, we just make a copy of the received client model as global model" +" by overriding :code:`aggregate_fit`." msgstr "" -"L'étape 1 a montré un simple pipeline de formation centralisé. Toutes les" -" données étaient au même endroit (c'est-à-dire un seul ``trainloader`` et" -" un seul ``valloader``). Ensuite, nous allons simuler une situation où " -"nous avons plusieurs ensembles de données dans plusieurs organisations et" -" où nous formons un modèle sur ces organisations à l'aide de " -"l'apprentissage fédéré." - -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:432 -msgid "Updating model parameters" -msgstr "Mise à jour des paramètres du modèle" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:434 +#: ../../source/tutorial-quickstart-xgboost.rst:693 msgid "" -"In federated learning, the server sends the global model parameters to " -"the client, and the client updates the local model with the parameters " -"received from the server. It then trains the model on the local data " -"(which changes the model parameters locally) and sends the " -"updated/changed model parameters back to the server (or, alternatively, " -"it sends just the gradients back to the server, not the full model " -"parameters)." +"Also, the customised :code:`configure_fit` and :code:`configure_evaluate`" +" methods ensure the clients to be sequentially selected given FL round:" msgstr "" -"Dans l'apprentissage fédéré, le serveur envoie les paramètres du modèle " -"global au client, et le client met à jour le modèle local avec les " -"paramètres reçus du serveur. Il entraîne ensuite le modèle sur les " -"données locales (ce qui modifie les paramètres du modèle localement) et " -"renvoie les paramètres du modèle mis à jour/changés au serveur (ou, " -"alternativement, il renvoie seulement les gradients au serveur, et non " -"pas les paramètres complets du modèle)." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:436 -msgid "" -"We need two helper functions to update the local model with parameters " -"received from the server and to get the updated model parameters from the" -" local model: ``set_parameters`` and ``get_parameters``. The following " -"two functions do just that for the PyTorch model above." +#: ../../source/tutorial-quickstart-xgboost.rst:757 +msgid "Customised data partitioning" msgstr "" -"Nous avons besoin de deux fonctions d'aide pour mettre à jour le modèle " -"local avec les paramètres reçus du serveur et pour obtenir les paramètres" -" mis à jour du modèle local : ``set_parameters`` et ``get_parameters``. " -"Les deux fonctions suivantes font exactement cela pour le modèle PyTorch " -"ci-dessus." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:438 +#: ../../source/tutorial-quickstart-xgboost.rst:759 msgid "" -"The details of how this works are not really important here (feel free to" -" consult the PyTorch documentation if you want to learn more). In " -"essence, we use ``state_dict`` to access PyTorch model parameter tensors." 
-" The parameter tensors are then converted to/from a list of NumPy " -"ndarray's (which Flower knows how to serialize/deserialize):" +"In :code:`dataset.py`, we have a function :code:`instantiate_partitioner`" +" to instantiate the data partitioner based on the given " +":code:`num_partitions` and :code:`partitioner_type`. Currently, we " +"provide four supported partitioner type to simulate the uniformity/non-" +"uniformity in data quantity (uniform, linear, square, exponential)." msgstr "" -"Les détails de ce fonctionnement ne sont pas vraiment importants ici " -"(n'hésite pas à consulter la documentation PyTorch si tu veux en savoir " -"plus). En substance, nous utilisons ``state_dict`` pour accéder aux " -"tenseurs de paramètres du modèle PyTorch. Les tenseurs de paramètres sont" -" ensuite convertis en/depuis une liste de ndarray NumPy (que Flower sait " -"sérialiser/désérialiser) :" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:466 -msgid "Implementing a Flower client" -msgstr "Mise en place d'un client Flower" +#: ../../source/tutorial-quickstart-xgboost.rst:790 +#, fuzzy +msgid "Customised centralised/distributed evaluation" +msgstr "Évaluation centralisée" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:468 +#: ../../source/tutorial-quickstart-xgboost.rst:792 msgid "" -"With that out of the way, let's move on to the interesting part. " -"Federated learning systems consist of a server and multiple clients. In " -"Flower, we create clients by implementing subclasses of " -"``flwr.client.Client`` or ``flwr.client.NumPyClient``. We use " -"``NumPyClient`` in this tutorial because it is easier to implement and " -"requires us to write less boilerplate." +"To facilitate centralised evaluation, we define a function in " +":code:`server_utils.py`:" msgstr "" -"Ceci étant dit, passons à la partie intéressante. Les systèmes " -"d'apprentissage fédérés se composent d'un serveur et de plusieurs " -"clients. Dans Flower, nous créons des clients en mettant en œuvre des " -"sous-classes de ``flwr.client.Client`` ou de ``flwr.client.NumPyClient``." -" Nous utilisons ``NumPyClient`` dans ce tutoriel parce qu'il est plus " -"facile à mettre en œuvre et qu'il nous oblige à rédiger moins de modèles " -"de chaudière." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:470 +#: ../../source/tutorial-quickstart-xgboost.rst:824 msgid "" -"To implement the Flower client, we create a subclass of " -"``flwr.client.NumPyClient`` and implement the three methods " -"``get_parameters``, ``fit``, and ``evaluate``:" +"This function returns a evaluation function which instantiates a " +":code:`Booster` object and loads the global model weights to it. The " +"evaluation is conducted by calling :code:`eval_set()` method, and the " +"tested AUC value is reported." 
msgstr "" -"Pour mettre en œuvre le client Flower, nous créons une sous-classe de " -"``flwr.client.NumPyClient`` et mettons en œuvre les trois méthodes " -"``get_parameters``, ``fit`` et ``evaluate`` :" - -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:472 -msgid "``get_parameters``: Return the current local model parameters" -msgstr "``get_parameters`` : renvoie les paramètres du modèle local actuel" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:473 +#: ../../source/tutorial-quickstart-xgboost.rst:827 msgid "" -"``fit``: Receive model parameters from the server, train the model " -"parameters on the local data, and return the (updated) model parameters " -"to the server" +"As for distributed evaluation on the clients, it's same as the quick-" +"start example by overriding the :code:`evaluate()` method insides the " +":code:`XgbClient` class in :code:`client_utils.py`." msgstr "" -"``fit`` : reçoit les paramètres du modèle du serveur, entraîne les " -"paramètres du modèle sur les données locales et renvoie les paramètres du" -" modèle (mis à jour) au serveur" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:474 +#: ../../source/tutorial-quickstart-xgboost.rst:831 +#, fuzzy +msgid "Flower simulation" +msgstr "Simulation de moniteur" + +#: ../../source/tutorial-quickstart-xgboost.rst:832 msgid "" -"``evaluate``: Receive model parameters from the server, evaluate the " -"model parameters on the local data, and return the evaluation result to " -"the server" +"We also provide an example code (:code:`sim.py`) to use the simulation " +"capabilities of Flower to simulate federated XGBoost training on either a" +" single machine or a cluster of machines." msgstr "" -"``evaluate`` : reçoit les paramètres du modèle du serveur, évalue les " -"paramètres du modèle sur les données locales et renvoie le résultat de " -"l'évaluation au serveur" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:476 +#: ../../source/tutorial-quickstart-xgboost.rst:866 msgid "" -"We mentioned that our clients will use the previously defined PyTorch " -"components for model training and evaluation. Let's see a simple Flower " -"client implementation that brings everything together:" +"After importing all required packages, we define a :code:`main()` " +"function to perform the simulation process:" msgstr "" -"Nous avons mentionné que nos clients utiliseront les composants PyTorch " -"définis précédemment pour la formation et l'évaluation des modèles. " -"Voyons une simple mise en œuvre du client Flower qui réunit tout cela :" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:513 +#: ../../source/tutorial-quickstart-xgboost.rst:921 msgid "" -"Our class ``FlowerClient`` defines how local training/evaluation will be " -"performed and allows Flower to call the local training/evaluation through" -" ``fit`` and ``evaluate``. Each instance of ``FlowerClient`` represents a" -" *single client* in our federated learning system. Federated learning " -"systems have multiple clients (otherwise, there's not much to federate), " -"so each client will be represented by its own instance of " -"``FlowerClient``. If we have, for example, three clients in our workload," -" then we'd have three instances of ``FlowerClient``. Flower calls " -"``FlowerClient.fit`` on the respective instance when the server selects a" -" particular client for training (and ``FlowerClient.evaluate`` for " -"evaluation)." 
+"We first load the dataset and perform data partitioning, and the pre-" +"processed data is stored in a :code:`list`. After the simulation begins, " +"the clients won't need to pre-process their partitions again." msgstr "" -"Our class ``FlowerClient`` defines how local training/evaluation will be " -"performed and allows Flower to call the local training/evaluation through" -" ``fit`` and ``evaluate``. Each instance of ``FlowerClient`` represents a" -" *single client* in our federated learning system. Federated learning " -"systems have multiple clients (otherwise, there's not much to federate), " -"so each client will be represented by its own instance of " -"``FlowerClient``. If we have, for example, three clients in our workload," -" then we'd have three instances of ``FlowerClient``. Flower calls " -"``FlowerClient.fit`` on the respective instance when the server selects a" -" particular client for training (and ``FlowerClient.evaluate`` for " -"evaluation)." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:517 -msgid "Using the Virtual Client Engine" -msgstr "Utilisation du moteur du client virtuel" +#: ../../source/tutorial-quickstart-xgboost.rst:924 +msgid "Then, we define the strategies and other hyper-parameters:" +msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:519 +#: ../../source/tutorial-quickstart-xgboost.rst:975 msgid "" -"In this notebook, we want to simulate a federated learning system with 10" -" clients on a single machine. This means that the server and all 10 " -"clients will live on a single machine and share resources such as CPU, " -"GPU, and memory. Having 10 clients would mean having 10 instances of " -"``FlowerClient`` in memory. Doing this on a single machine can quickly " -"exhaust the available memory resources, even if only a subset of these " -"clients participates in a single round of federated learning." +"After that, we start the simulation by calling " +":code:`fl.simulation.start_simulation`:" msgstr "" -"Dans ce carnet, nous voulons simuler un système d'apprentissage fédéré " -"avec 10 clients sur une seule machine. Cela signifie que le serveur et " -"les 10 clients vivront sur une seule machine et partageront des " -"ressources telles que le CPU, le GPU et la mémoire. Avoir 10 clients " -"signifierait avoir 10 instances de ``FlowerClient`` en mémoire. Faire " -"cela sur une seule machine peut rapidement épuiser les ressources mémoire" -" disponibles, même si seulement un sous-ensemble de ces clients participe" -" à un seul tour d'apprentissage fédéré." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:521 +#: ../../source/tutorial-quickstart-xgboost.rst:995 msgid "" -"In addition to the regular capabilities where server and clients run on " -"multiple machines, Flower, therefore, provides special simulation " -"capabilities that create ``FlowerClient`` instances only when they are " -"actually necessary for training or evaluation. To enable the Flower " -"framework to create clients when necessary, we need to implement a " -"function called ``client_fn`` that creates a ``FlowerClient`` instance on" -" demand. Flower calls ``client_fn`` whenever it needs an instance of one " -"particular client to call ``fit`` or ``evaluate`` (those instances are " -"usually discarded after use, so they should not keep any local state). " -"Clients are identified by a client ID, or short ``cid``. 
The ``cid`` can "
-"be used, for example, to load different local data partitions for "
-"different clients, as can be seen below:"
+"One of the key parameters for :code:`start_simulation` is :code:`client_fn`"
+" which returns a function to construct a client. We define it as follows:"
 msgstr ""
-"In addition to the regular capabilities where server and clients run on "
-"multiple machines, Flower, therefore, provides special simulation "
-"capabilities that create ``FlowerClient`` instances only when they are "
-"actually necessary for training or evaluation. To enable the Flower "
-"framework to create clients when necessary, we need to implement a "
-"function called ``client_fn`` that creates a ``FlowerClient`` instance on"
-" demand. Flower calls ``client_fn`` whenever it needs an instance of one "
-"particular client to call ``fit`` or ``evaluate`` (those instances are "
-"usually discarded after use, so they should not keep any local state). "
-"Clients are identified by a client ID, or short ``cid``. The ``cid`` can "
-"be used, for example, to load different local data partitions for "
-"different clients, as can be seen below:"
-#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:556
-msgid "Starting the training"
-msgstr "Commencer la formation"
+#: ../../source/tutorial-quickstart-xgboost.rst:1038
+msgid "Arguments parser"
+msgstr ""
-#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:558
+#: ../../source/tutorial-quickstart-xgboost.rst:1040
 msgid ""
-"We now have the class ``FlowerClient`` which defines client-side "
-"training/evaluation and ``client_fn`` which allows Flower to create "
-"``FlowerClient`` instances whenever it needs to call ``fit`` or "
-"``evaluate`` on one particular client. The last step is to start the "
-"actual simulation using ``flwr.simulation.start_simulation``."
+"In :code:`utils.py`, we define the argument parsers for clients, server "
+"and simulation, allowing users to specify different experimental "
+"settings. Let's first see the server side:"
 msgstr ""
-"Nous avons maintenant la classe ``FlowerClient`` qui définit "
-"l'entraînement/évaluation côté client et ``client_fn`` qui permet à "
-"Flower de créer des instances de ``FlowerClient`` chaque fois qu'il a "
-"besoin d'appeler ``fit`` ou ``evaluate`` sur un client particulier. La "
-"dernière étape consiste à démarrer la simulation réelle en utilisant "
-"``flwr.simulation.start_simulation``."
-#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:560
+#: ../../source/tutorial-quickstart-xgboost.rst:1086
 msgid ""
-"The function ``start_simulation`` accepts a number of arguments, amongst "
-"them the ``client_fn`` used to create ``FlowerClient`` instances, the "
-"number of clients to simulate (``num_clients``), the number of federated "
-"learning rounds (``num_rounds``), and the strategy. The strategy "
-"encapsulates the federated learning approach/algorithm, for example, "
-"*Federated Averaging* (FedAvg)."
+"This allows users to specify training strategies / the number of total "
+"clients / FL rounds / participating clients / clients for evaluation, and"
+" evaluation fashion. Note that with :code:`--centralised-eval`, the server"
+" will do centralised evaluation and all functionalities for client "
+"evaluation will be disabled."
msgstr "" -"La fonction ``start_simulation`` accepte un certain nombre d'arguments, " -"parmi lesquels le ``client_fn`` utilisé pour créer les instances " -"``FlowerClient``, le nombre de clients à simuler (``num_clients``), le " -"nombre de tours d'apprentissage fédéré (``num_rounds``), et la stratégie." -" La stratégie encapsule l'approche/algorithme d'apprentissage fédéré, par" -" exemple, *Federated Averaging* (FedAvg)." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:562 -msgid "" -"Flower has a number of built-in strategies, but we can also use our own " -"strategy implementations to customize nearly all aspects of the federated" -" learning approach. For this example, we use the built-in ``FedAvg`` " -"implementation and customize it using a few basic parameters. The last " -"step is the actual call to ``start_simulation`` which - you guessed it - " -"starts the simulation:" +#: ../../source/tutorial-quickstart-xgboost.rst:1090 +msgid "Then, the argument parser on client side:" msgstr "" -"Flower dispose d'un certain nombre de stratégies intégrées, mais nous " -"pouvons également utiliser nos propres implémentations de stratégies pour" -" personnaliser presque tous les aspects de l'approche de l'apprentissage " -"fédéré. Pour cet exemple, nous utilisons l'implémentation intégrée " -"``FedAvg`` et nous la personnalisons en utilisant quelques paramètres de " -"base. La dernière étape est l'appel à ``start_simulation`` qui - tu l'as " -"deviné - démarre la simulation :" - -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:608 -msgid "Behind the scenes" -msgstr "Dans les coulisses" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:610 -msgid "So how does this work? How does Flower execute this simulation?" +#: ../../source/tutorial-quickstart-xgboost.rst:1144 +msgid "" +"This defines various options for client data partitioning. Besides, " +"clients also have an option to conduct evaluation on centralised test set" +" by setting :code:`--centralised-eval`, as well as an option to perform " +"scaled learning rate based on the number of clients by setting :code" +":`--scaled-lr`." msgstr "" -"Alors, comment cela fonctionne-t-il ? Comment Flower exécute-t-il cette " -"simulation ?" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:612 -#, python-format -msgid "" -"When we call ``start_simulation``, we tell Flower that there are 10 " -"clients (``num_clients=10``). Flower then goes ahead an asks the " -"``FedAvg`` strategy to select clients. ``FedAvg`` knows that it should " -"select 100% of the available clients (``fraction_fit=1.0``), so it goes " -"ahead and selects 10 random clients (i.e., 100% of 10)." +#: ../../source/tutorial-quickstart-xgboost.rst:1148 +msgid "We also have an argument parser for simulation:" msgstr "" -"Lorsque nous appelons ``start_simulation``, nous disons à Flower qu'il y " -"a 10 clients (``num_clients=10``). Flower demande alors à la stratégie " -"``FedAvg`` de sélectionner des clients. ``FedAvg` sait qu'il doit " -"sélectionner 100% des clients disponibles (``fraction_fit=1.0``), alors " -"il choisit 10 clients au hasard (c'est à dire 100% de 10)." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:614 -msgid "" -"Flower then asks the selected 10 clients to train the model. When the " -"server receives the model parameter updates from the clients, it hands " -"those updates over to the strategy (*FedAvg*) for aggregation. 
The " -"strategy aggregates those updates and returns the new global model, which" -" then gets used in the next round of federated learning." +#: ../../source/tutorial-quickstart-xgboost.rst:1226 +msgid "This integrates all arguments for both client and server sides." msgstr "" -"Flower demande ensuite aux 10 clients sélectionnés d'entraîner le modèle." -" Lorsque le serveur reçoit les mises à jour des paramètres du modèle de " -"la part des clients, il les transmet à la stratégie (*FedAvg*) pour " -"qu'elle les agrège. La stratégie agrège ces mises à jour et renvoie le " -"nouveau modèle global, qui est ensuite utilisé dans le prochain cycle " -"d'apprentissage fédéré." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:626 -msgid "Where's the accuracy?" -msgstr "Où est la précision ?" +#: ../../source/tutorial-quickstart-xgboost.rst:1229 +#, fuzzy +msgid "Example commands" +msgstr "Exemples de PyTorch" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:628 +#: ../../source/tutorial-quickstart-xgboost.rst:1231 msgid "" -"You may have noticed that all metrics except for ``losses_distributed`` " -"are empty. Where did the ``{\"accuracy\": float(accuracy)}`` go?" +"To run a centralised evaluated experiment with bagging strategy on 5 " +"clients with exponential distribution for 50 rounds, we first start the " +"server as below:" msgstr "" -"Tu as peut-être remarqué que toutes les mesures, à l'exception de " -"``pertes_distribuées``, sont vides. Où est passée la ``{\"précision\" : " -"float(précision)}`` ?" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:630 -msgid "" -"Flower can automatically aggregate losses returned by individual clients," -" but it cannot do the same for metrics in the generic metrics dictionary " -"(the one with the ``accuracy`` key). Metrics dictionaries can contain " -"very different kinds of metrics and even key/value pairs that are not " -"metrics at all, so the framework does not (and can not) know how to " -"handle these automatically." -msgstr "" -"Flower peut automatiquement agréger les pertes renvoyées par les clients " -"individuels, mais il ne peut pas faire la même chose pour les mesures " -"dans le dictionnaire de mesures générique (celui avec la clé " -"``accuracy``). Les dictionnaires de mesures peuvent contenir des types de" -" mesures très différents et même des paires clé/valeur qui ne sont pas " -"des mesures du tout, donc le cadre ne sait pas (et ne peut pas) savoir " -"comment les gérer automatiquement." +#: ../../source/tutorial-quickstart-xgboost.rst:1238 +#, fuzzy +msgid "Then, on each client terminal, we start the clients:" +msgstr "Ouvre un autre terminal et démarre le deuxième client :" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:632 -msgid "" -"As users, we need to tell the framework how to handle/aggregate these " -"custom metrics, and we do so by passing metric aggregation functions to " -"the strategy. The strategy will then call these functions whenever it " -"receives fit or evaluate metrics from clients. The two possible functions" -" are ``fit_metrics_aggregation_fn`` and " -"``evaluate_metrics_aggregation_fn``." +#: ../../source/tutorial-quickstart-xgboost.rst:1244 +msgid "To run the same experiment with Flower simulation:" msgstr "" -"En tant qu'utilisateurs, nous devons indiquer au framework comment " -"gérer/agréger ces métriques personnalisées, et nous le faisons en passant" -" des fonctions d'agrégation de métriques à la stratégie. 
La stratégie " -"appellera alors ces fonctions chaque fois qu'elle recevra des métriques " -"d'ajustement ou d'évaluation de la part des clients. Les deux fonctions " -"possibles sont ``fit_metrics_aggregation_fn`` et " -"``evaluate_metrics_aggregation_fn``." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:634 +#: ../../source/tutorial-quickstart-xgboost.rst:1250 +#, fuzzy msgid "" -"Let's create a simple weighted averaging function to aggregate the " -"``accuracy`` metric we return from ``evaluate``:" +"The full `code `_ for this comprehensive example can be found in" +" :code:`examples/xgboost-comprehensive`." msgstr "" -"Créons une simple fonction de calcul de la moyenne pondérée pour agréger " -"la mesure de \"précision\" que nous renvoie ``evaluate`` :" +"Félicitations ! Tu as réussi à construire et à faire fonctionner ton " +"premier système d'apprentissage fédéré. Le code source complet " +"`_ de cet exemple se trouve dans :code:`examples" +"/quickstart-mxnet`." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:660 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:9 +#, fuzzy +msgid "Build a strategy from scratch" +msgstr "Élaborer une stratégie à partir de zéro" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:11 +#, fuzzy msgid "" -"The only thing left to do is to tell the strategy to call this function " -"whenever it receives evaluation metric dictionaries from the clients:" +"Welcome to the third part of the Flower federated learning tutorial. In " +"previous parts of this tutorial, we introduced federated learning with " +"PyTorch and Flower (`part 1 `__) and we learned how strategies " +"can be used to customize the execution on both the server and the clients" +" (`part 2 `__)." msgstr "" -"La seule chose qui reste à faire est d'indiquer à la stratégie d'appeler " -"cette fonction chaque fois qu'elle reçoit des dictionnaires de métriques " -"d'évaluation de la part des clients :" +"Bienvenue dans la troisième partie du tutoriel sur l'apprentissage fédéré" +" Flower. Dans les parties précédentes de ce tutoriel, nous avons présenté" +" l'apprentissage fédéré avec PyTorch et Flower (`partie 1 " +"`__) " +"et nous avons appris comment les stratégies peuvent être utilisées pour " +"personnaliser l'exécution à la fois sur le serveur et sur les clients " +"(`partie 2 `__)." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:697 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:13 msgid "" -"We now have a full system that performs federated training and federated " -"evaluation. It uses the ``weighted_average`` function to aggregate custom" -" evaluation metrics and calculates a single ``accuracy`` metric across " -"all clients on the server side." +"In this notebook, we'll continue to customize the federated learning " +"system we built previously by creating a custom version of FedAvg (again," +" using `Flower `__ and `PyTorch " +"`__)." msgstr "" -"Nous avons maintenant un système complet qui effectue la formation " -"fédérée et l'évaluation fédérée. Il utilise la fonction ``moyenne " -"pondérée`` pour agréger les mesures d'évaluation personnalisées et " -"calcule une seule mesure de ``précision`` pour tous les clients du côté " -"du serveur." 
+"Dans ce carnet, nous allons continuer à personnaliser le système " +"d'apprentissage fédéré que nous avons construit précédemment en créant " +"une version personnalisée de FedAvg (encore une fois, en utilisant " +"`Flower `__ et `PyTorch `__)." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:699 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:15 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:16 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:15 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:15 msgid "" -"The other two categories of metrics (``losses_centralized`` and " -"``metrics_centralized``) are still empty because they only apply when " -"centralized evaluation is being used. Part two of the Flower tutorial " -"will cover centralized evaluation." +"`Star Flower on GitHub `__ ⭐️ and join " +"the Flower community on Slack to connect, ask questions, and get help: " +"`Join Slack `__ 🌼 We'd love to hear from " +"you in the ``#introductions`` channel! And if anything is unclear, head " +"over to the ``#questions`` channel." msgstr "" -"Les deux autres catégories de mesures (``pertes_centralisées`` et " -"``métriques_centralisées``) sont toujours vides car elles ne s'appliquent" -" que lorsque l'évaluation centralisée est utilisée. La deuxième partie du" -" tutoriel sur les fleurs couvrira l'évaluation centralisée." +"`Star Flower on GitHub `__ ⭐️ et " +"rejoignez la communauté Flower sur Slack pour vous connecter, poser des " +"questions et obtenir de l'aide : `Join Slack `__ 🌼 Nous serions ravis d'avoir de vos nouvelles dans le canal " +"``#introductions`` ! Et si quelque chose n'est pas clair, rendez-vous sur" +" le canal ``#questions``." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:711 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:351 -msgid "Final remarks" -msgstr "Remarques finales" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:17 +msgid "Let's build a new ``Strategy`` from scratch!" +msgstr "Construisons une nouvelle ``Stratégie`` à partir de zéro !" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:713 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:29 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:29 +msgid "Preparation" +msgstr "Préparation" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:31 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:32 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:31 msgid "" -"Congratulations, you just trained a convolutional neural network, " -"federated over 10 clients! With that, you understand the basics of " -"federated learning with Flower. The same approach you've seen can be used" -" with other machine learning frameworks (not just PyTorch) and tasks (not" -" just CIFAR-10 images classification), for example NLP with Hugging Face " -"Transformers or speech with SpeechBrain." +"Before we begin with the actual code, let's make sure that we have " +"everything we need." msgstr "" -"Félicitations, tu viens d'entraîner un réseau neuronal convolutif, fédéré" -" sur 10 clients ! Avec ça, tu comprends les bases de l'apprentissage " -"fédéré avec Flower. 
La même approche que tu as vue peut être utilisée " -"avec d'autres cadres d'apprentissage automatique (pas seulement PyTorch) " -"et d'autres tâches (pas seulement la classification des images CIFAR-10)," -" par exemple le NLP avec Hugging Face Transformers ou la parole avec " -"SpeechBrain." +"Avant de commencer le code proprement dit, assurons-nous que nous " +"disposons de tout ce dont nous avons besoin." -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:715 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:43 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:44 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:43 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:43 +msgid "Installing dependencies" +msgstr "Installation des dépendances" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:45 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:46 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:45 +msgid "First, we install the necessary packages:" +msgstr "Tout d'abord, nous installons les paquets nécessaires :" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:65 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:66 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:65 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:65 msgid "" -"In the next notebook, we're going to cover some more advanced concepts. " -"Want to customize your strategy? Initialize parameters on the server " -"side? Or evaluate the aggregated model on the server side? We'll cover " -"all this and more in the next tutorial." +"Now that we have all dependencies installed, we can import everything we " +"need for this tutorial:" msgstr "" -"Dans le prochain cahier, nous allons aborder des concepts plus avancés. " -"Tu veux personnaliser ta stratégie ? Initialiser des paramètres côté " -"serveur ? Ou évaluer le modèle agrégé côté serveur ? Nous aborderons tout" -" cela et bien plus encore dans le prochain tutoriel." +"Maintenant que toutes les dépendances sont installées, nous pouvons " +"importer tout ce dont nous avons besoin pour ce tutoriel :" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:733 -#, fuzzy +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:101 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:102 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:101 msgid "" -"The `Flower Federated Learning Tutorial - Part 2 " -"`__ goes into more depth about strategies and all " -"the advanced things you can build with them." +"It is possible to switch to a runtime that has GPU acceleration enabled " +"(on Google Colab: ``Runtime > Change runtime type > Hardware acclerator: " +"GPU > Save``). Note, however, that Google Colab is not always able to " +"offer GPU acceleration. If you see an error related to GPU availability " +"in one of the following sections, consider switching back to CPU-based " +"execution by setting ``DEVICE = torch.device(\"cpu\")``. If the runtime " +"has GPU acceleration enabled, you should see the output ``Training on " +"cuda``, otherwise it'll say ``Training on cpu``." 
msgstr "" -"Le `Tutoriel d'apprentissage fédéré Flower - Partie 2 " -"`__ va plus en profondeur sur les stratégies et toutes les " -"choses avancées que tu peux construire avec elles." +"Il est possible de passer à un runtime dont l'accélération GPU est " +"activée (sur Google Colab : ``Runtime > Change runtime type > Hardware " +"acclerator : GPU > Save``). Note cependant que Google Colab n'est pas " +"toujours en mesure de proposer l'accélération GPU. Si tu vois une erreur " +"liée à la disponibilité du GPU dans l'une des sections suivantes, " +"envisage de repasser à une exécution basée sur le CPU en définissant " +"``DEVICE = torch.device(\"cpu\")``. Si le runtime a activé l'accélération" +" GPU, tu devrais voir apparaître le résultat ``Training on cuda``, sinon " +"il dira ``Training on cpu``." -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:9 -#, fuzzy -msgid "Use a federated learning strategy" -msgstr "Stratégie de moyenne fédérée." +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:114 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:115 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:114 +msgid "Data loading" +msgstr "Chargement des données" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:11 -#, fuzzy +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:116 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:116 msgid "" -"Welcome to the next part of the federated learning tutorial. In previous " -"parts of this tutorial, we introduced federated learning with PyTorch and" -" Flower (`part 1 `__)." +"Let's now load the CIFAR-10 training and test set, partition them into " +"ten smaller datasets (each split into training and validation set), and " +"wrap everything in their own ``DataLoader``. We introduce a new parameter" +" ``num_clients`` which allows us to call ``load_datasets`` with different" +" numbers of clients." msgstr "" -"Bienvenue dans la prochaine partie du tutoriel sur l'apprentissage " -"fédéré. Dans les parties précédentes de ce tutoriel, nous avons présenté " -"l'apprentissage fédéré avec PyTorch et Flower (`partie 1 " -"`__)." +"Chargeons maintenant les ensembles d'entraînement et de test CIFAR-10, " +"divisons-les en dix ensembles de données plus petits (chacun divisé en " +"ensemble d'entraînement et de validation), et enveloppons le tout dans " +"leur propre ``DataLoader``. Nous introduisons un nouveau paramètre " +"``num_clients`` qui nous permet d'appeler ``load_datasets`` avec " +"différents nombres de clients." -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:13 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:167 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:168 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:167 +msgid "Model training/evaluation" +msgstr "Formation/évaluation du modèle" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:169 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:170 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:169 msgid "" -"In this notebook, we'll begin to customize the federated learning system " -"we built in the introductory notebook (again, using `Flower " -"`__ and `PyTorch `__)." 
+"Let's continue with the usual model definition (including " +"``set_parameters`` and ``get_parameters``), training and test functions:" msgstr "" -"Dans ce carnet, nous allons commencer à personnaliser le système " -"d'apprentissage fédéré que nous avons construit dans le carnet " -"d'introduction (toujours en utilisant `Flower `__ et " -"`PyTorch `__)." - -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:17 -#, fuzzy -msgid "Let's move beyond FedAvg with Flower strategies!" -msgstr "Dépassons FedAvg avec les stratégies florales !" +"Continuons avec la définition habituelle du modèle (y compris " +"``set_parameters`` et ``get_parameters``), les fonctions d'entraînement " +"et de test :" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:309 -msgid "Strategy customization" -msgstr "Personnalisation de la stratégie" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:258 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:258 +msgid "Flower client" +msgstr "Client de Flower" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:311 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:260 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:260 msgid "" -"So far, everything should look familiar if you've worked through the " -"introductory notebook. With that, we're ready to introduce a number of " -"new features." +"To implement the Flower client, we (again) create a subclass of " +"``flwr.client.NumPyClient`` and implement the three methods " +"``get_parameters``, ``fit``, and ``evaluate``. Here, we also pass the " +"``cid`` to the client and use it log additional details:" msgstr "" -"Jusqu'à présent, tout devrait te sembler familier si tu as travaillé sur " -"le cahier d'introduction. Avec cela, nous sommes prêts à présenter un " -"certain nombre de nouvelles fonctionnalités." +"Pour mettre en œuvre le client Flower, nous créons (à nouveau) une sous-" +"classe de ``flwr.client.NumPyClient`` et mettons en œuvre les trois " +"méthodes ``get_parameters``, ``fit`` et ``evaluate``. Ici, nous " +"transmettons également le ``cid`` au client et l'utilisons pour consigner" +" des détails supplémentaires :" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:323 -msgid "Server-side parameter **initialization**" -msgstr "Paramètres côté serveur **initialisation**" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:308 +msgid "Let's test what we have so far before we continue:" +msgstr "Testons ce que nous avons jusqu'à présent avant de continuer :" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:325 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:339 +msgid "Build a Strategy from scratch" +msgstr "Élaborer une stratégie à partir de zéro" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:341 msgid "" -"Flower, by default, initializes the global model by asking one random " -"client for the initial parameters. In many cases, we want more control " -"over parameter initialization though. Flower therefore allows you to " -"directly pass the initial parameters to the Strategy:" +"Let’s overwrite the ``configure_fit`` method such that it passes a higher" +" learning rate (potentially also other hyperparameters) to the optimizer " +"of a fraction of the clients. 
We will keep the sampling of the clients as" +" it is in ``FedAvg`` and then change the configuration dictionary (one of" +" the ``FitIns`` attributes)." msgstr "" -"Flower, par défaut, initialise le modèle global en demandant à un client " -"aléatoire les paramètres initiaux. Dans de nombreux cas, nous voulons " -"cependant avoir plus de contrôle sur l'initialisation des paramètres. " -"Flower te permet donc de passer directement les paramètres initiaux à la " -"Stratégie :" +"Remplaçons la méthode ``configure_fit`` de façon à ce qu'elle transmette " +"un taux d'apprentissage plus élevé (potentiellement aussi d'autres " +"hyperparamètres) à l'optimiseur d'une fraction des clients. Nous " +"garderons l'échantillonnage des clients tel qu'il est dans ``FedAvg`` et " +"changerons ensuite le dictionnaire de configuration (l'un des attributs " +"``FitIns``)." -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:370 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:507 msgid "" -"Passing ``initial_parameters`` to the ``FedAvg`` strategy prevents Flower" -" from asking one of the clients for the initial parameters. If we look " -"closely, we can see that the logs do not show any calls to the " -"``FlowerClient.get_parameters`` method." +"The only thing left is to use the newly created custom Strategy " +"``FedCustom`` when starting the experiment:" msgstr "" -"Le fait de passer ``initial_parameters`` à la stratégie ``FedAvg`` " -"empêche Flower de demander les paramètres initiaux à l'un des clients. Si" -" nous regardons de près, nous pouvons voir que les journaux ne montrent " -"aucun appel à la méthode ``FlowerClient.get_parameters``." +"Il ne reste plus qu'à utiliser la stratégie personnalisée nouvellement " +"créée ``FedCustom`` lors du démarrage de l'expérience :" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:382 -msgid "Starting with a customized strategy" -msgstr "Commencer par une stratégie personnalisée" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:534 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:932 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:697 +msgid "Recap" +msgstr "Récapitulation" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:384 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:536 msgid "" -"We've seen the function ``start_simulation`` before. It accepts a number " -"of arguments, amongst them the ``client_fn`` used to create " -"``FlowerClient`` instances, the number of clients to simulate " -"``num_clients``, the number of rounds ``num_rounds``, and the strategy." +"In this notebook, we’ve seen how to implement a custom strategy. A custom" +" strategy enables granular control over client node configuration, result" +" aggregation, and more. To define a custom strategy, you only have to " +"overwrite the abstract methods of the (abstract) base class ``Strategy``." +" To make custom strategies even more powerful, you can pass custom " +"functions to the constructor of your new class (``__init__``) and then " +"call these functions whenever needed." msgstr "" -"Elle accepte un certain nombre d'arguments, parmi lesquels le " -"``client_fn`` utilisé pour créer les instances de ``FlowerClient``, le " -"nombre de clients à simuler ``num_clients``, le nombre de rounds " -"``num_rounds``, et la stratégie." 
+"Dans ce carnet, nous avons vu comment mettre en place une stratégie " +"personnalisée. Une stratégie personnalisée permet un contrôle granulaire " +"sur la configuration des nœuds clients, l'agrégation des résultats, et " +"bien plus encore. Pour définir une stratégie personnalisée, il te suffit " +"d'écraser les méthodes abstraites de la classe de base (abstraite) " +"``Strategy``. Pour rendre les stratégies personnalisées encore plus " +"puissantes, tu peux passer des fonctions personnalisées au constructeur " +"de ta nouvelle classe (``__init__``) et appeler ensuite ces fonctions à " +"chaque fois que c'est nécessaire." -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:386 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:550 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:948 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:729 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:715 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:369 msgid "" -"The strategy encapsulates the federated learning approach/algorithm, for " -"example, ``FedAvg`` or ``FedAdagrad``. Let's try to use a different " -"strategy this time:" +"Before you continue, make sure to join the Flower community on Slack: " +"`Join Slack `__" msgstr "" -"La stratégie englobe l'approche/l'algorithme d'apprentissage fédéré, par " -"exemple, ``FedAvg`` ou ``FedAdagrad``. Essayons d'utiliser une stratégie " -"différente cette fois-ci :" - -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:424 -msgid "Server-side parameter **evaluation**" -msgstr "Paramètre côté serveur **évaluation**" +"Avant de continuer, n'oublie pas de rejoindre la communauté Flower sur " +"Slack : `Join Slack `__" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:426 -msgid "" -"Flower can evaluate the aggregated model on the server-side or on the " -"client-side. Client-side and server-side evaluation are similar in some " -"ways, but different in others." +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:552 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:950 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:731 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:717 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:371 +msgid "" +"There's a dedicated ``#questions`` channel if you need help, but we'd " +"also love to hear who you are in ``#introductions``!" msgstr "" -"Flower peut évaluer le modèle agrégé côté serveur ou côté client. Les " -"évaluations côté client et côté serveur sont similaires à certains " -"égards, mais différentes à d'autres." +"Il existe un canal dédié aux ``questions`` si vous avez besoin d'aide, " +"mais nous aimerions aussi savoir qui vous êtes dans ``#introductions`` !" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:428 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:554 +#, fuzzy msgid "" -"**Centralized Evaluation** (or *server-side evaluation*) is conceptually " -"simple: it works the same way that evaluation in centralized machine " -"learning does. If there is a server-side dataset that can be used for " -"evaluation purposes, then that's great. 
We can evaluate the newly " -"aggregated model after each round of training without having to send the " -"model to clients. We're also fortunate in the sense that our entire " -"evaluation dataset is available at all times." +"The `Flower Federated Learning Tutorial - Part 4 " +"`__ introduces ``Client``, the flexible API underlying " +"``NumPyClient``." msgstr "" -"**L'évaluation centralisée** (ou *évaluation côté serveur*) est " -"conceptuellement simple : elle fonctionne de la même manière que " -"l'évaluation dans l'apprentissage automatique centralisé. S'il existe un " -"ensemble de données côté serveur qui peut être utilisé à des fins " -"d'évaluation, alors c'est parfait. Nous pouvons évaluer le modèle " -"nouvellement agrégé après chaque cycle de formation sans avoir à envoyer " -"le modèle aux clients. Nous avons également la chance que l'ensemble de " -"notre ensemble de données d'évaluation soit disponible à tout moment." +"Le `Tutoriel d'apprentissage fédéré Flower - Partie 4 " +"`__ présente ``Client``, l'API flexible qui sous-tend " +"``NumPyClient``." -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:430 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:9 +#, fuzzy +msgid "Customize the client" +msgstr "Création du client IMDBC" + +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:11 +#, fuzzy msgid "" -"**Federated Evaluation** (or *client-side evaluation*) is more complex, " -"but also more powerful: it doesn't require a centralized dataset and " -"allows us to evaluate models over a larger set of data, which often " -"yields more realistic evaluation results. In fact, many scenarios require" -" us to use **Federated Evaluation** if we want to get representative " -"evaluation results at all. But this power comes at a cost: once we start " -"to evaluate on the client side, we should be aware that our evaluation " -"dataset can change over consecutive rounds of learning if those clients " -"are not always available. Moreover, the dataset held by each client can " -"also change over consecutive rounds. This can lead to evaluation results " -"that are not stable, so even if we would not change the model, we'd see " -"our evaluation results fluctuate over consecutive rounds." +"Welcome to the fourth part of the Flower federated learning tutorial. In " +"the previous parts of this tutorial, we introduced federated learning " +"with PyTorch and Flower (`part 1 `__), we learned how " +"strategies can be used to customize the execution on both the server and " +"the clients (`part 2 `__), and we built our own " +"custom strategy from scratch (`part 3 `__)." msgstr "" -"**L'évaluation fédérée** (ou évaluation côté client) est plus complexe, " -"mais aussi plus puissante : elle ne nécessite pas d'ensemble de données " -"centralisé et nous permet d'évaluer les modèles sur un plus grand " -"ensemble de données, ce qui donne souvent des résultats d'évaluation plus" -" réalistes. En fait, de nombreux scénarios exigent que nous utilisions " -"l'évaluation fédérée** si nous voulons obtenir des résultats d'évaluation" -" représentatifs. Mais cette puissance a un coût : une fois que nous " -"commençons à évaluer côté client, nous devons savoir que notre ensemble " -"de données d'évaluation peut changer au cours des cycles d'apprentissage " -"consécutifs si ces clients ne sont pas toujours disponibles. De plus, " -"l'ensemble de données détenu par chaque client peut également changer au " -"cours des cycles consécutifs. 
Cela peut conduire à des résultats " -"d'évaluation qui ne sont pas stables, donc même si nous ne changions pas " -"le modèle, nous verrions nos résultats d'évaluation fluctuer au cours des" -" cycles consécutifs." +"Bienvenue dans la quatrième partie du tutoriel sur l'apprentissage fédéré" +" Flower. Dans les parties précédentes de ce tutoriel, nous avons présenté" +" l'apprentissage fédéré avec PyTorch et Flower (`partie 1 " +"`__), " +"nous avons appris comment les stratégies peuvent être utilisées pour " +"personnaliser l'exécution à la fois sur le serveur et les clients " +"(`partie 2 `__), et nous avons construit notre propre stratégie " +"personnalisée à partir de zéro (`partie 3 - WIP " +"`__)." -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:433 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:14 msgid "" -"We've seen how federated evaluation works on the client side (i.e., by " -"implementing the ``evaluate`` method in ``FlowerClient``). Now let's see " -"how we can evaluate aggregated model parameters on the server-side:" +"In this notebook, we revisit ``NumPyClient`` and introduce a new " +"baseclass for building clients, simply named ``Client``. In previous " +"parts of this tutorial, we've based our client on ``NumPyClient``, a " +"convenience class which makes it easy to work with machine learning " +"libraries that have good NumPy interoperability. With ``Client``, we gain" +" a lot of flexibility that we didn't have before, but we'll also have to " +"do a few things the we didn't have to do before." msgstr "" -"Nous avons vu comment l'évaluation fédérée fonctionne du côté client " -"(c'est-à-dire en implémentant la méthode ``evaluate`` dans " -"``FlowerClient``). Voyons maintenant comment nous pouvons évaluer les " -"paramètres du modèle agrégé du côté serveur :" - -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:490 -msgid "Sending/receiving arbitrary values to/from clients" -msgstr "Envoi/réception de valeurs arbitraires vers/depuis les clients" +"Dans ce carnet, nous revisitons `NumPyClient`` et introduisons une " +"nouvelle classe de base pour construire des clients, simplement appelée " +"`Client``. Dans les parties précédentes de ce tutoriel, nous avons basé " +"notre client sur ``NumPyClient``, une classe de commodité qui facilite le" +" travail avec les bibliothèques d'apprentissage automatique qui ont une " +"bonne interopérabilité NumPy. Avec ``Client``, nous gagnons beaucoup de " +"flexibilité que nous n'avions pas auparavant, mais nous devrons également" +" faire quelques choses que nous n'avions pas à faire auparavant." -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:492 -#, fuzzy +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:18 msgid "" -"In some situations, we want to configure client-side execution (training," -" evaluation) from the server-side. One example for that is the server " -"asking the clients to train for a certain number of local epochs. Flower " -"provides a way to send configuration values from the server to the " -"clients using a dictionary. Let's look at an example where the clients " -"receive values from the server through the ``config`` parameter in " -"``fit`` (``config`` is also available in ``evaluate``). The ``fit`` " -"method receives the configuration dictionary through the ``config`` " -"parameter and can then read values from this dictionary. 
In this example," -" it reads ``server_round`` and ``local_epochs`` and uses those values to " -"improve the logging and configure the number of local training epochs:" +"Let's go deeper and see what it takes to move from ``NumPyClient`` to " +"``Client``!" msgstr "" -"In some situations, we want to configure client-side execution (training," -" evaluation) from the server-side. One example for that is the server " -"asking the clients to train for a certain number of local epochs. Flower " -"provides a way to send configuration values from the server to the " -"clients using a dictionary. Let's look at an example where the clients " -"receive values from the server through the ``config`` parameter in " -"``fit`` (``config`` is also available in ``evaluate``). The ``fit`` " -"method receives the configuration dictionary through the ``config`` " -"parameter and can then read values from this dictionary. In this example," -" it reads ``server_round`` and ``local_epochs`` and uses those values to " -"improve the logging and configure the number of local training epochs:" +"Allons plus loin et voyons ce qu'il faut faire pour passer de " +"``NumPyClient`` à ``Client`` !" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:546 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:30 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:29 +msgid "Step 0: Preparation" +msgstr "Étape 0 : Préparation" + +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:117 msgid "" -"So how can we send this config dictionary from server to clients? The " -"built-in Flower Strategies provide way to do this, and it works similarly" -" to the way server-side evaluation works. We provide a function to the " -"strategy, and the strategy calls this function for every round of " -"federated learning:" +"Let's now load the CIFAR-10 training and test set, partition them into " +"ten smaller datasets (each split into training and validation set), and " +"wrap everything in their own ``DataLoader``." msgstr "" -"Comment pouvons-nous donc envoyer ce dictionnaire de configuration du " -"serveur aux clients ? Les stratégies de Flower intégrées fournissent un " -"moyen de le faire, et cela fonctionne de la même façon que l'évaluation " -"côté serveur. Nous fournissons une fonction à la stratégie, et la " -"stratégie appelle cette fonction pour chaque cycle d'apprentissage fédéré" -" :" +"Chargeons maintenant les ensembles d'entraînement et de test CIFAR-10, " +"divisons-les en dix ensembles de données plus petits (chacun divisé en " +"ensemble d'entraînement et de validation) et enveloppons le tout dans " +"leur propre ``DataLoader``." -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:576 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:259 +msgid "Step 1: Revisiting NumPyClient" +msgstr "Étape 1 : Revoir NumPyClient" + +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:261 msgid "" -"Next, we'll just pass this function to the FedAvg strategy before " -"starting the simulation:" +"So far, we've implemented our client by subclassing " +"``flwr.client.NumPyClient``. The three methods we implemented are " +"``get_parameters``, ``fit``, and ``evaluate``. 
Finally, we wrap the " +"creation of instances of this class in a function called ``client_fn``:" msgstr "" -"Ensuite, nous allons simplement passer cette fonction à la stratégie " -"FedAvg avant de commencer la simulation :" +"Jusqu'à présent, nous avons implémenté notre client en sous-classant " +"``flwr.client.NumPyClient``. Les trois méthodes que nous avons " +"implémentées sont ``get_parameters``, ``fit`` et ``evaluate``. Enfin, " +"nous enveloppons la création d'instances de cette classe dans une " +"fonction appelée ``client_fn`` :" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:613 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:309 msgid "" -"As we can see, the client logs now include the current round of federated" -" learning (which they read from the ``config`` dictionary). We can also " -"configure local training to run for one epoch during the first and second" -" round of federated learning, and then for two epochs during the third " -"round." +"We've seen this before, there's nothing new so far. The only *tiny* " +"difference compared to the previous notebook is naming, we've changed " +"``FlowerClient`` to ``FlowerNumPyClient`` and ``client_fn`` to " +"``numpyclient_fn``. Let's run it to see the output we get:" msgstr "" -"Comme nous pouvons le voir, les journaux des clients incluent maintenant " -"le cycle actuel d'apprentissage fédéré (qu'ils lisent depuis le " -"dictionnaire ``config``). Nous pouvons également configurer " -"l'apprentissage local pour qu'il s'exécute pendant une époque au cours du" -" premier et du deuxième cycle d'apprentissage fédéré, puis pendant deux " -"époques au cours du troisième cycle." +"Nous avons déjà vu cela auparavant, il n'y a rien de nouveau jusqu'à " +"présent. La seule *petite* différence par rapport au carnet précédent est" +" le nommage, nous avons changé ``FlowerClient`` en ``FlowerNumPyClient`` " +"et ``client_fn`` en ``numpyclient_fn``. Exécutons-le pour voir la sortie " +"que nous obtenons :" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:615 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:339 msgid "" -"Clients can also return arbitrary values to the server. To do so, they " -"return a dictionary from ``fit`` and/or ``evaluate``. We have seen and " -"used this concept throughout this notebook without mentioning it " -"explicitly: our ``FlowerClient`` returns a dictionary containing a custom" -" key/value pair as the third return value in ``evaluate``." +"This works as expected, two clients are training for three rounds of " +"federated learning." msgstr "" -"Les clients peuvent également renvoyer des valeurs arbitraires au " -"serveur. Pour ce faire, ils renvoient un dictionnaire depuis ``fit`` " -"et/ou ``evaluate``. Nous avons vu et utilisé ce concept tout au long de " -"ce carnet sans le mentionner explicitement : notre ``FlowerClient`` " -"renvoie un dictionnaire contenant une paire clé/valeur personnalisée en " -"tant que troisième valeur de retour dans ``evaluate``." - -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:627 -msgid "Scaling federated learning" -msgstr "Mise à l'échelle de l'apprentissage fédéré" +"Cela fonctionne comme prévu, deux clients s'entraînent pour trois tours " +"d'apprentissage fédéré." 
-#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:629 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:341 msgid "" -"As a last step in this notebook, let's see how we can use Flower to " -"experiment with a large number of clients." +"Let's dive a little bit deeper and discuss how Flower executes this " +"simulation. Whenever a client is selected to do some work, " +"``start_simulation`` calls the function ``numpyclient_fn`` to create an " +"instance of our ``FlowerNumPyClient`` (along with loading the model and " +"the data)." msgstr "" -"Comme dernière étape de ce carnet, voyons comment nous pouvons utiliser " -"Flower pour expérimenter avec un grand nombre de clients." +"Plongeons un peu plus profondément et discutons de la façon dont Flower " +"exécute cette simulation. Chaque fois qu'un client est sélectionné pour " +"effectuer un travail, ``start_simulation`` appelle la fonction " +"``numpyclient_fn`` pour créer une instance de notre ``FlowerNumPyClient``" +" (en même temps qu'il charge le modèle et les données)." -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:651 -#, python-format +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:343 msgid "" -"We now have 1000 partitions, each holding 45 training and 5 validation " -"examples. Given that the number of training examples on each client is " -"quite small, we should probably train the model a bit longer, so we " -"configure the clients to perform 3 local training epochs. We should also " -"adjust the fraction of clients selected for training during each round " -"(we don't want all 1000 clients participating in every round), so we " -"adjust ``fraction_fit`` to ``0.05``, which means that only 5% of " -"available clients (so 50 clients) will be selected for training each " -"round:" +"But here's the perhaps surprising part: Flower doesn't actually use the " +"``FlowerNumPyClient`` object directly. Instead, it wraps the object to " +"makes it look like a subclass of ``flwr.client.Client``, not " +"``flwr.client.NumPyClient``. In fact, the Flower core framework doesn't " +"know how to handle ``NumPyClient``'s, it only knows how to handle " +"``Client``'s. ``NumPyClient`` is just a convenience abstraction built on " +"top of ``Client``." msgstr "" -"Nous avons maintenant 1000 partitions, chacune contenant 45 exemples " -"d'entraînement et 5 exemples de validation. Etant donné que le nombre " -"d'exemples d'entraînement sur chaque client est assez faible, nous " -"devrions probablement entraîner le modèle un peu plus longtemps, nous " -"configurons donc les clients pour qu'ils effectuent 3 époques " -"d'entraînement local. Nous devrions également ajuster la fraction de " -"clients sélectionnés pour l'entraînement à chaque tour (nous ne voulons " -"pas que les 1000 clients participent à chaque tour), nous ajustons donc " -"``fraction_fit`` à ``0.05``, ce qui signifie que seulement 5% des clients" -" disponibles (donc 50 clients) seront sélectionnés pour l'entraînement à " -"chaque tour :" +"Mais voici la partie la plus surprenante : Flower n'utilise pas " +"directement l'objet `FlowerNumPyClient`. Au lieu de cela, il enveloppe " +"l'objet pour le faire ressembler à une sous-classe de " +"`flwr.client.Client`, et non de `flwr.client.NumPyClient`. En fait, le " +"noyau de Flower ne sait pas comment gérer les `NumPyClient`, il sait " +"seulement comment gérer les `Client`. 
`NumPyClient` est juste une " +"abstraction de commodité construite au dessus de `Client`." -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:699 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:345 msgid "" -"In this notebook, we've seen how we can gradually enhance our system by " -"customizing the strategy, initializing parameters on the server side, " -"choosing a different strategy, and evaluating models on the server-side. " -"That's quite a bit of flexibility with so little code, right?" +"Instead of building on top of ``NumPyClient``, we can directly build on " +"top of ``Client``." msgstr "" -"Dans ce carnet, nous avons vu comment nous pouvons progressivement " -"améliorer notre système en personnalisant la stratégie, en initialisant " -"les paramètres côté serveur, en choisissant une stratégie différente et " -"en évaluant les modèles côté serveur. C'est une sacrée flexibilité avec " -"si peu de code, n'est-ce pas ?" +"Au lieu de construire par-dessus `NumPyClient``, nous pouvons construire " +"directement par-dessus `Client``." -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:701 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:357 +msgid "Step 2: Moving from ``NumPyClient`` to ``Client``" +msgstr "Étape 2 : Passer de ``NumPyClient`` à ``Client``" + +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:359 msgid "" -"In the later sections, we've seen how we can communicate arbitrary values" -" between server and clients to fully customize client-side execution. " -"With that capability, we built a large-scale Federated Learning " -"simulation using the Flower Virtual Client Engine and ran an experiment " -"involving 1000 clients in the same workload - all in a Jupyter Notebook!" +"Let's try to do the same thing using ``Client`` instead of " +"``NumPyClient``." msgstr "" -"Dans les sections ultérieures, nous avons vu comment nous pouvons " -"communiquer des valeurs arbitraires entre le serveur et les clients pour " -"personnaliser entièrement l'exécution côté client. Grâce à cette " -"capacité, nous avons construit une simulation d'apprentissage fédéré à " -"grande échelle en utilisant le moteur de client virtuel Flower et nous " -"avons mené une expérience impliquant 1000 clients dans la même charge de " -"travail - le tout dans un carnet Jupyter !" +"Essayons de faire la même chose en utilisant ``Client`` au lieu de " +"``NumPyClient``." -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:719 -#, fuzzy +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:465 msgid "" -"The `Flower Federated Learning Tutorial - Part 3 " -"`__ shows how to build a fully custom ``Strategy`` from " -"scratch." +"Before we discuss the code in more detail, let's try to run it! Gotta " +"make sure our new ``Client``-based client works, right?" msgstr "" -"Le `Tutoriel d'apprentissage fédéré Flower - Partie 3 [WIP] " -"`__ montre comment construire une ``Stratégie`` entièrement " -"personnalisée à partir de zéro." - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:9 -msgid "What is Federated Learning?" -msgstr "Qu'est-ce que l'apprentissage fédéré ?" +"Avant de discuter du code plus en détail, essayons de l'exécuter ! Nous " +"devons nous assurer que notre nouveau client basé sur le ``Client`` " +"fonctionne, n'est-ce pas ?" 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:13 -#, fuzzy +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:490 msgid "" -"In this tutorial, you will learn what federated learning is, build your " -"first system in Flower, and gradually extend it. If you work through all " -"parts of the tutorial, you will be able to build advanced federated " -"learning systems that approach the current state of the art in the field." +"That's it, we're now using ``Client``. It probably looks similar to what " +"we've done with ``NumPyClient``. So what's the difference?" msgstr "" -"Dans ce tutoriel, tu apprendras ce qu'est l'apprentissage fédéré, tu " -"construiras ton premier système dans Flower, et tu l'étendras " -"progressivement. Si tu travailles sur toutes les parties du tutoriel, tu " -"seras capable de construire des systèmes d'apprentissage fédéré avancés " -"qui se rapprochent de l'état actuel de l'art dans le domaine." +"Voilà, nous utilisons maintenant ``Client``. Cela ressemble probablement " +"à ce que nous avons fait avec ``NumPyClient``. Alors quelle est la " +"différence ?" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:15 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:492 msgid "" -"🧑‍🏫 This tutorial starts at zero and expects no familiarity with " -"federated learning. Only a basic understanding of data science and Python" -" programming is assumed." +"First of all, it's more code. But why? The difference comes from the fact" +" that ``Client`` expects us to take care of parameter serialization and " +"deserialization. For Flower to be able to send parameters over the " +"network, it eventually needs to turn these parameters into ``bytes``. " +"Turning parameters (e.g., NumPy ``ndarray``'s) into raw bytes is called " +"serialization. Turning raw bytes into something more useful (like NumPy " +"``ndarray``'s) is called deserialization. Flower needs to do both: it " +"needs to serialize parameters on the server-side and send them to the " +"client, the client needs to deserialize them to use them for local " +"training, and then serialize the updated parameters again to send them " +"back to the server, which (finally!) deserializes them again in order to " +"aggregate them with the updates received from other clients." msgstr "" -"🧑‍🏫 Ce tutoriel part de zéro et n'attend aucune familiarité avec " -"l'apprentissage fédéré. Seule une compréhension de base de la science des" -" données et de la programmation Python est supposée." +"First of all, it's more code. But why? The difference comes from the fact" +" that ``Client`` expects us to take care of parameter serialization and " +"deserialization. For Flower to be able to send parameters over the " +"network, it eventually needs to turn these parameters into ``bytes``. " +"Turning parameters (e.g., NumPy ``ndarray``'s) into raw bytes is called " +"serialization. Turning raw bytes into something more useful (like NumPy " +"``ndarray``'s) is called deserialization. Flower needs to do both: it " +"needs to serialize parameters on the server-side and send them to the " +"client, the client needs to deserialize them to use them for local " +"training, and then serialize the updated parameters again to send them " +"back to the server, which (finally!) deserializes them again in order to " +"aggregate them with the updates received from other clients." 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:17 -#, fuzzy +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:495 msgid "" -"`Star Flower on GitHub `__ ⭐️ and join " -"the open-source Flower community on Slack to connect, ask questions, and " -"get help: `Join Slack `__ 🌼 We'd love to " -"hear from you in the ``#introductions`` channel! And if anything is " -"unclear, head over to the ``#questions`` channel." +"The only *real* difference between Client and NumPyClient is that " +"NumPyClient takes care of serialization and deserialization for you. It " +"can do so because it expects you to return parameters as NumPy ndarray's," +" and it knows how to handle these. This makes working with machine " +"learning libraries that have good NumPy support (most of them) a breeze." msgstr "" -"`Star Flower on GitHub `__ ⭐️ et " -"rejoignez la communauté Flower sur Slack pour vous connecter, poser des " -"questions et obtenir de l'aide : `Join Slack `__ 🌼 Nous serions ravis d'avoir de vos nouvelles dans le canal " -"``#introductions`` ! Et si quelque chose n'est pas clair, rendez-vous sur" -" le canal ``#questions``." +"La seule *vraie* différence entre Client et NumPyClient est que " +"NumPyClient s'occupe de la sérialisation et de la désérialisation pour " +"toi. Il peut le faire parce qu'il s'attend à ce que tu renvoies des " +"paramètres sous forme de NumPy ndarray, et il sait comment les gérer. " +"Cela permet de travailler avec des bibliothèques d'apprentissage " +"automatique qui ont une bonne prise en charge de NumPy (la plupart " +"d'entre elles) en un clin d'œil." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:19 -#, fuzzy -msgid "Let's get started!" -msgstr "Allons-y, déclarons-le !" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:497 +msgid "" +"In terms of API, there's one major difference: all methods in Client take" +" exactly one argument (e.g., ``FitIns`` in ``Client.fit``) and return " +"exactly one value (e.g., ``FitRes`` in ``Client.fit``). The methods in " +"``NumPyClient`` on the other hand have multiple arguments (e.g., " +"``parameters`` and ``config`` in ``NumPyClient.fit``) and multiple return" +" values (e.g., ``parameters``, ``num_example``, and ``metrics`` in " +"``NumPyClient.fit``) if there are multiple things to handle. These " +"``*Ins`` and ``*Res`` objects in ``Client`` wrap all the individual " +"values you're used to from ``NumPyClient``." +msgstr "" +"In terms of API, there's one major difference: all methods in Client take" +" exactly one argument (e.g., ``FitIns`` in ``Client.fit``) and return " +"exactly one value (e.g., ``FitRes`` in ``Client.fit``). The methods in " +"``NumPyClient`` on the other hand have multiple arguments (e.g., " +"``parameters`` and ``config`` in ``NumPyClient.fit``) and multiple return" +" values (e.g., ``parameters``, ``num_example``, and ``metrics`` in " +"``NumPyClient.fit``) if there are multiple things to handle. These " +"``*Ins`` and ``*Res`` objects in ``Client`` wrap all the individual " +"values you're used to from ``NumPyClient``." 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:31
-msgid "Classic machine learning"
-msgstr "Apprentissage automatique classique"
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:510
+msgid "Step 3: Custom serialization"
+msgstr "Étape 3 : Sérialisation personnalisée"

-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:33
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:512
 msgid ""
-"Before we begin to discuss federated learning, let us quickly recap how "
-"most machine learning works today."
+"Here we will explore how to implement custom serialization with a simple "
+"example."
 msgstr ""
-"Avant de commencer à discuter de l'apprentissage fédéré, récapitulons "
-"rapidement la façon dont la plupart des apprentissages automatiques "
-"fonctionnent aujourd'hui."
+"Nous allons ici explorer comment mettre en œuvre une sérialisation "
+"personnalisée à l'aide d'un exemple simple."

-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:35
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:514
 msgid ""
-"In machine learning, we have a model, and we have data. The model could "
-"be a neural network (as depicted here), or something else, like classical"
-" linear regression."
+"But first what is serialization? Serialization is just the process of "
+"converting an object into raw bytes, and equally as important, "
+"deserialization is the process of converting raw bytes back into an "
+"object. This is very useful for network communication. Indeed, without "
+"serialization, you could not just send a Python object through the internet."
 msgstr ""
-"Dans l'apprentissage automatique, nous avons un modèle et des données. Le"
-" modèle peut être un réseau neuronal (comme illustré ici), ou quelque "
-"chose d'autre, comme la régression linéaire classique."
+"Mais d'abord, qu'est-ce que la sérialisation ? La sérialisation est "
+"simplement le processus de conversion d'un objet en octets bruts, et tout"
+" aussi important, la désérialisation est le processus de reconversion des"
+" octets bruts en objet. Ceci est très utile pour la communication réseau."
+" En effet, sans la sérialisation, tu ne pourrais pas faire passer un "
+"objet Python par Internet."

-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:41
-msgid "|2b5c62c529f6416f840c594cce062fbb|"
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:516
+msgid ""
+"Federated Learning relies heavily on internet communication for training "
+"by sending Python objects back and forth between the clients and the "
+"server. This means that serialization is an essential part of Federated "
+"Learning."
 msgstr ""
+"L'apprentissage fédéré s'appuie fortement sur la communication Internet "
+"pour la formation en envoyant des objets Python dans les deux sens entre "
+"les clients et le serveur, ce qui signifie que la sérialisation est un "
+"élément essentiel de l'apprentissage fédéré."

-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:109
-msgid "Model and data"
-msgstr "Modèle et données"
-
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:47
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:518
 msgid ""
-"We train the model using the data to perform a useful task. A task could "
-"be to detect objects in images, transcribe an audio recording, or play a "
-"game like Go."
-msgstr ""
-"Nous entraînons le modèle en utilisant les données pour effectuer une "
-"tâche utile. 
Une tâche peut consister à détecter des objets dans des " -"images, à transcrire un enregistrement audio ou à jouer à un jeu comme le" -" Go." - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:53 -msgid "|90b334680cb7467d9a04d39b8e8dca9f|" +"In the following section, we will write a basic example where instead of " +"sending a serialized version of our ``ndarray``\\ s containing our " +"parameters, we will first convert the ``ndarray`` into sparse matrices, " +"before sending them. This technique can be used to save bandwidth, as in " +"certain cases where the weights of a model are sparse (containing many 0 " +"entries), converting them to a sparse matrix can greatly improve their " +"bytesize." msgstr "" +"Dans la section suivante, nous allons écrire un exemple de base où, au " +"lieu d'envoyer une version sérialisée de nos ``ndarray`` contenant nos " +"paramètres, nous allons d'abord convertir les ``ndarray`` en matrices " +"éparses, avant de les envoyer. Cette technique peut être utilisée pour " +"économiser de la bande passante, car dans certains cas où les poids d'un " +"modèle sont épars (contenant de nombreuses entrées 0), les convertir en " +"une matrice éparse peut grandement améliorer leur taille en octets." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:111 -msgid "Train model using data" -msgstr "Entraîne le modèle à l'aide des données" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:521 +msgid "Our custom serialization/deserialization functions" +msgstr "Nos fonctions de sérialisation/désérialisation personnalisées" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:59 -#, fuzzy +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:523 msgid "" -"Now, in practice, the training data we work with doesn't originate on the" -" machine we train the model on. It gets created somewhere else." +"This is where the real serialization/deserialization will happen, " +"especially in ``ndarray_to_sparse_bytes`` for serialization and " +"``sparse_bytes_to_ndarray`` for deserialization." msgstr "" -"Dans la pratique, les données d'entraînement avec lesquelles nous " -"travaillons ne proviennent pas de la machine sur laquelle nous entraînons" -" le modèle. Elles sont créées ailleurs." +"C'est là que la véritable sérialisation/désérialisation se produira, en " +"particulier dans ``ndarray_to_sparse_bytes`` pour la sérialisation et " +"``sparse_bytes_to_ndarray`` pour la désérialisation." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:61 -#, fuzzy +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:525 msgid "" -"It originates on a smartphone by the user interacting with an app, a car " -"collecting sensor data, a laptop receiving input via the keyboard, or a " -"smart speaker listening to someone trying to sing a song." -msgstr "" -"Elle prend naissance sur un smartphone par l'interaction de l'utilisateur" -" avec une application, une voiture qui collecte des données de capteurs, " -"un ordinateur portable qui reçoit des entrées via le clavier, ou un haut-" -"parleur intelligent qui écoute quelqu'un qui essaie de chanter une " -"chanson." - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:67 -msgid "|65764ceee89f4335bfd93fd0b115e831|" +"Note that we imported the ``scipy.sparse`` library in order to convert " +"our arrays." msgstr "" +"Notez que nous avons importé la bibliothèque ``scipy.sparse`` afin de " +"convertir nos tableaux." 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:113 -msgid "Data on a phone" -msgstr "Données sur un téléphone" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:613 +msgid "Client-side" +msgstr "Côté client" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:73 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:615 msgid "" -"What's also important to mention, this \"somewhere else\" is usually not " -"just one place, it's many places. It could be several devices all running" -" the same app. But it could also be several organizations, all generating" -" data for the same task." +"To be able to serialize our ``ndarray``\\ s into sparse parameters, we " +"will just have to call our custom functions in our " +"``flwr.client.Client``." msgstr "" -"Il est également important de mentionner que cet \"ailleurs\" n'est " -"généralement pas un seul endroit, mais plusieurs. Il peut s'agir de " -"plusieurs appareils fonctionnant tous avec la même application. Mais il " -"peut également s'agir de plusieurs organisations, qui génèrent toutes des" -" données pour la même tâche." +"Pour pouvoir sérialiser nos ``ndarray`` en paramètres sparse, il nous " +"suffira d'appeler nos fonctions personnalisées dans notre " +"``flwr.client.Client``." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:79 -msgid "|d97319ec28bb407ea0ab9705e38f3bcf|" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:617 +msgid "" +"Indeed, in ``get_parameters`` we need to serialize the parameters we got " +"from our network using our custom ``ndarrays_to_sparse_parameters`` " +"defined above." msgstr "" +"En effet, dans ``get_parameters`` nous devons sérialiser les paramètres " +"que nous avons obtenus de notre réseau en utilisant nos " +"``ndarrays_to_sparse_parameters`` personnalisés définis ci-dessus." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:115 -msgid "Data is on many devices" -msgstr "Les données se trouvent sur de nombreux appareils" - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:85 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:619 msgid "" -"So to use machine learning, or any kind of data analysis, the approach " -"that has been used in the past was to collect all data on a central " -"server. This server can be somewhere in a data center, or somewhere in " -"the cloud." +"In ``fit``, we first need to deserialize the parameters coming from the " +"server using our custom ``sparse_parameters_to_ndarrays`` and then we " +"need to serialize our local results with " +"``ndarrays_to_sparse_parameters``." msgstr "" -"Ainsi, pour utiliser l'apprentissage automatique, ou tout autre type " -"d'analyse de données, l'approche utilisée par le passé consistait à " -"collecter toutes les données sur un serveur central. Ce serveur peut se " -"trouver quelque part dans un centre de données, ou quelque part dans le " -"cloud." +"Dans ``fit``, nous devons d'abord désérialiser les paramètres provenant " +"du serveur en utilisant notre ``sparse_parameters_to_ndarrays`` " +"personnalisé, puis nous devons sérialiser nos résultats locaux avec " +"``ndarrays_to_sparse_parameters``." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:91 -msgid "|11e95ac83a8548d8b3505b4663187d07|" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:621 +msgid "" +"In ``evaluate``, we will only need to deserialize the global parameters " +"with our custom function." 
msgstr "" +"Dans ``evaluate``, nous n'aurons besoin que de désérialiser les " +"paramètres globaux avec notre fonction personnalisée." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:117 -msgid "Central data collection" -msgstr "Collecte centralisée des données" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:725 +msgid "Server-side" +msgstr "Côté serveur" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:97 -#, fuzzy +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:727 msgid "" -"Once all the data is collected in one place, we can finally use machine " -"learning algorithms to train our model on the data. This is the machine " -"learning approach that we've basically always relied on." +"For this example, we will just use ``FedAvg`` as a strategy. To change " +"the serialization and deserialization here, we only need to reimplement " +"the ``evaluate`` and ``aggregate_fit`` functions of ``FedAvg``. The other" +" functions of the strategy will be inherited from the super class " +"``FedAvg``." msgstr "" -"Une fois que toutes les données sont rassemblées en un seul endroit, nous" -" pouvons enfin utiliser des algorithmes d'apprentissage automatique pour " -"entraîner notre modèle sur les données. C'est l'approche d'apprentissage " -"automatique sur laquelle nous nous sommes fondamentalement toujours " -"appuyés." +"Pour cet exemple, nous utiliserons simplement ``FedAvg`` comme stratégie." +" Pour modifier la sérialisation et la désérialisation ici, il suffit de " +"réimplémenter les fonctions ``evaluate`` et ``aggregate_fit`` de " +"``FedAvg``. Les autres fonctions de la stratégie seront héritées de la " +"super-classe ``FedAvg``." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:103 -msgid "|1dab2f3a23674abc8a6731f20fa10730|" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:729 +msgid "As you can see only one line as change in ``evaluate``:" +msgstr "Comme tu peux le voir, seule une ligne a été modifiée dans ``evaluate`` :" + +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:735 +msgid "" +"And for ``aggregate_fit``, we will first deserialize every result we " +"received:" msgstr "" +"Et pour ``aggregate_fit``, nous allons d'abord désérialiser chaque " +"résultat que nous avons reçu :" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:119 -msgid "Central model training" -msgstr "Formation au modèle central" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:744 +msgid "And then serialize the aggregated result:" +msgstr "Puis sérialise le résultat agrégé :" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:130 -msgid "Challenges of classical machine learning" -msgstr "Les défis de l'apprentissage automatique classique" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:903 +msgid "We can now run our custom serialization example!" +msgstr "" +"Nous pouvons maintenant exécuter notre exemple de sérialisation " +"personnalisée !" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:132 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:934 msgid "" -"The classic machine learning approach we've just seen can be used in some" -" cases. Great examples include categorizing holiday photos, or analyzing " -"web traffic. Cases, where all the data is naturally available on a " -"centralized server." 
+"In this part of the tutorial, we've seen how we can build clients by " +"subclassing either ``NumPyClient`` or ``Client``. ``NumPyClient`` is a " +"convenience abstraction that makes it easier to work with machine " +"learning libraries that have good NumPy interoperability. ``Client`` is a" +" more flexible abstraction that allows us to do things that are not " +"possible in ``NumPyClient``. In order to do so, it requires us to handle " +"parameter serialization and deserialization ourselves." msgstr "" -"L'approche classique de l'apprentissage automatique que nous venons de " -"voir peut être utilisée dans certains cas. Parmi les grands exemples, on " -"peut citer la catégorisation des photos de vacances, ou l'analyse du " -"trafic web. Des cas, où toutes les données sont naturellement disponibles" -" sur un serveur centralisé." +"Dans cette partie du tutoriel, nous avons vu comment construire des " +"clients en sous-classant soit ``NumPyClient``, soit ``Client``. " +"``NumPyClient`` est une abstraction de commodité qui facilite le travail " +"avec les bibliothèques d'apprentissage automatique qui ont une bonne " +"interopérabilité NumPy. ``Client`` est une abstraction plus flexible qui " +"nous permet de faire des choses qui ne sont pas possibles dans " +"``NumPyClient``. Pour ce faire, elle nous oblige à gérer nous-mêmes la " +"sérialisation et la désérialisation des paramètres." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:138 -msgid "|7f0ee162da38450788493a21627306f7|" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:952 +msgid "" +"This is the final part of the Flower tutorial (for now!), " +"congratulations! You're now well equipped to understand the rest of the " +"documentation. There are many topics we didn't cover in the tutorial, we " +"recommend the following resources:" msgstr "" +"C'est la dernière partie du tutoriel Flower (pour l'instant !), " +"félicitations ! Tu es maintenant bien équipé pour comprendre le reste de " +"la documentation. Il y a de nombreux sujets que nous n'avons pas abordés " +"dans le tutoriel, nous te recommandons les ressources suivantes :" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:173 -msgid "Centralized possible" -msgstr "Possibilité de centralisation" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:954 +msgid "`Read Flower Docs `__" +msgstr "`Lire les docs sur les fleurs `__" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:144 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:955 msgid "" -"But the approach can not be used in many other cases. Cases, where the " -"data is not available on a centralized server, or cases where the data " -"available on one server is not enough to train a good model." -msgstr "" -"Mais cette approche ne peut pas être utilisée dans de nombreux autres cas" -" : lorsque les données ne sont pas disponibles sur un serveur centralisé," -" ou lorsque les données disponibles sur un serveur ne sont pas " -"suffisantes pour former un bon modèle." 
- -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:150 -msgid "|296a1fb72c514b23b3d8905ff0ff98c6|" +"`Check out Flower Code Examples " +"`__" msgstr "" +"`Check out Flower Code Examples " +"`__" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:175 -msgid "Centralized impossible" -msgstr "Impossible de centraliser" - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:156 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:956 #, fuzzy msgid "" -"There are many reasons why the classic centralized machine learning " -"approach does not work for a large number of highly important real-world " -"use cases. Those reasons include:" +"`Use Flower Baselines for your research " +"`__" msgstr "" -"Il existe de nombreuses raisons pour lesquelles l'approche classique " -"centralisée de l'apprentissage automatique ne fonctionne pas pour un " -"grand nombre de cas d'utilisation très importants dans le monde réel, " -"notamment :" +"`Utilise les lignes de base des fleurs pour ta recherche " +"`__" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:158 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:957 #, fuzzy msgid "" -"**Regulations**: GDPR (Europe), CCPA (California), PIPEDA (Canada), LGPD " -"(Brazil), PDPL (Argentina), KVKK (Turkey), POPI (South Africa), FSS " -"(Russia), CDPR (China), PDPB (India), PIPA (Korea), APPI (Japan), PDP " -"(Indonesia), PDPA (Singapore), APP (Australia), and other regulations " -"protect sensitive data from being moved. In fact, those regulations " -"sometimes even prevent single organizations from combining their own " -"users' data for artificial intelligence training because those users live" -" in different parts of the world, and their data is governed by different" -" data protection regulations." +"`Watch Flower Summit 2023 videos `__" msgstr "" -"**Réglementations** : GDPR (Europe), CCPA (Californie), PIPEDA (Canada), " -"LGPD (Brésil), PDPL (Argentine), KVKK (Turquie), POPI (Afrique du Sud), " -"FSS (Russie), CDPR (Chine), PDPB (Inde), PIPA (Corée), APPI (Japon), PDP " -"(Indonésie), PDPA (Singapour), APP (Australie), et d'autres " -"réglementations protègent les données sensibles contre le déplacement. En" -" fait, ces réglementations empêchent même parfois des organisations " -"individuelles de combiner les données de leurs propres utilisateurs pour " -"la formation à l'intelligence artificielle parce que ces utilisateurs " -"vivent dans différentes parties du monde, et que leurs données sont " -"régies par des réglementations différentes en matière de protection des " -"données." +"`Regardez les vidéos du Flower Summit 2022 `__" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:160 -msgid "" -"**User preference**: In addition to regulation, there are use cases where" -" users just expect that no data leaves their device, ever. If you type " -"your passwords and credit card info into the digital keyboard of your " -"phone, you don't expect those passwords to end up on the server of the " -"company that developed that keyboard, do you? In fact, that use case was " -"the reason federated learning was invented in the first place." +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:9 +msgid "Get started with Flower" msgstr "" -"**Préférence de l'utilisateur** : En plus de la réglementation, il existe" -" des cas d'utilisation où les utilisateurs s'attendent tout simplement à " -"ce qu'aucune donnée ne quitte leur appareil, jamais. 
Si tu tapes tes mots" -" de passe et tes informations de carte de crédit sur le clavier numérique" -" de ton téléphone, tu ne t'attends pas à ce que ces mots de passe " -"finissent sur le serveur de l'entreprise qui a développé ce clavier, n" -"'est-ce pas ? En fait, ce cas d'utilisation est la raison pour laquelle " -"l'apprentissage fédéré a été inventé en premier lieu." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:161 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:11 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:11 +msgid "Welcome to the Flower federated learning tutorial!" +msgstr "Bienvenue au tutoriel sur l'apprentissage fédéré de la fleur !" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:13 #, fuzzy msgid "" -"**Data volume**: Some sensors, like cameras, produce such a high data " -"volume that it is neither feasible nor economic to collect all the data " -"(due to, for example, bandwidth or communication efficiency). Think about" -" a national rail service with hundreds of train stations across the " -"country. If each of these train stations is outfitted with a number of " -"security cameras, the volume of raw on-device data they produce requires " -"incredibly powerful and exceedingly expensive infrastructure to process " -"and store. And most of the data isn't even useful." -msgstr "" -"**volume de données** : certains capteurs, comme les caméras, produisent " -"un volume de données si important qu'il n'est ni possible ni économique " -"de collecter toutes les données (en raison, par exemple, de la bande " -"passante ou de l'efficacité des communications). Pensez à un service " -"ferroviaire national comptant des centaines de gares à travers le pays. " -"Si chacune de ces gares est équipée d'un certain nombre de caméras de " -"sécurité, le volume de données brutes sur les appareils qu'elles " -"produisent nécessite une infrastructure incroyablement puissante et " -"excessivement coûteuse pour les traiter et les stocker. Et la plupart de " -"ces données ne sont même pas utiles." - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:164 -msgid "Examples where centralized machine learning does not work include:" +"In this notebook, we'll build a federated learning system using Flower, " +"`Flower Datasets `__ and PyTorch. In " +"part 1, we use PyTorch for the model training pipeline and data loading. " +"In part 2, we continue to federate the PyTorch-based pipeline using " +"Flower." msgstr "" -"Voici quelques exemples où l'apprentissage automatique centralisé ne " -"fonctionne pas :" +"Dans ce carnet, nous allons construire un système d'apprentissage fédéré " +"en utilisant Flower et PyTorch. Dans la première partie, nous utilisons " +"PyTorch pour le pipeline d'entraînement des modèles et le chargement des " +"données. Dans la deuxième partie, nous continuons à fédérer le pipeline " +"basé sur PyTorch en utilisant Flower." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:166 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:17 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:19 #, fuzzy +msgid "Let's get started!" +msgstr "Allons-y, déclarons-le !" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:31 msgid "" -"Sensitive healthcare records from multiple hospitals to train cancer " -"detection models" +"Before we begin with any actual code, let's make sure that we have " +"everything we need." 
msgstr "" -"Des dossiers médicaux sensibles provenant de plusieurs hôpitaux pour " -"former des modèles de détection du cancer" +"Avant de commencer à coder, assurons-nous que nous disposons de tout ce " +"dont nous avons besoin." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:167 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:45 +#, fuzzy msgid "" -"Financial information from different organizations to detect financial " -"fraud" +"Next, we install the necessary packages for PyTorch (``torch`` and " +"``torchvision``), Flower Datasets (``flwr-datasets``) and Flower " +"(``flwr``):" msgstr "" -"Informations financières provenant de différentes organisations pour " -"détecter les fraudes financières" +"Ensuite, nous installons les paquets nécessaires pour PyTorch (``torch`` " +"et ``torchvision``) et Flower (``flwr``) :" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:168 -msgid "Location data from your electric car to make better range prediction" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:105 +#, fuzzy +msgid "" +"It is possible to switch to a runtime that has GPU acceleration enabled " +"(on Google Colab: ``Runtime > Change runtime type > Hardware accelerator:" +" GPU > Save``). Note, however, that Google Colab is not always able to " +"offer GPU acceleration. If you see an error related to GPU availability " +"in one of the following sections, consider switching back to CPU-based " +"execution by setting ``DEVICE = torch.device(\"cpu\")``. If the runtime " +"has GPU acceleration enabled, you should see the output ``Training on " +"cuda``, otherwise it'll say ``Training on cpu``." msgstr "" -"Les données de localisation de ta voiture électrique pour mieux prédire " -"l'autonomie" +"Il est possible de passer à un runtime dont l'accélération GPU est " +"activée (sur Google Colab : ``Runtime > Change runtime type > Hardware " +"acclerator : GPU > Save``). Note cependant que Google Colab n'est pas " +"toujours en mesure de proposer l'accélération GPU. Si tu vois une erreur " +"liée à la disponibilité du GPU dans l'une des sections suivantes, " +"envisage de repasser à une exécution basée sur le CPU en définissant " +"``DEVICE = torch.device(\"cpu\")``. Si le runtime a activé l'accélération" +" GPU, tu devrais voir apparaître le résultat ``Training on cuda``, sinon " +"il dira ``Training on cpu``." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:169 -msgid "End-to-end encrypted messages to train better auto-complete models" -msgstr "" -"Messages cryptés de bout en bout pour former de meilleurs modèles " -"d'autocomplétion" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:118 +msgid "Loading the data" +msgstr "Chargement des données" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:171 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:120 #, fuzzy msgid "" -"The popularity of privacy-enhancing systems like the `Brave " -"`__ browser or the `Signal `__ " -"messenger shows that users care about privacy. In fact, they choose the " -"privacy-enhancing version over other alternatives, if such an alternative" -" exists. But what can we do to apply machine learning and data science to" -" these cases to utilize private data? After all, these are all areas that" -" would benefit significantly from recent advances in AI." +"Federated learning can be applied to many different types of tasks across" +" different domains. 
In this tutorial, we introduce federated learning by " +"training a simple convolutional neural network (CNN) on the popular " +"CIFAR-10 dataset. CIFAR-10 can be used to train image classifiers that " +"distinguish between images from ten different classes: 'airplane', " +"'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', and " +"'truck'." msgstr "" -"La popularité des systèmes améliorant la confidentialité comme le " -"navigateur `Brave `__ ou le messager `Signal " -"`__ montre que les utilisateurs se soucient de la " -"confidentialité. En fait, ils choisissent la version améliorant la " -"confidentialité plutôt que d'autres alternatives, si une telle " -"alternative existe. Mais que pouvons-nous faire pour appliquer " -"l'apprentissage automatique et la science des données à ces cas afin " -"d'utiliser les données privées ? Après tout, ce sont tous des domaines " -"qui bénéficieraient de manière significative des récentes avancées en " -"matière d'IA." - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:186 -msgid "Federated learning" -msgstr "Apprentissage fédéré" +"L'apprentissage fédéré peut être appliqué à de nombreux types de tâches " +"dans différents domaines. Dans ce tutoriel, nous présentons " +"l'apprentissage fédéré en formant un simple réseau neuronal " +"convolutionnel (CNN) sur l'ensemble de données populaire CIFAR-10. " +"CIFAR-10 peut être utilisé pour former des classificateurs d'images qui " +"font la distinction entre les images de dix classes différentes :" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:188 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:131 msgid "" -"Federated learning simply reverses this approach. It enables machine " -"learning on distributed data by moving the training to the data, instead " -"of moving the data to the training. Here's the single-sentence " -"explanation:" +"We simulate having multiple datasets from multiple organizations (also " +"called the \"cross-silo\" setting in federated learning) by splitting the" +" original CIFAR-10 dataset into multiple partitions. Each partition will " +"represent the data from a single organization. We're doing this purely " +"for experimentation purposes, in the real world there's no need for data " +"splitting because each organization already has their own data (so the " +"data is naturally partitioned)." msgstr "" -"L'apprentissage fédéré inverse simplement cette approche. Il permet " -"l'apprentissage automatique sur des données distribuées en déplaçant la " -"formation vers les données, au lieu de déplacer les données vers la " -"formation. Voici l'explication en une seule phrase :" - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:190 -msgid "Central machine learning: move the data to the computation" -msgstr "Apprentissage automatique central : déplace les données vers le calcul" - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:191 -msgid "Federated (machine) learning: move the computation to the data" -msgstr "Apprentissage (machine) fédéré : déplacer le calcul vers les données" +"Nous simulons le fait d'avoir plusieurs ensembles de données provenant de" +" plusieurs organisations (également appelé le paramètre \"cross-silo\" " +"dans l'apprentissage fédéré) en divisant l'ensemble de données CIFAR-10 " +"original en plusieurs partitions. Chaque partition représentera les " +"données d'une seule organisation. 
Nous faisons cela purement à des fins " +"d'expérimentation, dans le monde réel, il n'y a pas besoin de diviser les" +" données parce que chaque organisation a déjà ses propres données (les " +"données sont donc naturellement partitionnées)." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:193 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:133 +#, fuzzy msgid "" -"By doing so, it enables us to use machine learning (and other data " -"science approaches) in areas where it wasn't possible before. We can now " -"train excellent medical AI models by enabling different hospitals to work" -" together. We can solve financial fraud by training AI models on the data" -" of different financial institutions. We can build novel privacy-" -"enhancing applications (such as secure messaging) that have better built-" -"in AI than their non-privacy-enhancing alternatives. And those are just a" -" few of the examples that come to mind. As we deploy federated learning, " -"we discover more and more areas that can suddenly be reinvented because " -"they now have access to vast amounts of previously inaccessible data." +"Each organization will act as a client in the federated learning system. " +"So having ten organizations participate in a federation means having ten " +"clients connected to the federated learning server." msgstr "" -"Ce faisant, il nous permet d'utiliser l'apprentissage automatique (et " -"d'autres approches de science des données) dans des domaines où cela " -"n'était pas possible auparavant. Nous pouvons désormais former " -"d'excellents modèles d'IA médicale en permettant à différents hôpitaux de" -" travailler ensemble. Nous pouvons résoudre les fraudes financières en " -"formant des modèles d'IA sur les données de différentes institutions " -"financières. Nous pouvons créer de nouvelles applications d'amélioration " -"de la confidentialité (telles que la messagerie sécurisée) qui ont une " -"meilleure IA intégrée que leurs alternatives d'amélioration de la " -"confidentialité. Et ce ne sont là que quelques exemples qui me viennent à" -" l'esprit. Au fur et à mesure que nous déployons l'apprentissage fédéré, " -"nous découvrons de plus en plus de domaines qui peuvent soudainement être" -" réinventés parce qu'ils ont maintenant accès à de vastes quantités de " -"données auparavant inaccessibles." +"Chaque organisation agira comme un client dans le système d'apprentissage" +" fédéré. Ainsi, le fait que dix organisations participent à une " +"fédération signifie que dix clients sont connectés au serveur " +"d'apprentissage fédéré :" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:196 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:144 msgid "" -"So how does federated learning work, exactly? Let's start with an " -"intuitive explanation." +"Let's now create the Federated Dataset abstraction that from ``flwr-" +"datasets`` that partitions the CIFAR-10. We will create small training " +"and test set for each edge device and wrap each of them into a PyTorch " +"``DataLoader``:" msgstr "" -"Comment fonctionne l'apprentissage fédéré ? Commençons par une " -"explication intuitive." 
- -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:199 -msgid "Federated learning in five steps" -msgstr "L'apprentissage fédéré en cinq étapes" - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:202 -msgid "Step 0: Initialize global model" -msgstr "Étape 0 : Initialisation du modèle global" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:204 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:198 +#, fuzzy msgid "" -"We start by initializing the model on the server. This is exactly the " -"same in classic centralized learning: we initialize the model parameters," -" either randomly or from a previously saved checkpoint." -msgstr "" -"Nous commençons par initialiser le modèle sur le serveur. C'est " -"exactement la même chose dans l'apprentissage centralisé classique : nous" -" initialisons les paramètres du modèle, soit de façon aléatoire, soit à " -"partir d'un point de contrôle précédemment sauvegardé." - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:210 -msgid "|5b1408eec0d746cdb91162a9107b6089|" +"We now have a list of ten training sets and ten validation sets " +"(``trainloaders`` and ``valloaders``) representing the data of ten " +"different organizations. Each ``trainloader``/``valloader`` pair contains" +" 4000 training examples and 1000 validation examples. There's also a " +"single ``testloader`` (we did not split the test set). Again, this is " +"only necessary for building research or educational systems, actual " +"federated learning systems have their data naturally distributed across " +"multiple partitions." msgstr "" +"Nous avons maintenant une liste de dix ensembles de formation et dix " +"ensembles de validation (``trainloaders`` et ``valloaders``) représentant" +" les données de dix organisations différentes. Chaque paire " +"``trainloader`/``valloader`` contient 4500 exemples de formation et 500 " +"exemples de validation. Il y a également un seul ``testloader`` (nous " +"n'avons pas divisé l'ensemble de test). Encore une fois, cela n'est " +"nécessaire que pour construire des systèmes de recherche ou d'éducation, " +"les systèmes d'apprentissage fédérés actuels ont leurs données " +"naturellement distribuées à travers plusieurs partitions." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:307 -msgid "Initialize global model" -msgstr "Initialise le modèle global" - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:217 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:201 msgid "" -"Step 1: Send model to a number of connected organizations/devices (client" -" nodes)" +"Let's take a look at the first batch of images and labels in the first " +"training set (i.e., ``trainloaders[0]``) before we move on:" msgstr "" -"Étape 1 : envoyer le modèle à un certain nombre d'organisations/appareils" -" connectés (nœuds clients)" +"Jetons un coup d'œil au premier lot d'images et d'étiquettes du premier " +"ensemble d'entraînement (c'est-à-dire ``trainloaders[0]``) avant de " +"poursuivre :" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:219 -#, fuzzy +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:240 msgid "" -"Next, we send the parameters of the global model to the connected client " -"nodes (think: edge devices like smartphones or servers belonging to " -"organizations). This is to ensure that each participating node starts " -"their local training using the same model parameters. 
We often use only a" -" few of the connected nodes instead of all nodes. The reason for this is " -"that selecting more and more client nodes has diminishing returns." -msgstr "" -"Ensuite, nous envoyons les paramètres du modèle global aux nœuds clients " -"connectés (par exemple, les appareils périphériques comme les smartphones" -" ou les serveurs appartenant à des organisations). Cela permet de " -"s'assurer que chaque nœud participant commence sa formation locale en " -"utilisant les mêmes paramètres de modèle. Nous n'utilisons souvent que " -"quelques-uns des nœuds connectés au lieu de tous les nœuds. La raison en " -"est que la sélection d'un nombre croissant de nœuds clients a des " -"rendements décroissants." - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:225 -msgid "|aef19f4b122c4e8d9f4c57f99bcd5dd2|" +"The output above shows a random batch of images from the first " +"``trainloader`` in our list of ten ``trainloaders``. It also prints the " +"labels associated with each image (i.e., one of the ten possible labels " +"we've seen above). If you run the cell again, you should see another " +"batch of images." msgstr "" +"La sortie ci-dessus montre un lot aléatoire d'images provenant du premier" +" ``chargeur de formation`` de notre liste de dix ``chargeurs de " +"formation``. Elle imprime également les étiquettes associées à chaque " +"image (c'est-à-dire l'une des dix étiquettes possibles que nous avons " +"vues ci-dessus). Si tu exécutes à nouveau la cellule, tu devrais voir un " +"autre lot d'images." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:309 -msgid "Send global model" -msgstr "Envoyer le modèle global" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:252 +msgid "Step 1: Centralized Training with PyTorch" +msgstr "Étape 1 : Formation centralisée avec PyTorch" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:232 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:263 msgid "" -"Step 2: Train model locally on the data of each organization/device " -"(client node)" +"Next, we're going to use PyTorch to define a simple convolutional neural " +"network. This introduction assumes basic familiarity with PyTorch, so it " +"doesn't cover the PyTorch-related aspects in full detail. If you want to " +"dive deeper into PyTorch, we recommend `DEEP LEARNING WITH PYTORCH: A 60 " +"MINUTE BLITZ " +"`__." msgstr "" -"Étape 2 : Entraîne le modèle localement sur les données de chaque " -"organisation/appareil (nœud client)" +"Ensuite, nous allons utiliser PyTorch pour définir un simple réseau " +"neuronal convolutif. Cette introduction suppose une familiarité de base " +"avec PyTorch, elle ne couvre donc pas en détail les aspects liés à " +"PyTorch. Si tu veux plonger plus profondément dans PyTorch, nous te " +"recommandons `DEEP LEARNING WITH PYTORCH : A 60 MINUTE BLITZ " +"`__." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:234 -msgid "" -"Now that all (selected) client nodes have the latest version of the " -"global model parameters, they start the local training. They use their " -"own local dataset to train their own local model. They don't train the " -"model until full convergence, but they only train for a little while. " -"This could be as little as one epoch on the local data, or even just a " -"few steps (mini-batches)." 
-msgstr "" -"Maintenant que tous les nœuds clients (sélectionnés) disposent de la " -"dernière version des paramètres du modèle global, ils commencent " -"l'entraînement local. Ils utilisent leur propre ensemble de données " -"locales pour entraîner leur propre modèle local. Ils n'entraînent pas le " -"modèle jusqu'à la convergence totale, mais ils ne s'entraînent que " -"pendant un petit moment. Il peut s'agir d'une seule époque sur les " -"données locales, ou même de quelques étapes (mini-batchs)." +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:275 +msgid "Defining the model" +msgstr "Définir le modèle" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:240 -msgid "|2881a86d8fc54ba29d96b29fc2819f4a|" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:277 +msgid "" +"We use the simple CNN described in the `PyTorch tutorial " +"`__:" msgstr "" +"Nous utilisons le CNN simple décrit dans le tutoriel `PyTorch " +"`__ :" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:311 -msgid "Train on local data" -msgstr "Forme-toi aux données locales" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:314 +msgid "Let's continue with the usual training and test functions:" +msgstr "Poursuivons avec les fonctions habituelles de formation et de test :" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:247 -msgid "Step 3: Return model updates back to the server" -msgstr "Étape 3 : Renvoyer les mises à jour du modèle au serveur" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:374 +msgid "Training the model" +msgstr "Entraîne le modèle" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:249 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:376 msgid "" -"After local training, each client node has a slightly different version " -"of the model parameters they originally received. The parameters are all " -"different because each client node has different examples in its local " -"dataset. The client nodes then send those model updates back to the " -"server. The model updates they send can either be the full model " -"parameters or just the gradients that were accumulated during local " -"training." +"We now have all the basic building blocks we need: a dataset, a model, a " +"training function, and a test function. Let's put them together to train " +"the model on the dataset of one of our organizations " +"(``trainloaders[0]``). This simulates the reality of most machine " +"learning projects today: each organization has their own data and trains " +"models only on this internal data:" msgstr "" -"Après l'entraînement local, chaque nœud client possède une version " -"légèrement différente des paramètres du modèle qu'il a reçus à l'origine." -" Les paramètres sont tous différents parce que chaque nœud client a des " -"exemples différents dans son ensemble de données local. Les nœuds clients" -" renvoient ensuite ces mises à jour du modèle au serveur. Les mises à " -"jour du modèle qu'ils envoient peuvent être soit les paramètres complets " -"du modèle, soit seulement les gradients qui ont été accumulés au cours de" -" l'entraînement local." +"Nous avons maintenant tous les éléments de base dont nous avons besoin : " +"un ensemble de données, un modèle, une fonction d'entraînement et une " +"fonction de test. Assemblons-les pour entraîner le modèle sur l'ensemble " +"de données de l'une de nos organisations (``trainloaders[0]``). 
Cela " +"simule la réalité de la plupart des projets d'apprentissage automatique " +"aujourd'hui : chaque organisation possède ses propres données et entraîne" +" les modèles uniquement sur ces données internes :" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:255 -msgid "|ec1fe880237247e0975f52766775ab84|" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:406 +msgid "" +"Training the simple CNN on our CIFAR-10 split for 5 epochs should result " +"in a test set accuracy of about 41%, which is not good, but at the same " +"time, it doesn't really matter for the purposes of this tutorial. The " +"intent was just to show a simplistic centralized training pipeline that " +"sets the stage for what comes next - federated learning!" msgstr "" +"L'entraînement du CNN simple sur notre fractionnement CIFAR-10 pendant 5 " +"époques devrait se traduire par une précision de l'ensemble de test " +"d'environ 41 %, ce qui n'est pas bon, mais en même temps, cela n'a pas " +"vraiment d'importance pour les besoins de ce tutoriel. L'intention était " +"juste de montrer un pipeline d'entraînement centralisé simpliste qui " +"prépare le terrain pour ce qui vient ensuite - l'apprentissage fédéré !" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:313 -msgid "Send model updates" -msgstr "Envoyer les mises à jour du modèle" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:418 +msgid "Step 2: Federated Learning with Flower" +msgstr "Étape 2 : Apprentissage fédéré avec Flower" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:262 -msgid "Step 4: Aggregate model updates into a new global model" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:420 +msgid "" +"Step 1 demonstrated a simple centralized training pipeline. All data was " +"in one place (i.e., a single ``trainloader`` and a single ``valloader``)." +" Next, we'll simulate a situation where we have multiple datasets in " +"multiple organizations and where we train a model over these " +"organizations using federated learning." msgstr "" -"Étape 4 : Agréger les mises à jour des modèles dans un nouveau modèle " -"global" +"L'étape 1 a montré un simple pipeline de formation centralisé. Toutes les" +" données étaient au même endroit (c'est-à-dire un seul ``trainloader`` et" +" un seul ``valloader``). Ensuite, nous allons simuler une situation où " +"nous avons plusieurs ensembles de données dans plusieurs organisations et" +" où nous formons un modèle sur ces organisations à l'aide de " +"l'apprentissage fédéré." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:264 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:432 +msgid "Updating model parameters" +msgstr "Mise à jour des paramètres du modèle" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:434 msgid "" -"The server receives model updates from the selected client nodes. If it " -"selected 100 client nodes, it now has 100 slightly different versions of " -"the original global model, each trained on the local data of one client. " -"But didn't we want to have one model that contains the learnings from the" -" data of all 100 client nodes?" +"In federated learning, the server sends the global model parameters to " +"the client, and the client updates the local model with the parameters " +"received from the server. 
It then trains the model on the local data " +"(which changes the model parameters locally) and sends the " +"updated/changed model parameters back to the server (or, alternatively, " +"it sends just the gradients back to the server, not the full model " +"parameters)." msgstr "" -"Le serveur reçoit les mises à jour du modèle des nœuds clients " -"sélectionnés. S'il a sélectionné 100 nœuds clients, il dispose maintenant" -" de 100 versions légèrement différentes du modèle global original, " -"chacune ayant été formée sur les données locales d'un client. Mais ne " -"voulions-nous pas avoir un seul modèle qui contienne les apprentissages " -"des données de l'ensemble des 100 nœuds clients ?" +"Dans l'apprentissage fédéré, le serveur envoie les paramètres du modèle " +"global au client, et le client met à jour le modèle local avec les " +"paramètres reçus du serveur. Il entraîne ensuite le modèle sur les " +"données locales (ce qui modifie les paramètres du modèle localement) et " +"renvoie les paramètres du modèle mis à jour/changés au serveur (ou, " +"alternativement, il renvoie seulement les gradients au serveur, et non " +"pas les paramètres complets du modèle)." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:266 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:436 msgid "" -"In order to get one single model, we have to combine all the model " -"updates we received from the client nodes. This process is called " -"*aggregation*, and there are many different ways to do it. The most basic" -" way to do it is called *Federated Averaging* (`McMahan et al., 2016 " -"`__), often abbreviated as *FedAvg*. " -"*FedAvg* takes the 100 model updates and, as the name suggests, averages " -"them. To be more precise, it takes the *weighted average* of the model " -"updates, weighted by the number of examples each client used for " -"training. The weighting is important to make sure that each data example " -"has the same \"influence\" on the resulting global model. If one client " -"has 10 examples, and another client has 100 examples, then - without " -"weighting - each of the 10 examples would influence the global model ten " -"times as much as each of the 100 examples." +"We need two helper functions to update the local model with parameters " +"received from the server and to get the updated model parameters from the" +" local model: ``set_parameters`` and ``get_parameters``. The following " +"two functions do just that for the PyTorch model above." msgstr "" -"In order to get one single model, we have to combine all the model " -"updates we received from the client nodes. This process is called " -"*aggregation*, and there are many different ways to do it. The most basic" -" way to do it is called *Federated Averaging* (`McMahan et al., 2016 " -"`__), often abbreviated as *FedAvg*. " -"*FedAvg* takes the 100 model updates and, as the name suggests, averages " -"them. To be more precise, it takes the *weighted average* of the model " -"updates, weighted by the number of examples each client used for " -"training. The weighting is important to make sure that each data example " -"has the same \"influence\" on the resulting global model. If one client " -"has 10 examples, and another client has 100 examples, then - without " -"weighting - each of the 10 examples would influence the global model ten " -"times as much as each of the 100 examples." 
- -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:273 -msgid "|9fdf048ed58d4467b2718cdf4aaf1ec3|" -msgstr "" - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:315 -msgid "Aggregate model updates" -msgstr "Mises à jour globales du modèle" - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:280 -msgid "Step 5: Repeat steps 1 to 4 until the model converges" -msgstr "Étape 5 : répète les étapes 1 à 4 jusqu'à ce que le modèle converge" +"Nous avons besoin de deux fonctions d'aide pour mettre à jour le modèle " +"local avec les paramètres reçus du serveur et pour obtenir les paramètres" +" mis à jour du modèle local : ``set_parameters`` et ``get_parameters``. " +"Les deux fonctions suivantes font exactement cela pour le modèle PyTorch " +"ci-dessus." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:282 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:438 msgid "" -"Steps 1 to 4 are what we call a single round of federated learning. The " -"global model parameters get sent to the participating client nodes (step " -"1), the client nodes train on their local data (step 2), they send their " -"updated models to the server (step 3), and the server then aggregates the" -" model updates to get a new version of the global model (step 4)." +"The details of how this works are not really important here (feel free to" +" consult the PyTorch documentation if you want to learn more). In " +"essence, we use ``state_dict`` to access PyTorch model parameter tensors." +" The parameter tensors are then converted to/from a list of NumPy " +"ndarray's (which Flower knows how to serialize/deserialize):" msgstr "" -"Les étapes 1 à 4 constituent ce que nous appelons un cycle unique " -"d'apprentissage fédéré. Les paramètres du modèle global sont envoyés aux " -"nœuds clients participants (étape 1), les nœuds clients s'entraînent sur " -"leurs données locales (étape 2), ils envoient leurs modèles mis à jour au" -" serveur (étape 3), et le serveur agrège ensuite les mises à jour du " -"modèle pour obtenir une nouvelle version du modèle global (étape 4)." +"Les détails de ce fonctionnement ne sont pas vraiment importants ici " +"(n'hésite pas à consulter la documentation PyTorch si tu veux en savoir " +"plus). En substance, nous utilisons ``state_dict`` pour accéder aux " +"tenseurs de paramètres du modèle PyTorch. Les tenseurs de paramètres sont" +" ensuite convertis en/depuis une liste de ndarray NumPy (que Flower sait " +"sérialiser/désérialiser) :" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:284 -#, fuzzy -msgid "" -"During a single round, each client node that participates in that " -"iteration only trains for a little while. This means that after the " -"aggregation step (step 4), we have a model that has been trained on all " -"the data of all participating client nodes, but only for a little while. " -"We then have to repeat this training process over and over again to " -"eventually arrive at a fully trained model that performs well across the " -"data of all client nodes." -msgstr "" -"Au cours d'un seul tour, chaque nœud client qui participe à cette " -"itération ne s'entraîne que pendant un petit moment. Cela signifie " -"qu'après l'étape d'agrégation (étape 4), nous avons un modèle qui a été " -"entraîné sur toutes les données de tous les nœuds clients participants, " -"mais seulement pendant un petit moment. 
Nous devons ensuite répéter ce "
-"processus d'entraînement encore et encore pour finalement arriver à un "
-"modèle entièrement entraîné qui fonctionne bien sur l'ensemble des "
-"données de tous les nœuds clients."

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:466
+msgid "Implementing a Flower client"
+msgstr "Mise en place d'un client Flower"

-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:289
+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:468
msgid ""
-"Congratulations, you now understand the basics of federated learning. "
-"There's a lot more to discuss, of course, but that was federated learning"
-" in a nutshell. In later parts of this tutorial, we will go into more "
-"detail. Interesting questions include: How can we select the best client "
-"nodes that should participate in the next round? What's the best way to "
-"aggregate model updates? How can we handle failing client nodes "
-"(stragglers)?"
+"With that out of the way, let's move on to the interesting part. "
+"Federated learning systems consist of a server and multiple clients. In "
+"Flower, we create clients by implementing subclasses of "
+"``flwr.client.Client`` or ``flwr.client.NumPyClient``. We use "
+"``NumPyClient`` in this tutorial because it is easier to implement and "
+"requires us to write less boilerplate."
msgstr ""
-"Félicitations, tu comprends maintenant les bases de l'apprentissage "
-"fédéré. Il y a bien sûr beaucoup plus à discuter, mais c'était "
-"l'apprentissage fédéré en quelques mots. Dans les parties suivantes de ce"
-" tutoriel, nous irons plus en détail. Les questions intéressantes "
-"comprennent : comment pouvons-nous sélectionner les meilleurs nœuds "
-"clients qui devraient participer au prochain tour ? Quelle est la "
-"meilleure façon d'agréger les mises à jour du modèle ? Comment pouvons-"
-"nous gérer les nœuds clients qui échouent (stragglers) ?"
+"Ceci étant dit, passons à la partie intéressante. Les systèmes "
+"d'apprentissage fédérés se composent d'un serveur et de plusieurs "
+"clients. Dans Flower, nous créons des clients en mettant en œuvre des "
+"sous-classes de ``flwr.client.Client`` ou de ``flwr.client.NumPyClient``."
+" Nous utilisons ``NumPyClient`` dans ce tutoriel parce qu'il est plus "
+"facile à mettre en œuvre et qu'il nous oblige à écrire moins de code "
+"répétitif (boilerplate)."

-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:294
-#, fuzzy
+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:470
msgid ""
-"Just like we can train a model on the decentralized data of different "
-"client nodes, we can also evaluate the model on that data to receive "
-"valuable metrics. This is called federated evaluation, sometimes "
-"abbreviated as FE. In fact, federated evaluation is an integral part of "
-"most federated learning systems."
+"To implement the Flower client, we create a subclass of "
+"``flwr.client.NumPyClient`` and implement the three methods "
+"``get_parameters``, ``fit``, and ``evaluate``:"
msgstr ""
-"Tout comme nous pouvons former un modèle sur les données décentralisées "
-"de différents nœuds clients, nous pouvons également évaluer le modèle sur"
-" ces données pour recevoir des mesures précieuses. C'est ce qu'on appelle"
-" l'évaluation fédérée, parfois abrégée en FE. En fait, l'évaluation "
-"fédérée fait partie intégrante de la plupart des systèmes d'apprentissage"
-" fédéré."
+"Pour mettre en œuvre le client Flower, nous créons une sous-classe de " +"``flwr.client.NumPyClient`` et mettons en œuvre les trois méthodes " +"``get_parameters``, ``fit`` et ``evaluate`` :" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:297 -msgid "Federated analytics" -msgstr "Analyses fédérées" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:472 +msgid "``get_parameters``: Return the current local model parameters" +msgstr "``get_parameters`` : renvoie les paramètres du modèle local actuel" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:299 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:473 msgid "" -"In many cases, machine learning isn't necessary to derive value from " -"data. Data analysis can yield valuable insights, but again, there's often" -" not enough data to get a clear answer. What's the average age at which " -"people develop a certain type of health condition? Federated analytics " -"enables such queries over multiple client nodes. It is usually used in " -"conjunction with other privacy-enhancing technologies like secure " -"aggregation to prevent the server from seeing the results submitted by " -"individual client nodes." +"``fit``: Receive model parameters from the server, train the model " +"parameters on the local data, and return the (updated) model parameters " +"to the server" msgstr "" -"Dans de nombreux cas, l'apprentissage automatique n'est pas nécessaire " -"pour tirer de la valeur des données. L'analyse des données peut donner " -"des indications précieuses, mais là encore, il n'y a souvent pas assez de" -" données pour obtenir une réponse claire. Quel est l'âge moyen auquel les" -" gens développent un certain type de problème de santé ? L'analyse " -"fédérée permet de telles requêtes sur plusieurs nœuds clients. Elle est " -"généralement utilisée en conjonction avec d'autres technologies de " -"renforcement de la confidentialité, comme l'agrégation sécurisée, pour " -"empêcher le serveur de voir les résultats soumis par les nœuds clients " -"individuels." +"``fit`` : reçoit les paramètres du modèle du serveur, entraîne les " +"paramètres du modèle sur les données locales et renvoie les paramètres du" +" modèle (mis à jour) au serveur" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:305 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:474 msgid "" -"Differential privacy (DP) is often mentioned in the context of Federated " -"Learning. It is a privacy-preserving method used when analyzing and " -"sharing statistical data, ensuring the privacy of individual " -"participants. DP achieves this by adding statistical noise to the model " -"updates, ensuring any individual participants’ information cannot be " -"distinguished or re-identified. This technique can be considered an " -"optimization that provides a quantifiable privacy protection measure." 
+"``evaluate``: Receive model parameters from the server, evaluate the " +"model parameters on the local data, and return the evaluation result to " +"the server" msgstr "" +"``evaluate`` : reçoit les paramètres du modèle du serveur, évalue les " +"paramètres du modèle sur les données locales et renvoie le résultat de " +"l'évaluation au serveur" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:326 -msgid "Flower" -msgstr "Fleur" - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:328 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:476 msgid "" -"Federated learning, federated evaluation, and federated analytics require" -" infrastructure to move machine learning models back and forth, train and" -" evaluate them on local data, and then aggregate the updated models. " -"Flower provides the infrastructure to do exactly that in an easy, " -"scalable, and secure way. In short, Flower presents a unified approach to" -" federated learning, analytics, and evaluation. It allows the user to " -"federate any workload, any ML framework, and any programming language." -msgstr "" -"L'apprentissage fédéré, l'évaluation fédérée et l'analyse fédérée " -"nécessitent une infrastructure pour déplacer les modèles d'apprentissage " -"automatique dans les deux sens, les entraîner et les évaluer sur des " -"données locales, puis agréger les modèles mis à jour. Flower fournit " -"l'infrastructure pour faire exactement cela de manière simple, évolutive " -"et sécurisée. En bref, Flower présente une approche unifiée de " -"l'apprentissage, de l'analyse et de l'évaluation fédérés. Il permet à " -"l'utilisateur de fédérer n'importe quelle charge de travail, n'importe " -"quel cadre de ML et n'importe quel langage de programmation." - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:334 -msgid "|ff726bc5505e432388ee2fdd6ef420b9|" +"We mentioned that our clients will use the previously defined PyTorch " +"components for model training and evaluation. Let's see a simple Flower " +"client implementation that brings everything together:" msgstr "" +"Nous avons mentionné que nos clients utiliseront les composants PyTorch " +"définis précédemment pour la formation et l'évaluation des modèles. " +"Voyons une simple mise en œuvre du client Flower qui réunit tout cela :" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:340 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:513 msgid "" -"Flower federated learning server and client nodes (car, scooter, personal" -" computer, roomba, and phone)" +"Our class ``FlowerClient`` defines how local training/evaluation will be " +"performed and allows Flower to call the local training/evaluation through" +" ``fit`` and ``evaluate``. Each instance of ``FlowerClient`` represents a" +" *single client* in our federated learning system. Federated learning " +"systems have multiple clients (otherwise, there's not much to federate), " +"so each client will be represented by its own instance of " +"``FlowerClient``. If we have, for example, three clients in our workload," +" then we'd have three instances of ``FlowerClient``. Flower calls " +"``FlowerClient.fit`` on the respective instance when the server selects a" +" particular client for training (and ``FlowerClient.evaluate`` for " +"evaluation)." 
msgstr "" -"Serveur d'apprentissage fédéré de Flower et nœuds clients (voiture, " -"scooter, ordinateur personnel, roomba et téléphone)" +"Our class ``FlowerClient`` defines how local training/evaluation will be " +"performed and allows Flower to call the local training/evaluation through" +" ``fit`` and ``evaluate``. Each instance of ``FlowerClient`` represents a" +" *single client* in our federated learning system. Federated learning " +"systems have multiple clients (otherwise, there's not much to federate), " +"so each client will be represented by its own instance of " +"``FlowerClient``. If we have, for example, three clients in our workload," +" then we'd have three instances of ``FlowerClient``. Flower calls " +"``FlowerClient.fit`` on the respective instance when the server selects a" +" particular client for training (and ``FlowerClient.evaluate`` for " +"evaluation)." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:353 -msgid "" -"Congratulations, you just learned the basics of federated learning and " -"how it relates to the classic (centralized) machine learning!" -msgstr "" -"Félicitations, tu viens d'apprendre les bases de l'apprentissage fédéré " -"et son rapport avec l'apprentissage automatique classique (centralisé) !" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:517 +msgid "Using the Virtual Client Engine" +msgstr "Utilisation du moteur du client virtuel" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:355 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:519 msgid "" -"In the next part of this tutorial, we are going to build a first " -"federated learning system with Flower." +"In this notebook, we want to simulate a federated learning system with 10" +" clients on a single machine. This means that the server and all 10 " +"clients will live on a single machine and share resources such as CPU, " +"GPU, and memory. Having 10 clients would mean having 10 instances of " +"``FlowerClient`` in memory. Doing this on a single machine can quickly " +"exhaust the available memory resources, even if only a subset of these " +"clients participates in a single round of federated learning." msgstr "" -"Dans la prochaine partie de ce tutoriel, nous allons construire un " -"premier système d'apprentissage fédéré avec Flower." +"Dans ce carnet, nous voulons simuler un système d'apprentissage fédéré " +"avec 10 clients sur une seule machine. Cela signifie que le serveur et " +"les 10 clients vivront sur une seule machine et partageront des " +"ressources telles que le CPU, le GPU et la mémoire. Avoir 10 clients " +"signifierait avoir 10 instances de ``FlowerClient`` en mémoire. Faire " +"cela sur une seule machine peut rapidement épuiser les ressources mémoire" +" disponibles, même si seulement un sous-ensemble de ces clients participe" +" à un seul tour d'apprentissage fédéré." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:373 -#, fuzzy +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:521 msgid "" -"The `Flower Federated Learning Tutorial - Part 1 " -"`__ shows how to build a simple federated learning system " -"with PyTorch and Flower." +"In addition to the regular capabilities where server and clients run on " +"multiple machines, Flower, therefore, provides special simulation " +"capabilities that create ``FlowerClient`` instances only when they are " +"actually necessary for training or evaluation. 
To enable the Flower " +"framework to create clients when necessary, we need to implement a " +"function called ``client_fn`` that creates a ``FlowerClient`` instance on" +" demand. Flower calls ``client_fn`` whenever it needs an instance of one " +"particular client to call ``fit`` or ``evaluate`` (those instances are " +"usually discarded after use, so they should not keep any local state). " +"Clients are identified by a client ID, or short ``cid``. The ``cid`` can " +"be used, for example, to load different local data partitions for " +"different clients, as can be seen below:" msgstr "" -"Le `Tutoriel d'apprentissage fédéré Flower - Partie 1 " -"`__ " -"montre comment construire un système d'apprentissage fédéré simple avec " -"PyTorch et Flower." +"In addition to the regular capabilities where server and clients run on " +"multiple machines, Flower, therefore, provides special simulation " +"capabilities that create ``FlowerClient`` instances only when they are " +"actually necessary for training or evaluation. To enable the Flower " +"framework to create clients when necessary, we need to implement a " +"function called ``client_fn`` that creates a ``FlowerClient`` instance on" +" demand. Flower calls ``client_fn`` whenever it needs an instance of one " +"particular client to call ``fit`` or ``evaluate`` (those instances are " +"usually discarded after use, so they should not keep any local state). " +"Clients are identified by a client ID, or short ``cid``. The ``cid`` can " +"be used, for example, to load different local data partitions for " +"different clients, as can be seen below:" -#~ msgid "Flower CLI commands" -#~ msgstr "Commandes CLI Flower" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:556 +msgid "Starting the training" +msgstr "Commencer la formation" -#~ msgid "Contributor guide" -#~ msgstr "Guide pour les contributeurs" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:558 +msgid "" +"We now have the class ``FlowerClient`` which defines client-side " +"training/evaluation and ``client_fn`` which allows Flower to create " +"``FlowerClient`` instances whenever it needs to call ``fit`` or " +"``evaluate`` on one particular client. The last step is to start the " +"actual simulation using ``flwr.simulation.start_simulation``." +msgstr "" +"Nous avons maintenant la classe ``FlowerClient`` qui définit " +"l'entraînement/évaluation côté client et ``client_fn`` qui permet à " +"Flower de créer des instances de ``FlowerClient`` chaque fois qu'il a " +"besoin d'appeler ``fit`` ou ``evaluate`` sur un client particulier. La " +"dernière étape consiste à démarrer la simulation réelle en utilisant " +"``flwr.simulation.start_simulation``." -#~ msgid "API Reference - Flower CLI commands" -#~ msgstr "Référence API - Commandes CLI pour Flower" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:560 +msgid "" +"The function ``start_simulation`` accepts a number of arguments, amongst " +"them the ``client_fn`` used to create ``FlowerClient`` instances, the " +"number of clients to simulate (``num_clients``), the number of federated " +"learning rounds (``num_rounds``), and the strategy. The strategy " +"encapsulates the federated learning approach/algorithm, for example, " +"*Federated Averaging* (FedAvg)." 
+msgstr ""
+"La fonction ``start_simulation`` accepte un certain nombre d'arguments, "
+"parmi lesquels le ``client_fn`` utilisé pour créer les instances "
+"``FlowerClient``, le nombre de clients à simuler (``num_clients``), le "
+"nombre de tours d'apprentissage fédéré (``num_rounds``), et la stratégie."
+" La stratégie encapsule l'approche/algorithme d'apprentissage fédéré, par"
+" exemple, *Federated Averaging* (FedAvg)."

-#~ msgid "API Reference - flwr (Python package)"
-#~ msgstr "Référence API - flwr (paquetage Python)"

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:562
+msgid ""
+"Flower has a number of built-in strategies, but we can also use our own "
+"strategy implementations to customize nearly all aspects of the federated"
+" learning approach. For this example, we use the built-in ``FedAvg`` "
+"implementation and customize it using a few basic parameters. The last "
+"step is the actual call to ``start_simulation`` which - you guessed it - "
+"starts the simulation:"
+msgstr ""
+"Flower dispose d'un certain nombre de stratégies intégrées, mais nous "
+"pouvons également utiliser nos propres implémentations de stratégies pour"
+" personnaliser presque tous les aspects de l'approche de l'apprentissage "
+"fédéré. Pour cet exemple, nous utilisons l'implémentation intégrée "
+"``FedAvg`` et nous la personnalisons en utilisant quelques paramètres de "
+"base. La dernière étape est l'appel à ``start_simulation`` qui - tu l'as "
+"deviné - démarre la simulation :"

-#~ msgid "Flower client."
-#~ msgstr "Client de Flower"

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:608
+msgid "Behind the scenes"
+msgstr "Dans les coulisses"

-#~ msgid "Abstract base class for Flower clients."
-#~ msgstr ""

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:610
+msgid "So how does this work? How does Flower execute this simulation?"
+msgstr ""
+"Alors, comment cela fonctionne-t-il ? Comment Flower exécute-t-il cette "
+"simulation ?"

-#~ msgid "Evaluate the provided parameters using the locally held dataset."
-#~ msgstr "évaluer le modèle mis à jour sur l'ensemble de test local"

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:612
+#, python-format
+msgid ""
+"When we call ``start_simulation``, we tell Flower that there are 10 "
+"clients (``num_clients=10``). Flower then goes ahead an asks the "
+"``FedAvg`` strategy to select clients. ``FedAvg`` knows that it should "
+"select 100% of the available clients (``fraction_fit=1.0``), so it goes "
+"ahead and selects 10 random clients (i.e., 100% of 10)."
+msgstr ""
+"Lorsque nous appelons ``start_simulation``, nous disons à Flower qu'il y "
+"a 10 clients (``num_clients=10``). Flower demande alors à la stratégie "
+"``FedAvg`` de sélectionner des clients. ``FedAvg`` sait qu'il doit "
+"sélectionner 100% des clients disponibles (``fraction_fit=1.0``), alors "
+"il choisit 10 clients au hasard (c'est-à-dire 100% de 10)."

-#~ msgid "Parameters"
-#~ msgstr "Paramètres du modèle."

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:614
+msgid ""
+"Flower then asks the selected 10 clients to train the model. When the "
+"server receives the model parameter updates from the clients, it hands "
+"those updates over to the strategy (*FedAvg*) for aggregation. The "
+"strategy aggregates those updates and returns the new global model, which"
+" then gets used in the next round of federated learning."
+msgstr ""
+"Flower demande ensuite aux 10 clients sélectionnés d'entraîner le modèle."
+" Lorsque le serveur reçoit les mises à jour des paramètres du modèle de "
+"la part des clients, il les transmet à la stratégie (*FedAvg*) pour "
+"qu'elle les agrège. La stratégie agrège ces mises à jour et renvoie le "
+"nouveau modèle global, qui est ensuite utilisé dans le prochain cycle "
+"d'apprentissage fédéré."

-#~ msgid ""
-#~ "The evaluation instructions containing "
-#~ "(global) model parameters received from "
-#~ "the server and a dictionary of "
-#~ "configuration values used to customize "
-#~ "the local evaluation process."
-#~ msgstr ""

-#~ msgid "Returns"
-#~ msgstr "Ressources"

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:626
+msgid "Where's the accuracy?"
+msgstr "Où est la précision ?"

-#~ msgid ""
-#~ "The evaluation result containing the "
-#~ "loss on the local dataset and "
-#~ "other details such as the number "
-#~ "of local data examples used for "
-#~ "evaluation."
-#~ msgstr ""

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:628
+msgid ""
+"You may have noticed that all metrics except for ``losses_distributed`` "
+"are empty. Where did the ``{\"accuracy\": float(accuracy)}`` go?"
+msgstr ""
+"Tu as peut-être remarqué que toutes les mesures, à l'exception de "
+"``losses_distributed``, sont vides. Où est passée la ``{\"accuracy\": "
+"float(accuracy)}`` ?"

-#~ msgid "Return type"
-#~ msgstr ""

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:630
+msgid ""
+"Flower can automatically aggregate losses returned by individual clients,"
+" but it cannot do the same for metrics in the generic metrics dictionary "
+"(the one with the ``accuracy`` key). Metrics dictionaries can contain "
+"very different kinds of metrics and even key/value pairs that are not "
+"metrics at all, so the framework does not (and can not) know how to "
+"handle these automatically."
+msgstr ""
+"Flower peut automatiquement agréger les pertes renvoyées par les clients "
+"individuels, mais il ne peut pas faire la même chose pour les mesures "
+"dans le dictionnaire de mesures générique (celui avec la clé "
+"``accuracy``). Les dictionnaires de mesures peuvent contenir des types de"
+" mesures très différents et même des paires clé/valeur qui ne sont pas "
+"des mesures du tout, donc le framework ne sait pas (et ne peut pas "
+"savoir) comment les gérer automatiquement."

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:632
+msgid ""
+"As users, we need to tell the framework how to handle/aggregate these "
+"custom metrics, and we do so by passing metric aggregation functions to "
+"the strategy. The strategy will then call these functions whenever it "
+"receives fit or evaluate metrics from clients. The two possible functions"
+" are ``fit_metrics_aggregation_fn`` and "
+"``evaluate_metrics_aggregation_fn``."
+msgstr ""
+"En tant qu'utilisateurs, nous devons indiquer au framework comment "
+"gérer/agréger ces métriques personnalisées, et nous le faisons en passant"
+" des fonctions d'agrégation de métriques à la stratégie. La stratégie "
+"appellera alors ces fonctions chaque fois qu'elle recevra des métriques "
+"d'ajustement ou d'évaluation de la part des clients. Les deux fonctions "
+"possibles sont ``fit_metrics_aggregation_fn`` et "
+"``evaluate_metrics_aggregation_fn``."

-#~ msgid "Refine the provided parameters using the locally held dataset."
-#~ msgstr ""

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:634
+msgid ""
+"Let's create a simple weighted averaging function to aggregate the "
+"``accuracy`` metric we return from ``evaluate``:"
+msgstr ""
+"Créons une simple fonction de moyenne pondérée pour agréger la mesure "
+"``accuracy`` que nous renvoyons depuis ``evaluate`` :"

-#~ msgid ""
-#~ "The training instructions containing (global)"
-#~ " model parameters received from the "
-#~ "server and a dictionary of configuration"
-#~ " values used to customize the local"
-#~ " training process."
-#~ msgstr ""

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:660
+msgid ""
+"The only thing left to do is to tell the strategy to call this function "
+"whenever it receives evaluation metric dictionaries from the clients:"
+msgstr ""
+"La seule chose qui reste à faire est d'indiquer à la stratégie d'appeler "
+"cette fonction chaque fois qu'elle reçoit des dictionnaires de métriques "
+"d'évaluation de la part des clients :"

-#~ msgid ""
-#~ "The training result containing updated "
-#~ "parameters and other details such as "
-#~ "the number of local training examples"
-#~ " used for training."
-#~ msgstr ""

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:697
+msgid ""
+"We now have a full system that performs federated training and federated "
+"evaluation. It uses the ``weighted_average`` function to aggregate custom"
+" evaluation metrics and calculates a single ``accuracy`` metric across "
+"all clients on the server side."
+msgstr ""
+"Nous avons maintenant un système complet qui effectue la formation "
+"fédérée et l'évaluation fédérée. Il utilise la fonction "
+"``weighted_average`` pour agréger les mesures d'évaluation personnalisées"
+" et calcule une seule mesure ``accuracy`` sur l'ensemble des clients, "
+"côté serveur."

-#~ msgid "Return the current local model parameters."
-#~ msgstr "``get_parameters`` : renvoie les paramètres du modèle local actuel"

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:699
+msgid ""
+"The other two categories of metrics (``losses_centralized`` and "
+"``metrics_centralized``) are still empty because they only apply when "
+"centralized evaluation is being used. Part two of the Flower tutorial "
+"will cover centralized evaluation."
+msgstr ""
+"Les deux autres catégories de mesures (``losses_centralized`` et "
+"``metrics_centralized``) sont toujours vides car elles ne s'appliquent "
+"que lorsque l'évaluation centralisée est utilisée. La deuxième partie du "
+"tutoriel Flower couvrira l'évaluation centralisée."

-#~ msgid ""
-#~ "The get parameters instructions received "
-#~ "from the server containing a dictionary"
-#~ " of configuration values."
-#~ msgstr ""
-
-#~ msgid "The current local model parameters."
-#~ msgstr "``get_parameters`` : renvoie les paramètres du modèle local actuel"
-
-#~ msgid "Return set of client's properties."
-#~ msgstr ""

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:711
+#: ../../source/tutorial-series-what-is-federated-learning.ipynb:351
+msgid "Final remarks"
+msgstr "Remarques finales"

-#~ msgid ""
-#~ "The get properties instructions received "
-#~ "from the server containing a dictionary"
-#~ " of configuration values."
-#~ msgstr ""

+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:713
+msgid ""
+"Congratulations, you just trained a convolutional neural network, "
+"federated over 10 clients! 
With that, you understand the basics of " +"federated learning with Flower. The same approach you've seen can be used" +" with other machine learning frameworks (not just PyTorch) and tasks (not" +" just CIFAR-10 images classification), for example NLP with Hugging Face " +"Transformers or speech with SpeechBrain." +msgstr "" +"Félicitations, tu viens d'entraîner un réseau neuronal convolutif, fédéré" +" sur 10 clients ! Avec ça, tu comprends les bases de l'apprentissage " +"fédéré avec Flower. La même approche que tu as vue peut être utilisée " +"avec d'autres cadres d'apprentissage automatique (pas seulement PyTorch) " +"et d'autres tâches (pas seulement la classification des images CIFAR-10)," +" par exemple le NLP avec Hugging Face Transformers ou la parole avec " +"SpeechBrain." -#~ msgid "The current client properties." -#~ msgstr "" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:715 +msgid "" +"In the next notebook, we're going to cover some more advanced concepts. " +"Want to customize your strategy? Initialize parameters on the server " +"side? Or evaluate the aggregated model on the server side? We'll cover " +"all this and more in the next tutorial." +msgstr "" +"Dans le prochain cahier, nous allons aborder des concepts plus avancés. " +"Tu veux personnaliser ta stratégie ? Initialiser des paramètres côté " +"serveur ? Ou évaluer le modèle agrégé côté serveur ? Nous aborderons tout" +" cela et bien plus encore dans le prochain tutoriel." -#~ msgid "Start a Flower client node which connects to a Flower server." -#~ msgstr "" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:733 +#, fuzzy +msgid "" +"The `Flower Federated Learning Tutorial - Part 2 " +"`__ goes into more depth about strategies and all " +"the advanced things you can build with them." +msgstr "" +"Le `Tutoriel d'apprentissage fédéré Flower - Partie 2 " +"`__ va plus en profondeur sur les stratégies et toutes les " +"choses avancées que tu peux construire avec elles." -#~ msgid "" -#~ "The IPv4 or IPv6 address of the" -#~ " server. If the Flower server runs" -#~ " on the same machine on port " -#~ "8080, then `server_address` would be " -#~ "`\"[::]:8080\"`." -#~ msgstr "" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:9 +#, fuzzy +msgid "Use a federated learning strategy" +msgstr "Stratégie de moyenne fédérée." -#~ msgid "An implementation of the abstract base class `flwr.client.Client`." -#~ msgstr "" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:11 +#, fuzzy +msgid "" +"Welcome to the next part of the federated learning tutorial. In previous " +"parts of this tutorial, we introduced federated learning with PyTorch and" +" Flower (`part 1 `__)." +msgstr "" +"Bienvenue dans la prochaine partie du tutoriel sur l'apprentissage " +"fédéré. Dans les parties précédentes de ce tutoriel, nous avons présenté " +"l'apprentissage fédéré avec PyTorch et Flower (`partie 1 " +"`__)." -#~ msgid "" -#~ "The maximum length of gRPC messages " -#~ "that can be exchanged with the " -#~ "Flower server. The default should be " -#~ "sufficient for most models. Users who" -#~ " train very large models might need" -#~ " to increase this value. Note that" -#~ " the Flower server needs to be " -#~ "started with the same value (see " -#~ "`flwr.server.start_server`), otherwise it will " -#~ "not know about the increased limit " -#~ "and block larger messages." 
-#~ msgstr "" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:13 +msgid "" +"In this notebook, we'll begin to customize the federated learning system " +"we built in the introductory notebook (again, using `Flower " +"`__ and `PyTorch `__)." +msgstr "" +"Dans ce carnet, nous allons commencer à personnaliser le système " +"d'apprentissage fédéré que nous avons construit dans le carnet " +"d'introduction (toujours en utilisant `Flower `__ et " +"`PyTorch `__)." -#~ msgid "" -#~ "The PEM-encoded root certificates as " -#~ "a byte string or a path string." -#~ " If provided, a secure connection " -#~ "using the certificates will be " -#~ "established to an SSL-enabled Flower " -#~ "server." -#~ msgstr "" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:17 +#, fuzzy +msgid "Let's move beyond FedAvg with Flower strategies!" +msgstr "Dépassons FedAvg avec les stratégies florales !" -#~ msgid "" -#~ "DEPRECATED - USE 'transport' INSTEAD. " -#~ "Defines whether or not the client " -#~ "is interacting with the server using " -#~ "the experimental REST API. This feature" -#~ " is experimental, it might change " -#~ "considerably in future versions of " -#~ "Flower." -#~ msgstr "" -#~ "DÉPRÉCIÉ - UTILISER 'transport' À LA " -#~ "PLACE Définit si le client interagit " -#~ "ou non avec le serveur à l'aide" -#~ " de l'API REST expérimentale. Cette " -#~ "fonctionnalité est expérimentale, elle " -#~ "pourrait changer considérablement dans les " -#~ "futures versions de Flower." +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:309 +msgid "Strategy customization" +msgstr "Personnalisation de la stratégie" -#~ msgid "" -#~ "Configure the transport layer. Allowed " -#~ "values: - 'grpc-bidi': gRPC, " -#~ "bidirectional streaming - 'grpc-rere': " -#~ "gRPC, request-response (experimental) - " -#~ "'rest': HTTP (experimental)" -#~ msgstr "" -#~ "Valeurs autorisées : - 'grpc-bidi' " -#~ ": gRPC, flux bidirectionnel - 'grpc-" -#~ "rere' : gRPC, requête-réponse " -#~ "(expérimental) - 'rest' : HTTP " -#~ "(expérimental)" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:311 +msgid "" +"So far, everything should look familiar if you've worked through the " +"introductory notebook. With that, we're ready to introduce a number of " +"new features." +msgstr "" +"Jusqu'à présent, tout devrait te sembler familier si tu as travaillé sur " +"le cahier d'introduction. Avec cela, nous sommes prêts à présenter un " +"certain nombre de nouvelles fonctionnalités." -#~ msgid "Starting a gRPC client with an insecure server connection:" -#~ msgstr "" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:323 +msgid "Server-side parameter **initialization**" +msgstr "Paramètres côté serveur **initialisation**" -#~ msgid "Starting an SSL-enabled gRPC client:" -#~ msgstr "" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:325 +msgid "" +"Flower, by default, initializes the global model by asking one random " +"client for the initial parameters. In many cases, we want more control " +"over parameter initialization though. Flower therefore allows you to " +"directly pass the initial parameters to the Strategy:" +msgstr "" +"Flower, par défaut, initialise le modèle global en demandant à un client " +"aléatoire les paramètres initiaux. Dans de nombreux cas, nous voulons " +"cependant avoir plus de contrôle sur l'initialisation des paramètres. 
" +"Flower te permet donc de passer directement les paramètres initiaux à la " +"Stratégie :" -#~ msgid "Abstract base class for Flower clients using NumPy." -#~ msgstr "" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:370 +msgid "" +"Passing ``initial_parameters`` to the ``FedAvg`` strategy prevents Flower" +" from asking one of the clients for the initial parameters. If we look " +"closely, we can see that the logs do not show any calls to the " +"``FlowerClient.get_parameters`` method." +msgstr "" +"Le fait de passer ``initial_parameters`` à la stratégie ``FedAvg`` " +"empêche Flower de demander les paramètres initiaux à l'un des clients. Si" +" nous regardons de près, nous pouvons voir que les journaux ne montrent " +"aucun appel à la méthode ``FlowerClient.get_parameters``." -#~ msgid "The current (global) model parameters." -#~ msgstr "``get_parameters`` : renvoie les paramètres du modèle local actuel" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:382 +msgid "Starting with a customized strategy" +msgstr "Commencer par une stratégie personnalisée" -#~ msgid "" -#~ "Configuration parameters which allow the " -#~ "server to influence evaluation on the" -#~ " client. It can be used to " -#~ "communicate arbitrary values from the " -#~ "server to the client, for example, " -#~ "to influence the number of examples " -#~ "used for evaluation." -#~ msgstr "" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:384 +msgid "" +"We've seen the function ``start_simulation`` before. It accepts a number " +"of arguments, amongst them the ``client_fn`` used to create " +"``FlowerClient`` instances, the number of clients to simulate " +"``num_clients``, the number of rounds ``num_rounds``, and the strategy." +msgstr "" +"Elle accepte un certain nombre d'arguments, parmi lesquels le " +"``client_fn`` utilisé pour créer les instances de ``FlowerClient``, le " +"nombre de clients à simuler ``num_clients``, le nombre de rounds " +"``num_rounds``, et la stratégie." -#~ msgid "" -#~ "* **loss** (*float*) -- The evaluation" -#~ " loss of the model on the local" -#~ " dataset. * **num_examples** (*int*) -- " -#~ "The number of examples used for " -#~ "evaluation. * **metrics** (*Dict[str, " -#~ "Scalar]*) -- A dictionary mapping " -#~ "arbitrary string keys to values of " -#~ "type bool, bytes, float, int, or " -#~ "str. It can be used to " -#~ "communicate arbitrary values back to the" -#~ " server." -#~ msgstr "" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:386 +msgid "" +"The strategy encapsulates the federated learning approach/algorithm, for " +"example, ``FedAvg`` or ``FedAdagrad``. Let's try to use a different " +"strategy this time:" +msgstr "" +"La stratégie englobe l'approche/l'algorithme d'apprentissage fédéré, par " +"exemple, ``FedAvg`` ou ``FedAdagrad``. Essayons d'utiliser une stratégie " +"différente cette fois-ci :" -#~ msgid "" -#~ "**loss** (*float*) -- The evaluation " -#~ "loss of the model on the local " -#~ "dataset." -#~ msgstr "" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:424 +msgid "Server-side parameter **evaluation**" +msgstr "Paramètre côté serveur **évaluation**" -#~ msgid "**num_examples** (*int*) -- The number of examples used for evaluation." 
-#~ msgstr "" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:426 +msgid "" +"Flower can evaluate the aggregated model on the server-side or on the " +"client-side. Client-side and server-side evaluation are similar in some " +"ways, but different in others." +msgstr "" +"Flower peut évaluer le modèle agrégé côté serveur ou côté client. Les " +"évaluations côté client et côté serveur sont similaires à certains " +"égards, mais différentes à d'autres." + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:428 +msgid "" +"**Centralized Evaluation** (or *server-side evaluation*) is conceptually " +"simple: it works the same way that evaluation in centralized machine " +"learning does. If there is a server-side dataset that can be used for " +"evaluation purposes, then that's great. We can evaluate the newly " +"aggregated model after each round of training without having to send the " +"model to clients. We're also fortunate in the sense that our entire " +"evaluation dataset is available at all times." +msgstr "" +"**L'évaluation centralisée** (ou *évaluation côté serveur*) est " +"conceptuellement simple : elle fonctionne de la même manière que " +"l'évaluation dans l'apprentissage automatique centralisé. S'il existe un " +"ensemble de données côté serveur qui peut être utilisé à des fins " +"d'évaluation, alors c'est parfait. Nous pouvons évaluer le modèle " +"nouvellement agrégé après chaque cycle de formation sans avoir à envoyer " +"le modèle aux clients. Nous avons également la chance que l'ensemble de " +"notre ensemble de données d'évaluation soit disponible à tout moment." + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:430 +msgid "" +"**Federated Evaluation** (or *client-side evaluation*) is more complex, " +"but also more powerful: it doesn't require a centralized dataset and " +"allows us to evaluate models over a larger set of data, which often " +"yields more realistic evaluation results. In fact, many scenarios require" +" us to use **Federated Evaluation** if we want to get representative " +"evaluation results at all. But this power comes at a cost: once we start " +"to evaluate on the client side, we should be aware that our evaluation " +"dataset can change over consecutive rounds of learning if those clients " +"are not always available. Moreover, the dataset held by each client can " +"also change over consecutive rounds. This can lead to evaluation results " +"that are not stable, so even if we would not change the model, we'd see " +"our evaluation results fluctuate over consecutive rounds." +msgstr "" +"**L'évaluation fédérée** (ou évaluation côté client) est plus complexe, " +"mais aussi plus puissante : elle ne nécessite pas d'ensemble de données " +"centralisé et nous permet d'évaluer les modèles sur un plus grand " +"ensemble de données, ce qui donne souvent des résultats d'évaluation plus" +" réalistes. En fait, de nombreux scénarios exigent que nous utilisions " +"l'évaluation fédérée** si nous voulons obtenir des résultats d'évaluation" +" représentatifs. Mais cette puissance a un coût : une fois que nous " +"commençons à évaluer côté client, nous devons savoir que notre ensemble " +"de données d'évaluation peut changer au cours des cycles d'apprentissage " +"consécutifs si ces clients ne sont pas toujours disponibles. De plus, " +"l'ensemble de données détenu par chaque client peut également changer au " +"cours des cycles consécutifs. 
Cela peut conduire à des résultats " +"d'évaluation qui ne sont pas stables, donc même si nous ne changions pas " +"le modèle, nous verrions nos résultats d'évaluation fluctuer au cours des" +" cycles consécutifs." + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:433 +msgid "" +"We've seen how federated evaluation works on the client side (i.e., by " +"implementing the ``evaluate`` method in ``FlowerClient``). Now let's see " +"how we can evaluate aggregated model parameters on the server-side:" +msgstr "" +"Nous avons vu comment l'évaluation fédérée fonctionne du côté client " +"(c'est-à-dire en implémentant la méthode ``evaluate`` dans " +"``FlowerClient``). Voyons maintenant comment nous pouvons évaluer les " +"paramètres du modèle agrégé du côté serveur :" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:490 +msgid "Sending/receiving arbitrary values to/from clients" +msgstr "Envoi/réception de valeurs arbitraires vers/depuis les clients" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:492 +#, fuzzy +msgid "" +"In some situations, we want to configure client-side execution (training," +" evaluation) from the server-side. One example for that is the server " +"asking the clients to train for a certain number of local epochs. Flower " +"provides a way to send configuration values from the server to the " +"clients using a dictionary. Let's look at an example where the clients " +"receive values from the server through the ``config`` parameter in " +"``fit`` (``config`` is also available in ``evaluate``). The ``fit`` " +"method receives the configuration dictionary through the ``config`` " +"parameter and can then read values from this dictionary. In this example," +" it reads ``server_round`` and ``local_epochs`` and uses those values to " +"improve the logging and configure the number of local training epochs:" +msgstr "" +"In some situations, we want to configure client-side execution (training," +" evaluation) from the server-side. One example for that is the server " +"asking the clients to train for a certain number of local epochs. Flower " +"provides a way to send configuration values from the server to the " +"clients using a dictionary. Let's look at an example where the clients " +"receive values from the server through the ``config`` parameter in " +"``fit`` (``config`` is also available in ``evaluate``). The ``fit`` " +"method receives the configuration dictionary through the ``config`` " +"parameter and can then read values from this dictionary. In this example," +" it reads ``server_round`` and ``local_epochs`` and uses those values to " +"improve the logging and configure the number of local training epochs:" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:546 +msgid "" +"So how can we send this config dictionary from server to clients? The " +"built-in Flower Strategies provide way to do this, and it works similarly" +" to the way server-side evaluation works. We provide a function to the " +"strategy, and the strategy calls this function for every round of " +"federated learning:" +msgstr "" +"Comment pouvons-nous donc envoyer ce dictionnaire de configuration du " +"serveur aux clients ? Les stratégies de Flower intégrées fournissent un " +"moyen de le faire, et cela fonctionne de la même façon que l'évaluation " +"côté serveur. 
Nous fournissons une fonction à la stratégie, et la " +"stratégie appelle cette fonction pour chaque cycle d'apprentissage fédéré" +" :" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:576 +msgid "" +"Next, we'll just pass this function to the FedAvg strategy before " +"starting the simulation:" +msgstr "" +"Ensuite, nous allons simplement passer cette fonction à la stratégie " +"FedAvg avant de commencer la simulation :" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:613 +msgid "" +"As we can see, the client logs now include the current round of federated" +" learning (which they read from the ``config`` dictionary). We can also " +"configure local training to run for one epoch during the first and second" +" round of federated learning, and then for two epochs during the third " +"round." +msgstr "" +"Comme nous pouvons le voir, les journaux des clients incluent maintenant " +"le cycle actuel d'apprentissage fédéré (qu'ils lisent depuis le " +"dictionnaire ``config``). Nous pouvons également configurer " +"l'apprentissage local pour qu'il s'exécute pendant une époque au cours du" +" premier et du deuxième cycle d'apprentissage fédéré, puis pendant deux " +"époques au cours du troisième cycle." + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:615 +msgid "" +"Clients can also return arbitrary values to the server. To do so, they " +"return a dictionary from ``fit`` and/or ``evaluate``. We have seen and " +"used this concept throughout this notebook without mentioning it " +"explicitly: our ``FlowerClient`` returns a dictionary containing a custom" +" key/value pair as the third return value in ``evaluate``." +msgstr "" +"Les clients peuvent également renvoyer des valeurs arbitraires au " +"serveur. Pour ce faire, ils renvoient un dictionnaire depuis ``fit`` " +"et/ou ``evaluate``. Nous avons vu et utilisé ce concept tout au long de " +"ce carnet sans le mentionner explicitement : notre ``FlowerClient`` " +"renvoie un dictionnaire contenant une paire clé/valeur personnalisée en " +"tant que troisième valeur de retour dans ``evaluate``." + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:627 +msgid "Scaling federated learning" +msgstr "Mise à l'échelle de l'apprentissage fédéré" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:629 +msgid "" +"As a last step in this notebook, let's see how we can use Flower to " +"experiment with a large number of clients." +msgstr "" +"Comme dernière étape de ce carnet, voyons comment nous pouvons utiliser " +"Flower pour expérimenter avec un grand nombre de clients." + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:651 +#, python-format +msgid "" +"We now have 1000 partitions, each holding 45 training and 5 validation " +"examples. Given that the number of training examples on each client is " +"quite small, we should probably train the model a bit longer, so we " +"configure the clients to perform 3 local training epochs. 
We should also " +"adjust the fraction of clients selected for training during each round " +"(we don't want all 1000 clients participating in every round), so we " +"adjust ``fraction_fit`` to ``0.05``, which means that only 5% of " +"available clients (so 50 clients) will be selected for training each " +"round:" +msgstr "" +"Nous avons maintenant 1000 partitions, chacune contenant 45 exemples " +"d'entraînement et 5 exemples de validation. Etant donné que le nombre " +"d'exemples d'entraînement sur chaque client est assez faible, nous " +"devrions probablement entraîner le modèle un peu plus longtemps, nous " +"configurons donc les clients pour qu'ils effectuent 3 époques " +"d'entraînement local. Nous devrions également ajuster la fraction de " +"clients sélectionnés pour l'entraînement à chaque tour (nous ne voulons " +"pas que les 1000 clients participent à chaque tour), nous ajustons donc " +"``fraction_fit`` à ``0.05``, ce qui signifie que seulement 5% des clients" +" disponibles (donc 50 clients) seront sélectionnés pour l'entraînement à " +"chaque tour :" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:699 +msgid "" +"In this notebook, we've seen how we can gradually enhance our system by " +"customizing the strategy, initializing parameters on the server side, " +"choosing a different strategy, and evaluating models on the server-side. " +"That's quite a bit of flexibility with so little code, right?" +msgstr "" +"Dans ce carnet, nous avons vu comment nous pouvons progressivement " +"améliorer notre système en personnalisant la stratégie, en initialisant " +"les paramètres côté serveur, en choisissant une stratégie différente et " +"en évaluant les modèles côté serveur. C'est une sacrée flexibilité avec " +"si peu de code, n'est-ce pas ?" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:701 +msgid "" +"In the later sections, we've seen how we can communicate arbitrary values" +" between server and clients to fully customize client-side execution. " +"With that capability, we built a large-scale Federated Learning " +"simulation using the Flower Virtual Client Engine and ran an experiment " +"involving 1000 clients in the same workload - all in a Jupyter Notebook!" +msgstr "" +"Dans les sections ultérieures, nous avons vu comment nous pouvons " +"communiquer des valeurs arbitraires entre le serveur et les clients pour " +"personnaliser entièrement l'exécution côté client. Grâce à cette " +"capacité, nous avons construit une simulation d'apprentissage fédéré à " +"grande échelle en utilisant le moteur de client virtuel Flower et nous " +"avons mené une expérience impliquant 1000 clients dans la même charge de " +"travail - le tout dans un carnet Jupyter !" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:719 +#, fuzzy +msgid "" +"The `Flower Federated Learning Tutorial - Part 3 " +"`__ shows how to build a fully custom ``Strategy`` from " +"scratch." +msgstr "" +"Le `Tutoriel d'apprentissage fédéré Flower - Partie 3 [WIP] " +"`__ montre comment construire une ``Stratégie`` entièrement " +"personnalisée à partir de zéro." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:9 +msgid "What is Federated Learning?" +msgstr "Qu'est-ce que l'apprentissage fédéré ?" 
+ +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:13 +#, fuzzy +msgid "" +"In this tutorial, you will learn what federated learning is, build your " +"first system in Flower, and gradually extend it. If you work through all " +"parts of the tutorial, you will be able to build advanced federated " +"learning systems that approach the current state of the art in the field." +msgstr "" +"Dans ce tutoriel, tu apprendras ce qu'est l'apprentissage fédéré, tu " +"construiras ton premier système dans Flower, et tu l'étendras " +"progressivement. Si tu travailles sur toutes les parties du tutoriel, tu " +"seras capable de construire des systèmes d'apprentissage fédéré avancés " +"qui se rapprochent de l'état actuel de l'art dans le domaine." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:15 +msgid "" +"🧑‍🏫 This tutorial starts at zero and expects no familiarity with " +"federated learning. Only a basic understanding of data science and Python" +" programming is assumed." +msgstr "" +"🧑‍🏫 Ce tutoriel part de zéro et n'attend aucune familiarité avec " +"l'apprentissage fédéré. Seule une compréhension de base de la science des" +" données et de la programmation Python est supposée." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:17 +#, fuzzy +msgid "" +"`Star Flower on GitHub `__ ⭐️ and join " +"the open-source Flower community on Slack to connect, ask questions, and " +"get help: `Join Slack `__ 🌼 We'd love to " +"hear from you in the ``#introductions`` channel! And if anything is " +"unclear, head over to the ``#questions`` channel." +msgstr "" +"`Star Flower on GitHub `__ ⭐️ et " +"rejoignez la communauté Flower sur Slack pour vous connecter, poser des " +"questions et obtenir de l'aide : `Join Slack `__ 🌼 Nous serions ravis d'avoir de vos nouvelles dans le canal " +"``#introductions`` ! Et si quelque chose n'est pas clair, rendez-vous sur" +" le canal ``#questions``." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:31 +msgid "Classic machine learning" +msgstr "Apprentissage automatique classique" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:33 +msgid "" +"Before we begin to discuss federated learning, let us quickly recap how " +"most machine learning works today." +msgstr "" +"Avant de commencer à discuter de l'apprentissage fédéré, récapitulons " +"rapidement la façon dont la plupart des apprentissages automatiques " +"fonctionnent aujourd'hui." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:35 +msgid "" +"In machine learning, we have a model, and we have data. The model could " +"be a neural network (as depicted here), or something else, like classical" +" linear regression." +msgstr "" +"Dans l'apprentissage automatique, nous avons un modèle et des données. Le" +" modèle peut être un réseau neuronal (comme illustré ici), ou quelque " +"chose d'autre, comme la régression linéaire classique." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:41 +msgid "|d8bf04f23d9b46d8a23cc6f4887d7873|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:109 +msgid "Model and data" +msgstr "Modèle et données" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:47 +msgid "" +"We train the model using the data to perform a useful task. A task could " +"be to detect objects in images, transcribe an audio recording, or play a " +"game like Go." 
+msgstr "" +"Nous entraînons le modèle en utilisant les données pour effectuer une " +"tâche utile. Une tâche peut consister à détecter des objets dans des " +"images, à transcrire un enregistrement audio ou à jouer à un jeu comme le" +" Go." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:53 +msgid "|5aa1711387d74d0f8b9c499e1a51627e|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:111 +msgid "Train model using data" +msgstr "Entraîne le modèle à l'aide des données" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:59 +#, fuzzy +msgid "" +"Now, in practice, the training data we work with doesn't originate on the" +" machine we train the model on. It gets created somewhere else." +msgstr "" +"Dans la pratique, les données d'entraînement avec lesquelles nous " +"travaillons ne proviennent pas de la machine sur laquelle nous entraînons" +" le modèle. Elles sont créées ailleurs." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:61 +#, fuzzy +msgid "" +"It originates on a smartphone by the user interacting with an app, a car " +"collecting sensor data, a laptop receiving input via the keyboard, or a " +"smart speaker listening to someone trying to sing a song." +msgstr "" +"Elle prend naissance sur un smartphone par l'interaction de l'utilisateur" +" avec une application, une voiture qui collecte des données de capteurs, " +"un ordinateur portable qui reçoit des entrées via le clavier, ou un haut-" +"parleur intelligent qui écoute quelqu'un qui essaie de chanter une " +"chanson." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:67 +msgid "|2bc8e069228d4873804061ff4a95048c|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:113 +msgid "Data on a phone" +msgstr "Données sur un téléphone" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:73 +msgid "" +"What's also important to mention, this \"somewhere else\" is usually not " +"just one place, it's many places. It could be several devices all running" +" the same app. But it could also be several organizations, all generating" +" data for the same task." +msgstr "" +"Il est également important de mentionner que cet \"ailleurs\" n'est " +"généralement pas un seul endroit, mais plusieurs. Il peut s'agir de " +"plusieurs appareils fonctionnant tous avec la même application. Mais il " +"peut également s'agir de plusieurs organisations, qui génèrent toutes des" +" données pour la même tâche." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:79 +msgid "|c258488766324dc9a6807f0e7c4fd5f4|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:115 +msgid "Data is on many devices" +msgstr "Les données se trouvent sur de nombreux appareils" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:85 +msgid "" +"So to use machine learning, or any kind of data analysis, the approach " +"that has been used in the past was to collect all data on a central " +"server. This server can be somewhere in a data center, or somewhere in " +"the cloud." +msgstr "" +"Ainsi, pour utiliser l'apprentissage automatique, ou tout autre type " +"d'analyse de données, l'approche utilisée par le passé consistait à " +"collecter toutes les données sur un serveur central. Ce serveur peut se " +"trouver quelque part dans un centre de données, ou quelque part dans le " +"cloud." 
+ +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:91 +msgid "|d5f962c3f4ec48529efda980868c14b0|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:117 +msgid "Central data collection" +msgstr "Collecte centralisée des données" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:97 +#, fuzzy +msgid "" +"Once all the data is collected in one place, we can finally use machine " +"learning algorithms to train our model on the data. This is the machine " +"learning approach that we've basically always relied on." +msgstr "" +"Une fois que toutes les données sont rassemblées en un seul endroit, nous" +" pouvons enfin utiliser des algorithmes d'apprentissage automatique pour " +"entraîner notre modèle sur les données. C'est l'approche d'apprentissage " +"automatique sur laquelle nous nous sommes fondamentalement toujours " +"appuyés." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:103 +msgid "|a5eccea18d4c43a68b54b65043cabef8|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:119 +msgid "Central model training" +msgstr "Formation au modèle central" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:130 +msgid "Challenges of classical machine learning" +msgstr "Les défis de l'apprentissage automatique classique" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:132 +msgid "" +"The classic machine learning approach we've just seen can be used in some" +" cases. Great examples include categorizing holiday photos, or analyzing " +"web traffic. Cases, where all the data is naturally available on a " +"centralized server." +msgstr "" +"L'approche classique de l'apprentissage automatique que nous venons de " +"voir peut être utilisée dans certains cas. Parmi les grands exemples, on " +"peut citer la catégorisation des photos de vacances, ou l'analyse du " +"trafic web. Des cas, où toutes les données sont naturellement disponibles" +" sur un serveur centralisé." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:138 +msgid "|f17662f7df2d42f68cac70a1fdeda8a7|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:173 +msgid "Centralized possible" +msgstr "Possibilité de centralisation" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:144 +msgid "" +"But the approach can not be used in many other cases. Cases, where the " +"data is not available on a centralized server, or cases where the data " +"available on one server is not enough to train a good model." +msgstr "" +"Mais cette approche ne peut pas être utilisée dans de nombreux autres cas" +" : lorsque les données ne sont pas disponibles sur un serveur centralisé," +" ou lorsque les données disponibles sur un serveur ne sont pas " +"suffisantes pour former un bon modèle." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:150 +msgid "|241fc906441a4f038c625a19d30d01b2|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:175 +msgid "Centralized impossible" +msgstr "Impossible de centraliser" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:156 +#, fuzzy +msgid "" +"There are many reasons why the classic centralized machine learning " +"approach does not work for a large number of highly important real-world " +"use cases. 
Those reasons include:" +msgstr "" +"Il existe de nombreuses raisons pour lesquelles l'approche classique " +"centralisée de l'apprentissage automatique ne fonctionne pas pour un " +"grand nombre de cas d'utilisation très importants dans le monde réel, " +"notamment :" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:158 +#, fuzzy +msgid "" +"**Regulations**: GDPR (Europe), CCPA (California), PIPEDA (Canada), LGPD " +"(Brazil), PDPL (Argentina), KVKK (Turkey), POPI (South Africa), FSS " +"(Russia), CDPR (China), PDPB (India), PIPA (Korea), APPI (Japan), PDP " +"(Indonesia), PDPA (Singapore), APP (Australia), and other regulations " +"protect sensitive data from being moved. In fact, those regulations " +"sometimes even prevent single organizations from combining their own " +"users' data for artificial intelligence training because those users live" +" in different parts of the world, and their data is governed by different" +" data protection regulations." +msgstr "" +"**Réglementations** : GDPR (Europe), CCPA (Californie), PIPEDA (Canada), " +"LGPD (Brésil), PDPL (Argentine), KVKK (Turquie), POPI (Afrique du Sud), " +"FSS (Russie), CDPR (Chine), PDPB (Inde), PIPA (Corée), APPI (Japon), PDP " +"(Indonésie), PDPA (Singapour), APP (Australie), et d'autres " +"réglementations protègent les données sensibles contre le déplacement. En" +" fait, ces réglementations empêchent même parfois des organisations " +"individuelles de combiner les données de leurs propres utilisateurs pour " +"la formation à l'intelligence artificielle parce que ces utilisateurs " +"vivent dans différentes parties du monde, et que leurs données sont " +"régies par des réglementations différentes en matière de protection des " +"données." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:160 +msgid "" +"**User preference**: In addition to regulation, there are use cases where" +" users just expect that no data leaves their device, ever. If you type " +"your passwords and credit card info into the digital keyboard of your " +"phone, you don't expect those passwords to end up on the server of the " +"company that developed that keyboard, do you? In fact, that use case was " +"the reason federated learning was invented in the first place." +msgstr "" +"**Préférence de l'utilisateur** : En plus de la réglementation, il existe" +" des cas d'utilisation où les utilisateurs s'attendent tout simplement à " +"ce qu'aucune donnée ne quitte leur appareil, jamais. Si tu tapes tes mots" +" de passe et tes informations de carte de crédit sur le clavier numérique" +" de ton téléphone, tu ne t'attends pas à ce que ces mots de passe " +"finissent sur le serveur de l'entreprise qui a développé ce clavier, n" +"'est-ce pas ? En fait, ce cas d'utilisation est la raison pour laquelle " +"l'apprentissage fédéré a été inventé en premier lieu." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:161 +#, fuzzy +msgid "" +"**Data volume**: Some sensors, like cameras, produce such a high data " +"volume that it is neither feasible nor economic to collect all the data " +"(due to, for example, bandwidth or communication efficiency). Think about" +" a national rail service with hundreds of train stations across the " +"country. If each of these train stations is outfitted with a number of " +"security cameras, the volume of raw on-device data they produce requires " +"incredibly powerful and exceedingly expensive infrastructure to process " +"and store. And most of the data isn't even useful." 
+msgstr "" +"**volume de données** : certains capteurs, comme les caméras, produisent " +"un volume de données si important qu'il n'est ni possible ni économique " +"de collecter toutes les données (en raison, par exemple, de la bande " +"passante ou de l'efficacité des communications). Pensez à un service " +"ferroviaire national comptant des centaines de gares à travers le pays. " +"Si chacune de ces gares est équipée d'un certain nombre de caméras de " +"sécurité, le volume de données brutes sur les appareils qu'elles " +"produisent nécessite une infrastructure incroyablement puissante et " +"excessivement coûteuse pour les traiter et les stocker. Et la plupart de " +"ces données ne sont même pas utiles." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:164 +msgid "Examples where centralized machine learning does not work include:" +msgstr "" +"Voici quelques exemples où l'apprentissage automatique centralisé ne " +"fonctionne pas :" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:166 +#, fuzzy +msgid "" +"Sensitive healthcare records from multiple hospitals to train cancer " +"detection models" +msgstr "" +"Des dossiers médicaux sensibles provenant de plusieurs hôpitaux pour " +"former des modèles de détection du cancer" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:167 +msgid "" +"Financial information from different organizations to detect financial " +"fraud" +msgstr "" +"Informations financières provenant de différentes organisations pour " +"détecter les fraudes financières" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:168 +msgid "Location data from your electric car to make better range prediction" +msgstr "" +"Les données de localisation de ta voiture électrique pour mieux prédire " +"l'autonomie" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:169 +msgid "End-to-end encrypted messages to train better auto-complete models" +msgstr "" +"Messages cryptés de bout en bout pour former de meilleurs modèles " +"d'autocomplétion" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:171 +#, fuzzy +msgid "" +"The popularity of privacy-enhancing systems like the `Brave " +"`__ browser or the `Signal `__ " +"messenger shows that users care about privacy. In fact, they choose the " +"privacy-enhancing version over other alternatives, if such an alternative" +" exists. But what can we do to apply machine learning and data science to" +" these cases to utilize private data? After all, these are all areas that" +" would benefit significantly from recent advances in AI." +msgstr "" +"La popularité des systèmes améliorant la confidentialité comme le " +"navigateur `Brave `__ ou le messager `Signal " +"`__ montre que les utilisateurs se soucient de la " +"confidentialité. En fait, ils choisissent la version améliorant la " +"confidentialité plutôt que d'autres alternatives, si une telle " +"alternative existe. Mais que pouvons-nous faire pour appliquer " +"l'apprentissage automatique et la science des données à ces cas afin " +"d'utiliser les données privées ? Après tout, ce sont tous des domaines " +"qui bénéficieraient de manière significative des récentes avancées en " +"matière d'IA." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:186 +msgid "Federated learning" +msgstr "Apprentissage fédéré" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:188 +msgid "" +"Federated learning simply reverses this approach. 
It enables machine " +"learning on distributed data by moving the training to the data, instead " +"of moving the data to the training. Here's the single-sentence " +"explanation:" +msgstr "" +"L'apprentissage fédéré inverse simplement cette approche. Il permet " +"l'apprentissage automatique sur des données distribuées en déplaçant la " +"formation vers les données, au lieu de déplacer les données vers la " +"formation. Voici l'explication en une seule phrase :" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:190 +msgid "Central machine learning: move the data to the computation" +msgstr "Apprentissage automatique central : déplace les données vers le calcul" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:191 +msgid "Federated (machine) learning: move the computation to the data" +msgstr "Apprentissage (machine) fédéré : déplacer le calcul vers les données" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:193 +msgid "" +"By doing so, it enables us to use machine learning (and other data " +"science approaches) in areas where it wasn't possible before. We can now " +"train excellent medical AI models by enabling different hospitals to work" +" together. We can solve financial fraud by training AI models on the data" +" of different financial institutions. We can build novel privacy-" +"enhancing applications (such as secure messaging) that have better built-" +"in AI than their non-privacy-enhancing alternatives. And those are just a" +" few of the examples that come to mind. As we deploy federated learning, " +"we discover more and more areas that can suddenly be reinvented because " +"they now have access to vast amounts of previously inaccessible data." +msgstr "" +"Ce faisant, il nous permet d'utiliser l'apprentissage automatique (et " +"d'autres approches de science des données) dans des domaines où cela " +"n'était pas possible auparavant. Nous pouvons désormais former " +"d'excellents modèles d'IA médicale en permettant à différents hôpitaux de" +" travailler ensemble. Nous pouvons résoudre les fraudes financières en " +"formant des modèles d'IA sur les données de différentes institutions " +"financières. Nous pouvons créer de nouvelles applications d'amélioration " +"de la confidentialité (telles que la messagerie sécurisée) qui ont une " +"meilleure IA intégrée que leurs alternatives d'amélioration de la " +"confidentialité. Et ce ne sont là que quelques exemples qui me viennent à" +" l'esprit. Au fur et à mesure que nous déployons l'apprentissage fédéré, " +"nous découvrons de plus en plus de domaines qui peuvent soudainement être" +" réinventés parce qu'ils ont maintenant accès à de vastes quantités de " +"données auparavant inaccessibles." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:196 +msgid "" +"So how does federated learning work, exactly? Let's start with an " +"intuitive explanation." +msgstr "" +"Comment fonctionne l'apprentissage fédéré ? Commençons par une " +"explication intuitive." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:199 +msgid "Federated learning in five steps" +msgstr "L'apprentissage fédéré en cinq étapes" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:202 +msgid "Step 0: Initialize global model" +msgstr "Étape 0 : Initialisation du modèle global" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:204 +msgid "" +"We start by initializing the model on the server. 
This is exactly the " +"same in classic centralized learning: we initialize the model parameters," +" either randomly or from a previously saved checkpoint." +msgstr "" +"Nous commençons par initialiser le modèle sur le serveur. C'est " +"exactement la même chose dans l'apprentissage centralisé classique : nous" +" initialisons les paramètres du modèle, soit de façon aléatoire, soit à " +"partir d'un point de contrôle précédemment sauvegardé." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:210 +msgid "|0aa5aa05810b44b6a835cecce28f3137|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:307 +msgid "Initialize global model" +msgstr "Initialise le modèle global" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:217 +msgid "" +"Step 1: Send model to a number of connected organizations/devices (client" +" nodes)" +msgstr "" +"Étape 1 : envoyer le modèle à un certain nombre d'organisations/appareils" +" connectés (nœuds clients)" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:219 +#, fuzzy +msgid "" +"Next, we send the parameters of the global model to the connected client " +"nodes (think: edge devices like smartphones or servers belonging to " +"organizations). This is to ensure that each participating node starts " +"their local training using the same model parameters. We often use only a" +" few of the connected nodes instead of all nodes. The reason for this is " +"that selecting more and more client nodes has diminishing returns." +msgstr "" +"Ensuite, nous envoyons les paramètres du modèle global aux nœuds clients " +"connectés (par exemple, les appareils périphériques comme les smartphones" +" ou les serveurs appartenant à des organisations). Cela permet de " +"s'assurer que chaque nœud participant commence sa formation locale en " +"utilisant les mêmes paramètres de modèle. Nous n'utilisons souvent que " +"quelques-uns des nœuds connectés au lieu de tous les nœuds. La raison en " +"est que la sélection d'un nombre croissant de nœuds clients a des " +"rendements décroissants." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:225 +msgid "|c742940dd4bf4de09d8d0d5e8d179638|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:309 +msgid "Send global model" +msgstr "Envoyer le modèle global" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:232 +msgid "" +"Step 2: Train model locally on the data of each organization/device " +"(client node)" +msgstr "" +"Étape 2 : Entraîne le modèle localement sur les données de chaque " +"organisation/appareil (nœud client)" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:234 +msgid "" +"Now that all (selected) client nodes have the latest version of the " +"global model parameters, they start the local training. They use their " +"own local dataset to train their own local model. They don't train the " +"model until full convergence, but they only train for a little while. " +"This could be as little as one epoch on the local data, or even just a " +"few steps (mini-batches)." +msgstr "" +"Maintenant que tous les nœuds clients (sélectionnés) disposent de la " +"dernière version des paramètres du modèle global, ils commencent " +"l'entraînement local. Ils utilisent leur propre ensemble de données " +"locales pour entraîner leur propre modèle local. Ils n'entraînent pas le " +"modèle jusqu'à la convergence totale, mais ils ne s'entraînent que " +"pendant un petit moment. 
Il peut s'agir d'une seule époque sur les " +"données locales, ou même de quelques étapes (mini-batchs)." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:240 +msgid "|1f169ab4601a47e1a226f1628f4ebddb|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:311 +msgid "Train on local data" +msgstr "Forme-toi aux données locales" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:247 +msgid "Step 3: Return model updates back to the server" +msgstr "Étape 3 : Renvoyer les mises à jour du modèle au serveur" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:249 +msgid "" +"After local training, each client node has a slightly different version " +"of the model parameters they originally received. The parameters are all " +"different because each client node has different examples in its local " +"dataset. The client nodes then send those model updates back to the " +"server. The model updates they send can either be the full model " +"parameters or just the gradients that were accumulated during local " +"training." +msgstr "" +"Après l'entraînement local, chaque nœud client possède une version " +"légèrement différente des paramètres du modèle qu'il a reçus à l'origine." +" Les paramètres sont tous différents parce que chaque nœud client a des " +"exemples différents dans son ensemble de données local. Les nœuds clients" +" renvoient ensuite ces mises à jour du modèle au serveur. Les mises à " +"jour du modèle qu'ils envoient peuvent être soit les paramètres complets " +"du modèle, soit seulement les gradients qui ont été accumulés au cours de" +" l'entraînement local." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:255 +msgid "|12cfa9cde14440ecb8c8f6c1d7185bec|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:313 +msgid "Send model updates" +msgstr "Envoyer les mises à jour du modèle" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:262 +msgid "Step 4: Aggregate model updates into a new global model" +msgstr "" +"Étape 4 : Agréger les mises à jour des modèles dans un nouveau modèle " +"global" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:264 +msgid "" +"The server receives model updates from the selected client nodes. If it " +"selected 100 client nodes, it now has 100 slightly different versions of " +"the original global model, each trained on the local data of one client. " +"But didn't we want to have one model that contains the learnings from the" +" data of all 100 client nodes?" +msgstr "" +"Le serveur reçoit les mises à jour du modèle des nœuds clients " +"sélectionnés. S'il a sélectionné 100 nœuds clients, il dispose maintenant" +" de 100 versions légèrement différentes du modèle global original, " +"chacune ayant été formée sur les données locales d'un client. Mais ne " +"voulions-nous pas avoir un seul modèle qui contienne les apprentissages " +"des données de l'ensemble des 100 nœuds clients ?" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:266 +msgid "" +"In order to get one single model, we have to combine all the model " +"updates we received from the client nodes. This process is called " +"*aggregation*, and there are many different ways to do it. The most basic" +" way to do it is called *Federated Averaging* (`McMahan et al., 2016 " +"`__), often abbreviated as *FedAvg*. " +"*FedAvg* takes the 100 model updates and, as the name suggests, averages " +"them. 
To be more precise, it takes the *weighted average* of the model "
+"updates, weighted by the number of examples each client used for "
+"training. The weighting is important to make sure that each data example "
+"has the same \"influence\" on the resulting global model. If one client "
+"has 10 examples, and another client has 100 examples, then - without "
+"weighting - each of the 10 examples would influence the global model ten "
+"times as much as each of the 100 examples."
+msgstr ""
+"Pour obtenir un seul modèle, nous devons combiner toutes les mises à "
+"jour de modèle que nous avons reçues des nœuds clients. Ce processus est "
+"appelé *agrégation*, et il existe de nombreuses façons de le faire. La "
+"façon la plus simple est appelée *Federated Averaging* (`McMahan et al., 2016 "
+"`__), que l'on abrège souvent en *FedAvg*. *FedAvg* prend les 100 mises "
+"à jour du modèle et, comme son nom l'indique, en fait la moyenne. Plus "
+"précisément, il calcule la *moyenne pondérée* des mises à jour du modèle, "
+"pondérée par le nombre d'exemples que chaque client a utilisés pour "
+"l'entraînement. La pondération est importante pour s'assurer que chaque "
+"exemple de données a la même \"influence\" sur le modèle global "
+"résultant. Si un client a 10 exemples et qu'un autre client en a 100, "
+"alors - sans pondération - chacun des 10 exemples influencerait le "
+"modèle global dix fois plus que chacun des 100 exemples."
+
+#: ../../source/tutorial-series-what-is-federated-learning.ipynb:273
+msgid "|72939caf6e294b0986fee6dde96614d7|"
+msgstr ""
+
+#: ../../source/tutorial-series-what-is-federated-learning.ipynb:315
+msgid "Aggregate model updates"
+msgstr "Agréger les mises à jour du modèle"
+
+#: ../../source/tutorial-series-what-is-federated-learning.ipynb:280
+msgid "Step 5: Repeat steps 1 to 4 until the model converges"
+msgstr "Étape 5 : répète les étapes 1 à 4 jusqu'à ce que le modèle converge"
+
+#: ../../source/tutorial-series-what-is-federated-learning.ipynb:282
+msgid ""
+"Steps 1 to 4 are what we call a single round of federated learning. The "
+"global model parameters get sent to the participating client nodes (step "
+"1), the client nodes train on their local data (step 2), they send their "
+"updated models to the server (step 3), and the server then aggregates the"
+" model updates to get a new version of the global model (step 4)."
+msgstr ""
+"Les étapes 1 à 4 constituent ce que nous appelons un cycle unique "
+"d'apprentissage fédéré. Les paramètres du modèle global sont envoyés aux "
+"nœuds clients participants (étape 1), les nœuds clients s'entraînent sur "
+"leurs données locales (étape 2), ils envoient leurs modèles mis à jour au"
+" serveur (étape 3), et le serveur agrège ensuite les mises à jour du "
+"modèle pour obtenir une nouvelle version du modèle global (étape 4)."
+
+#: ../../source/tutorial-series-what-is-federated-learning.ipynb:284
+#, fuzzy
+msgid ""
+"During a single round, each client node that participates in that "
+"iteration only trains for a little while. This means that after the "
+"aggregation step (step 4), we have a model that has been trained on all "
+"the data of all participating client nodes, but only for a little while. "
+"We then have to repeat this training process over and over again to "
+"eventually arrive at a fully trained model that performs well across the "
+"data of all client nodes."
+msgstr ""
+"Au cours d'un seul tour, chaque nœud client qui participe à cette "
+"itération ne s'entraîne que pendant un petit moment. 
Cela signifie " +"qu'après l'étape d'agrégation (étape 4), nous avons un modèle qui a été " +"entraîné sur toutes les données de tous les nœuds clients participants, " +"mais seulement pendant un petit moment. Nous devons ensuite répéter ce " +"processus d'entraînement encore et encore pour finalement arriver à un " +"modèle entièrement entraîné qui fonctionne bien sur l'ensemble des " +"données de tous les nœuds clients." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:289 +msgid "" +"Congratulations, you now understand the basics of federated learning. " +"There's a lot more to discuss, of course, but that was federated learning" +" in a nutshell. In later parts of this tutorial, we will go into more " +"detail. Interesting questions include: How can we select the best client " +"nodes that should participate in the next round? What's the best way to " +"aggregate model updates? How can we handle failing client nodes " +"(stragglers)?" +msgstr "" +"Félicitations, tu comprends maintenant les bases de l'apprentissage " +"fédéré. Il y a bien sûr beaucoup plus à discuter, mais c'était " +"l'apprentissage fédéré en quelques mots. Dans les parties suivantes de ce" +" tutoriel, nous irons plus en détail. Les questions intéressantes " +"comprennent : comment pouvons-nous sélectionner les meilleurs nœuds " +"clients qui devraient participer au prochain tour ? Quelle est la " +"meilleure façon d'agréger les mises à jour du modèle ? Comment pouvons-" +"nous gérer les nœuds clients qui échouent (stragglers) ?" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:294 +#, fuzzy +msgid "" +"Just like we can train a model on the decentralized data of different " +"client nodes, we can also evaluate the model on that data to receive " +"valuable metrics. This is called federated evaluation, sometimes " +"abbreviated as FE. In fact, federated evaluation is an integral part of " +"most federated learning systems." +msgstr "" +"Tout comme nous pouvons former un modèle sur les données décentralisées " +"de différents nœuds clients, nous pouvons également évaluer le modèle sur" +" ces données pour recevoir des mesures précieuses. C'est ce qu'on appelle" +" l'évaluation fédérée, parfois abrégée en FE. En fait, l'évaluation " +"fédérée fait partie intégrante de la plupart des systèmes d'apprentissage" +" fédéré." + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:297 +msgid "Federated analytics" +msgstr "Analyses fédérées" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:299 +msgid "" +"In many cases, machine learning isn't necessary to derive value from " +"data. Data analysis can yield valuable insights, but again, there's often" +" not enough data to get a clear answer. What's the average age at which " +"people develop a certain type of health condition? Federated analytics " +"enables such queries over multiple client nodes. It is usually used in " +"conjunction with other privacy-enhancing technologies like secure " +"aggregation to prevent the server from seeing the results submitted by " +"individual client nodes." +msgstr "" +"Dans de nombreux cas, l'apprentissage automatique n'est pas nécessaire " +"pour tirer de la valeur des données. L'analyse des données peut donner " +"des indications précieuses, mais là encore, il n'y a souvent pas assez de" +" données pour obtenir une réponse claire. Quel est l'âge moyen auquel les" +" gens développent un certain type de problème de santé ? 
L'analyse "
+"fédérée permet de telles requêtes sur plusieurs nœuds clients. Elle est "
+"généralement utilisée en conjonction avec d'autres technologies de "
+"renforcement de la confidentialité, comme l'agrégation sécurisée, pour "
+"empêcher le serveur de voir les résultats soumis par les nœuds clients "
+"individuels."
+
+#: ../../source/tutorial-series-what-is-federated-learning.ipynb:305
+msgid ""
+"Differential privacy (DP) is often mentioned in the context of Federated "
+"Learning. It is a privacy-preserving method used when analyzing and "
+"sharing statistical data, ensuring the privacy of individual "
+"participants. DP achieves this by adding statistical noise to the model "
+"updates, ensuring any individual participants’ information cannot be "
+"distinguished or re-identified. This technique can be considered an "
+"optimization that provides a quantifiable privacy protection measure."
+msgstr ""
+"La confidentialité différentielle (DP) est souvent mentionnée dans le "
+"contexte de l'apprentissage fédéré. Il s'agit d'une méthode de "
+"préservation de la confidentialité utilisée lors de l'analyse et du "
+"partage de données statistiques, qui garantit la confidentialité des "
+"participants individuels. La DP y parvient en ajoutant un bruit "
+"statistique aux mises à jour du modèle, ce qui garantit que les "
+"informations d'un participant individuel ne peuvent être ni distinguées "
+"ni ré-identifiées. Cette technique peut être considérée comme une "
+"optimisation offrant une mesure quantifiable de protection de la "
+"confidentialité."
+
+#: ../../source/tutorial-series-what-is-federated-learning.ipynb:326
+msgid "Flower"
+msgstr "Flower"
+
+#: ../../source/tutorial-series-what-is-federated-learning.ipynb:328
+msgid ""
+"Federated learning, federated evaluation, and federated analytics require"
+" infrastructure to move machine learning models back and forth, train and"
+" evaluate them on local data, and then aggregate the updated models. "
+"Flower provides the infrastructure to do exactly that in an easy, "
+"scalable, and secure way. In short, Flower presents a unified approach to"
+" federated learning, analytics, and evaluation. It allows the user to "
+"federate any workload, any ML framework, and any programming language."
+msgstr ""
+"L'apprentissage fédéré, l'évaluation fédérée et l'analyse fédérée "
+"nécessitent une infrastructure pour déplacer les modèles d'apprentissage "
+"automatique dans les deux sens, les entraîner et les évaluer sur des "
+"données locales, puis agréger les modèles mis à jour. Flower fournit "
+"l'infrastructure pour faire exactement cela de manière simple, évolutive "
+"et sécurisée. En bref, Flower présente une approche unifiée de "
+"l'apprentissage, de l'analyse et de l'évaluation fédérés. Il permet à "
+"l'utilisateur de fédérer n'importe quelle charge de travail, n'importe "
+"quel cadre de ML et n'importe quel langage de programmation."
+
+#: ../../source/tutorial-series-what-is-federated-learning.ipynb:334
+msgid "|83a8daee45da4a98b8d6f24ae098fc50|"
+msgstr ""
+
+#: ../../source/tutorial-series-what-is-federated-learning.ipynb:340
+msgid ""
+"Flower federated learning server and client nodes (car, scooter, personal"
+" computer, roomba, and phone)"
+msgstr ""
+"Serveur d'apprentissage fédéré de Flower et nœuds clients (voiture, "
+"scooter, ordinateur personnel, roomba et téléphone)"
+
+#: ../../source/tutorial-series-what-is-federated-learning.ipynb:353
+msgid ""
+"Congratulations, you just learned the basics of federated learning and "
+"how it relates to the classic (centralized) machine learning!"
+msgstr ""
+"Félicitations, tu viens d'apprendre les bases de l'apprentissage fédéré "
+"et son rapport avec l'apprentissage automatique classique (centralisé) !"
+
+#: ../../source/tutorial-series-what-is-federated-learning.ipynb:355
+msgid ""
+"In the next part of this tutorial, we are going to build a first "
+"federated learning system with Flower."
+msgstr ""
+"Dans la prochaine partie de ce tutoriel, nous allons construire un "
+"premier système d'apprentissage fédéré avec Flower."
+ +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:373 +#, fuzzy +msgid "" +"The `Flower Federated Learning Tutorial - Part 1 " +"`__ shows how to build a simple federated learning system " +"with PyTorch and Flower." +msgstr "" +"Le `Tutoriel d'apprentissage fédéré Flower - Partie 1 " +"`__ " +"montre comment construire un système d'apprentissage fédéré simple avec " +"PyTorch et Flower." + +#~ msgid "Flower CLI commands" +#~ msgstr "Commandes CLI Flower" + +#~ msgid "Contributor guide" +#~ msgstr "Guide pour les contributeurs" + +#~ msgid "API Reference - Flower CLI commands" +#~ msgstr "Référence API - Commandes CLI pour Flower" + +#~ msgid "API Reference - flwr (Python package)" +#~ msgstr "Référence API - flwr (paquetage Python)" + +#~ msgid "Flower client." +#~ msgstr "Client de Flower" + +#~ msgid "Abstract base class for Flower clients." +#~ msgstr "" + +#~ msgid "Evaluate the provided parameters using the locally held dataset." +#~ msgstr "évaluer le modèle mis à jour sur l'ensemble de test local" + +#~ msgid "Parameters" +#~ msgstr "Paramètres du modèle." + +#~ msgid "" +#~ "The evaluation instructions containing " +#~ "(global) model parameters received from " +#~ "the server and a dictionary of " +#~ "configuration values used to customize " +#~ "the local evaluation process." +#~ msgstr "" + +#~ msgid "Returns" +#~ msgstr "Ressources" + +#~ msgid "" +#~ "The evaluation result containing the " +#~ "loss on the local dataset and " +#~ "other details such as the number " +#~ "of local data examples used for " +#~ "evaluation." +#~ msgstr "" + +#~ msgid "Return type" +#~ msgstr "" + +#~ msgid "Refine the provided parameters using the locally held dataset." +#~ msgstr "" + +#~ msgid "" +#~ "The training instructions containing (global)" +#~ " model parameters received from the " +#~ "server and a dictionary of configuration" +#~ " values used to customize the local" +#~ " training process." +#~ msgstr "" + +#~ msgid "" +#~ "The training result containing updated " +#~ "parameters and other details such as " +#~ "the number of local training examples" +#~ " used for training." +#~ msgstr "" + +#~ msgid "Return the current local model parameters." +#~ msgstr "``get_parameters`` : renvoie les paramètres du modèle local actuel" + +#~ msgid "" +#~ "The get parameters instructions received " +#~ "from the server containing a dictionary" +#~ " of configuration values." +#~ msgstr "" + +#~ msgid "The current local model parameters." +#~ msgstr "``get_parameters`` : renvoie les paramètres du modèle local actuel" + +#~ msgid "Return set of client's properties." +#~ msgstr "" + +#~ msgid "" +#~ "The get properties instructions received " +#~ "from the server containing a dictionary" +#~ " of configuration values." +#~ msgstr "" + +#~ msgid "The current client properties." +#~ msgstr "" + +#~ msgid "Start a Flower client node which connects to a Flower server." +#~ msgstr "" + +#~ msgid "" +#~ "The IPv4 or IPv6 address of the" +#~ " server. If the Flower server runs" +#~ " on the same machine on port " +#~ "8080, then `server_address` would be " +#~ "`\"[::]:8080\"`." +#~ msgstr "" + +#~ msgid "An implementation of the abstract base class `flwr.client.Client`." +#~ msgstr "" + +#~ msgid "" +#~ "The maximum length of gRPC messages " +#~ "that can be exchanged with the " +#~ "Flower server. The default should be " +#~ "sufficient for most models. Users who" +#~ " train very large models might need" +#~ " to increase this value. 
Note that" +#~ " the Flower server needs to be " +#~ "started with the same value (see " +#~ "`flwr.server.start_server`), otherwise it will " +#~ "not know about the increased limit " +#~ "and block larger messages." +#~ msgstr "" + +#~ msgid "" +#~ "The PEM-encoded root certificates as " +#~ "a byte string or a path string." +#~ " If provided, a secure connection " +#~ "using the certificates will be " +#~ "established to an SSL-enabled Flower " +#~ "server." +#~ msgstr "" + +#~ msgid "" +#~ "DEPRECATED - USE 'transport' INSTEAD. " +#~ "Defines whether or not the client " +#~ "is interacting with the server using " +#~ "the experimental REST API. This feature" +#~ " is experimental, it might change " +#~ "considerably in future versions of " +#~ "Flower." +#~ msgstr "" +#~ "DÉPRÉCIÉ - UTILISER 'transport' À LA " +#~ "PLACE Définit si le client interagit " +#~ "ou non avec le serveur à l'aide" +#~ " de l'API REST expérimentale. Cette " +#~ "fonctionnalité est expérimentale, elle " +#~ "pourrait changer considérablement dans les " +#~ "futures versions de Flower." + +#~ msgid "" +#~ "Configure the transport layer. Allowed " +#~ "values: - 'grpc-bidi': gRPC, " +#~ "bidirectional streaming - 'grpc-rere': " +#~ "gRPC, request-response (experimental) - " +#~ "'rest': HTTP (experimental)" +#~ msgstr "" +#~ "Valeurs autorisées : - 'grpc-bidi' " +#~ ": gRPC, flux bidirectionnel - 'grpc-" +#~ "rere' : gRPC, requête-réponse " +#~ "(expérimental) - 'rest' : HTTP " +#~ "(expérimental)" + +#~ msgid "Starting a gRPC client with an insecure server connection:" +#~ msgstr "" + +#~ msgid "Starting an SSL-enabled gRPC client:" +#~ msgstr "" + +#~ msgid "Abstract base class for Flower clients using NumPy." +#~ msgstr "" + +#~ msgid "The current (global) model parameters." +#~ msgstr "``get_parameters`` : renvoie les paramètres du modèle local actuel" + +#~ msgid "" +#~ "Configuration parameters which allow the " +#~ "server to influence evaluation on the" +#~ " client. It can be used to " +#~ "communicate arbitrary values from the " +#~ "server to the client, for example, " +#~ "to influence the number of examples " +#~ "used for evaluation." +#~ msgstr "" + +#~ msgid "" +#~ "* **loss** (*float*) -- The evaluation" +#~ " loss of the model on the local" +#~ " dataset. * **num_examples** (*int*) -- " +#~ "The number of examples used for " +#~ "evaluation. * **metrics** (*Dict[str, " +#~ "Scalar]*) -- A dictionary mapping " +#~ "arbitrary string keys to values of " +#~ "type bool, bytes, float, int, or " +#~ "str. It can be used to " +#~ "communicate arbitrary values back to the" +#~ " server." +#~ msgstr "" + +#~ msgid "" +#~ "**loss** (*float*) -- The evaluation " +#~ "loss of the model on the local " +#~ "dataset." +#~ msgstr "" + +#~ msgid "**num_examples** (*int*) -- The number of examples used for evaluation." +#~ msgstr "" #~ msgid "" #~ "**metrics** (*Dict[str, Scalar]*) -- A " @@ -24953,7 +26231,7 @@ msgstr "" #~ msgstr "Contribuer sur GitHub" #~ msgid "How to write a good PR title" -#~ msgstr "Comment rédiger un bon titre de RP" +#~ msgstr "Comment écrire un bon titre de PR" #~ msgid "" #~ "A well-crafted PR title helps team" @@ -24962,12 +26240,12 @@ msgstr "" #~ "proposed. Here's a guide to help " #~ "you write a good GitHub PR title:" #~ msgstr "" -#~ "Un titre de RP bien rédigé aide" -#~ " les membres de l'équipe à comprendre" -#~ " rapidement l'objectif et la portée " -#~ "des changements proposés. 
Voici un guide" -#~ " pour t'aider à rédiger un bon " -#~ "titre de RP GitHub :" +#~ "Un titre de PR bien choisi permet" +#~ " aux autres développeurs de rapidement " +#~ "comprendre l'intérêt et le scope des " +#~ "changements proposés. Voici un guide " +#~ "pour vous aider à écrire des bons" +#~ " titres de PR :" #~ msgid "" #~ "1. Be Clear and Concise: Provide a" @@ -24983,22 +26261,20 @@ msgstr "" #~ " and Punctuation: Follow grammar rules " #~ "for clarity." #~ msgstr "" -#~ "1. Être clair et concis : fournir" +#~ "1. Soyez clair et concis : Donnez" #~ " un résumé clair des changements de" -#~ " manière concise. 1. Utiliser des " -#~ "verbes d'action : commencer par des " -#~ "verbes comme \" Ajouter \", \" " -#~ "Mettre à jour \" ou \" Réparer " -#~ "\" pour indiquer l'objectif. 1. Inclure" -#~ " des informations pertinentes : mentionner" -#~ " la fonctionnalité ou le module " -#~ "affecté pour le contexte. 1. Faire " -#~ "court : éviter les titres trop " -#~ "longs pour faciliter la lecture. 1. " -#~ "Utiliser des majuscules et une " -#~ "ponctuation correctes : suivre les " -#~ "règles de grammaire pour plus de " -#~ "clarté." +#~ " manière concise. 1. Utilisez des " +#~ "verbes actionnables : Commencez par des" +#~ " verbes comme \"Add\", \"Update\", ou " +#~ "\"Fix\" pour indiquer le but. 1. " +#~ "Inclure des renseignements pertinents : " +#~ "Mentionner la caractéristique ou le " +#~ "module concerné pour le contexte. 1. " +#~ "Gardez le court : Évitez les longs" +#~ " titres pour une lisibilité facile. " +#~ "1. Utiliser une bonne capitalisation et" +#~ " une ponctuation : Suivre les règles" +#~ " de grammaire pour la clarté." #~ msgid "" #~ "Let's start with a few examples " @@ -25007,21 +26283,21 @@ msgstr "" #~ " information:" #~ msgstr "" #~ "Commençons par quelques exemples de " -#~ "titres à éviter car ils ne " -#~ "fournissent pas d'informations significatives " -#~ ":" +#~ "titres qui devraient être évités parce" +#~ " qu'ils ne fournissent pas d'information" +#~ " significative :" #~ msgid "Implement Algorithm" -#~ msgstr "Implémenter l'algorithme" +#~ msgstr "Implement Algorithm" #~ msgid "Database" #~ msgstr "Base de données" #~ msgid "Add my_new_file.py to codebase" -#~ msgstr "Ajoute my_new_file.py à la base de code" +#~ msgstr "Add my_new_file.py to codebase" #~ msgid "Improve code in module" -#~ msgstr "Améliore le code dans le module" +#~ msgstr "Improve code in module" #~ msgid "Change SomeModule" #~ msgstr "Change SomeModule" @@ -25033,25 +26309,21 @@ msgstr "" #~ " is already visible in the \"Files" #~ " changed\" section of the PR:" #~ msgstr "" -#~ "Voici quelques exemples positifs qui " -#~ "fournissent des informations utiles sans " -#~ "répéter comment ils procèdent, car cela" -#~ " est déjà visible dans la section " -#~ "\"Fichiers modifiés\" du RP :" +#~ "Voici quelques bons exemples qui " +#~ "fournissent de l'information utile sans " +#~ "répéter comment ils le font, comme " +#~ "cela est déjà visible dans la " +#~ "section \"Files changed\" de la PR " +#~ ":" #~ msgid "Update docs banner to mention Flower Summit 2023" -#~ msgstr "" -#~ "Mettre à jour la bannière des docs" -#~ " pour mentionner le sommet des fleurs" -#~ " 2023" +#~ msgstr "Update docs banner to mention Flower Summit 2023" #~ msgid "Remove unnecessary XGBoost dependency" -#~ msgstr "Supprime les dépendances inutiles de XGBoost" +#~ msgstr "Remove unnecessary XGBoost dependency" #~ msgid "Remove redundant attributes in strategies subclassing FedAvg" -#~ msgstr "" -#~ "Supprimer les 
attributs redondants dans " -#~ "les stratégies sous-classant FedAvg" +#~ msgstr "Remove redundant attributes in strategies subclassing FedAvg" #~ msgid "Add CI job to deploy the staging system when the `main` branch changes" #~ msgstr "" @@ -25063,10 +26335,10 @@ msgstr "" #~ "Add new amazing library which will " #~ "be used to improve the simulation " #~ "engine" -#~ msgstr "" -#~ "Ajoute une nouvelle bibliothèque étonnante " -#~ "qui sera utilisée pour améliorer le " -#~ "moteur de simulation" +#~ msgstr "" +#~ "Add new amazing library which will " +#~ "be used to improve the simulation " +#~ "engine" #~ msgid "Differential privacy" #~ msgstr "Confidentialité différentielle" @@ -26628,2027 +27900,3316 @@ msgstr "" #~ "execute the experiment." #~ msgstr "" -#~ msgid "" -#~ "Please make sure to add your " -#~ "baseline or experiment to the " -#~ "corresponding directory as explained in " -#~ "`Executing Baseline `_. Give your baseline the " -#~ "unique identifier. For example, :code:`fedbn`" -#~ " refers to the paper \"FedBN: " -#~ "Federated Learning on non-IID Features" -#~ " via Local Batch Normalization\" and " -#~ "creates the corresponding directory " -#~ ":code:`flower/baselines/flwr_baselines/publications/fedbn`. Then" -#~ " you create the experiment directory " -#~ "with the experiment name. For example," -#~ " the experiment that measures the " -#~ "convergence has the directory " -#~ ":code:`flower/baselines/flwr_baselines/publications/fedbn/convergence_rate`." -#~ " This directory contains all your " -#~ "code and a :code:`README.md` with a " -#~ "link to the paper, the paper's " -#~ "abstract, and a detailed description of" -#~ " how to execute the experiments." +#~ msgid "" +#~ "Please make sure to add your " +#~ "baseline or experiment to the " +#~ "corresponding directory as explained in " +#~ "`Executing Baseline `_. Give your baseline the " +#~ "unique identifier. For example, :code:`fedbn`" +#~ " refers to the paper \"FedBN: " +#~ "Federated Learning on non-IID Features" +#~ " via Local Batch Normalization\" and " +#~ "creates the corresponding directory " +#~ ":code:`flower/baselines/flwr_baselines/publications/fedbn`. Then" +#~ " you create the experiment directory " +#~ "with the experiment name. For example," +#~ " the experiment that measures the " +#~ "convergence has the directory " +#~ ":code:`flower/baselines/flwr_baselines/publications/fedbn/convergence_rate`." +#~ " This directory contains all your " +#~ "code and a :code:`README.md` with a " +#~ "link to the paper, the paper's " +#~ "abstract, and a detailed description of" +#~ " how to execute the experiments." +#~ msgstr "" + +#~ msgid "" +#~ "Please also check if :code:`pyproject.toml`" +#~ " and :code:`requirements.txt` (all in the" +#~ " directory `baselines " +#~ "`_ contain" +#~ " all required Python packages (libraries," +#~ " frameworks, ...). If the required " +#~ "Python package is not yet listed, " +#~ "please add it to :code:`pyproject.toml`. " +#~ "If you need a different version of" +#~ " a package already listed, please try" +#~ " to ensure your experiment runs with" +#~ " the existing version listed in " +#~ ":code:`pyproject.toml` (or :code:`requirements.txt`). " +#~ "If that doesn't work, open a " +#~ "GitHub Issue and request the version " +#~ "change." +#~ msgstr "" + +#~ msgid "" +#~ "The experiment also needs to contain " +#~ "a file with a downloader for the" +#~ " dataset - if possible automatic. " +#~ "This can be included in one of " +#~ "the files or as an extra file." 
+#~ msgstr "" + +#~ msgid "" +#~ "Finally, please add plots for all " +#~ "experimental results your code is " +#~ "running to the :code:`experiment` directory" +#~ " and include them in :code:`README.md`. " +#~ "Doing this helps others and enables " +#~ "them to recognize your contributions " +#~ "quickly." +#~ msgstr "" + +#~ msgid "" +#~ "We are aware that a few libraries" +#~ " are available only via Conda. " +#~ "However, we want to encourage you " +#~ "to ensure that your code also runs" +#~ " well outside of Conda to make " +#~ "it more accessible to the broader " +#~ "research community." +#~ msgstr "" + +#~ msgid "Here is a checklist for adding a new baseline:" +#~ msgstr "" + +#~ msgid "" +#~ "add required Python packages to " +#~ ":code:`pyproject.toml` or :code:`requirements.txt`" +#~ msgstr "" + +#~ msgid "" +#~ "add all required code under " +#~ ":code:`baselines/flwr_baselines/publications/[new_publication]`" +#~ msgstr "" + +#~ msgid "add a dataset downloader" +#~ msgstr "" + +#~ msgid "add an experiment plot" +#~ msgstr "" + +#~ msgid "add a :code:`README.md`" +#~ msgstr "" + +#~ msgid "Usability" +#~ msgstr "" + +#~ msgid "" +#~ "Flower is known and loved for its" +#~ " usability. Therefore, make sure that " +#~ "your baseline or experiment can be " +#~ "executed with a single command such " +#~ "as :code:`./run.sh` or :code:`python3 " +#~ "main.py`. How you organize the " +#~ "experiments and the related code " +#~ "structure is up to you as an " +#~ "author, but please keep in mind to" +#~ " make sure that other users can " +#~ "easily understand and execute your " +#~ "baseline." +#~ msgstr "" + +#~ msgid "We look forward to your contribution!" +#~ msgstr "Exemple de première contribution" + +#~ msgid "flwr" +#~ msgstr "Fleur" + +#~ msgid "binaries" +#~ msgstr "" + +#~ msgid "Flower Baselines" +#~ msgstr "Demande pour une nouvelle Flower Baseline" + +#~ msgid "" +#~ "Flower Baselines are a collection of " +#~ "organised scripts used to reproduce " +#~ "results from well-known publications or" +#~ " benchmarks. You can check which " +#~ "baselines already exist and/or contribute " +#~ "your own baseline." +#~ msgstr "" + +#~ msgid "Flower requires `Python 3.7 `_ or above." +#~ msgstr "`Python 3.7 `_ ou plus" + +#~ msgid "|9e234df38403464899ad3aee36bf1b95|" +#~ msgstr "" + +#~ msgid "|081158351506446f9f772cb45ee68523|" +#~ msgstr "" + +#~ msgid "|e9325042b79c45ed96b5a8d2f6f3cdc9|" +#~ msgstr "" + +#~ msgid "|11b83bb107344db78a37266e080c4a7a|" +#~ msgstr "" + +#~ msgid "|cd764bcf6d174a9cb62880ace9a8a6bd|" +#~ msgstr "" + +#~ msgid "|5c520984cced41e38f6bb4af416c3f84|" +#~ msgstr "" + +#~ msgid "|66941b0608644cf1a2269a194d3bc0dd|" +#~ msgstr "" + +#~ msgid "|4b149f3a095b402bb8890275aabc9298|" +#~ msgstr "" + +#~ msgid "|675cf7d3d53a4817b5d47529c0758158|" +#~ msgstr "" + +#~ msgid "|7ca594e16ae7477790c2e3cf096ec7cd|" +#~ msgstr "" + +#~ msgid "|d669336577b545a081d5d74169a9bc4d|" +#~ msgstr "" + +#~ msgid "|00b3d6cde1ff410ba54eff58da4e033a|" +#~ msgstr "" + +#~ msgid "|29a11f5353084c1995c538f7edef71a5|" +#~ msgstr "" + +#~ msgid "|d62eda312fd44726bb5db2b761fe7e0d|" +#~ msgstr "" + +#~ msgid "Using Baselines" +#~ msgstr "" + +#~ msgid "Structure" +#~ msgstr "" + +#~ msgid "" +#~ "All baselines are available in the " +#~ "directory `baselines " +#~ "`_. This " +#~ "directory has two different files:" +#~ msgstr "" + +#~ msgid "" +#~ "Both files contain all the information" +#~ " about required Python packages (libraries," +#~ " frameworks, ...) and their versions. 
" +#~ "You can install each library separately" +#~ " by using :code: `pip install` or " +#~ "you can use Poetry and run " +#~ "code:`poetry install` in the directory " +#~ "where you find the :code:`pyproject.toml` " +#~ "file. After installing all requirements, " +#~ "you can start to run your " +#~ "baseline." +#~ msgstr "" + +#~ msgid "" +#~ "Go to the baseline that you want" +#~ " to execute. The directories and " +#~ "files are structured so that you " +#~ "can first find the paper with " +#~ "their unique identifier such that, for" +#~ " example, :code:`FedProx` refers to the " +#~ "paper \"Federated Optimization in " +#~ "Heterogeneous Networks\". The :code:`fedprox` " +#~ "section contains all available experiments " +#~ "from that paper." +#~ msgstr "" + +#~ msgid "" +#~ "The experiment area contains a " +#~ ":code:`README.md` covering the corresponding " +#~ "paper, its abstract, and goal as " +#~ "well as a detailed description of " +#~ "how to run the baseline. Please " +#~ "use the :code:`README.md` to see how " +#~ "to execute each individual baseline." +#~ msgstr "" + +#~ msgid "Available Baselines" +#~ msgstr "" + +#~ msgid "" +#~ "The following table lists all currently" +#~ " available baselines and the corresponding" +#~ " papers. If you want to add a" +#~ " new baseline or experiment, please " +#~ "check the `Contributing Baselines " +#~ "`_ section." +#~ msgstr "" + +#~ msgid "Paper" +#~ msgstr "" + +#~ msgid "Experiment" +#~ msgstr "" + +#~ msgid "Directory" +#~ msgstr "" + +#~ msgid "`FedAvg `_" +#~ msgstr "" + +#~ msgid "MNIST" +#~ msgstr "" + +#~ msgid ":code:`flower/baselines/flwr_baselines/publications/fedavg_mnist/`" #~ msgstr "" -#~ msgid "" -#~ "Please also check if :code:`pyproject.toml`" -#~ " and :code:`requirements.txt` (all in the" -#~ " directory `baselines " -#~ "`_ contain" -#~ " all required Python packages (libraries," -#~ " frameworks, ...). If the required " -#~ "Python package is not yet listed, " -#~ "please add it to :code:`pyproject.toml`. " -#~ "If you need a different version of" -#~ " a package already listed, please try" -#~ " to ensure your experiment runs with" -#~ " the existing version listed in " -#~ ":code:`pyproject.toml` (or :code:`requirements.txt`). " -#~ "If that doesn't work, open a " -#~ "GitHub Issue and request the version " -#~ "change." +#~ msgid "`FedProx `_" #~ msgstr "" -#~ msgid "" -#~ "The experiment also needs to contain " -#~ "a file with a downloader for the" -#~ " dataset - if possible automatic. " -#~ "This can be included in one of " -#~ "the files or as an extra file." +#~ msgid ":code:`flower/baselines/flwr_baselines/publications/fedprox_mnist/`" #~ msgstr "" -#~ msgid "" -#~ "Finally, please add plots for all " -#~ "experimental results your code is " -#~ "running to the :code:`experiment` directory" -#~ " and include them in :code:`README.md`. " -#~ "Doing this helps others and enables " -#~ "them to recognize your contributions " -#~ "quickly." +#~ msgid "`FedOpt `_" #~ msgstr "" -#~ msgid "" -#~ "We are aware that a few libraries" -#~ " are available only via Conda. " -#~ "However, we want to encourage you " -#~ "to ensure that your code also runs" -#~ " well outside of Conda to make " -#~ "it more accessible to the broader " -#~ "research community." 
+#~ msgid "sparse gradient task" #~ msgstr "" -#~ msgid "Here is a checklist for adding a new baseline:" +#~ msgid ":code:`flower/baselines/flwr_baselines/publications/adaptive_federated_optimization`" #~ msgstr "" -#~ msgid "" -#~ "add required Python packages to " -#~ ":code:`pyproject.toml` or :code:`requirements.txt`" +#~ msgid "`FedBN `_" #~ msgstr "" -#~ msgid "" -#~ "add all required code under " -#~ ":code:`baselines/flwr_baselines/publications/[new_publication]`" +#~ msgid "convergence rate" #~ msgstr "" -#~ msgid "add a dataset downloader" +#~ msgid ":code:`flower/baselines/flwr_baselines/publications/fedbn/convergence_rate`" #~ msgstr "" -#~ msgid "add an experiment plot" +#~ msgid "" +#~ "Flower requires `Python 3.7 " +#~ "`_ or above, we " +#~ "recommend `Python 3.8 " +#~ "`_." #~ msgstr "" +#~ "Flower nécessite `Python 3.7 " +#~ "`_ ou plus, nous " +#~ "recommandons `Python 3.8 " +#~ "`_." -#~ msgid "add a :code:`README.md`" +#~ msgid "|6baade94cd14454e82ead34fcc29a182|" #~ msgstr "" -#~ msgid "Usability" +#~ msgid "|1209ecd819104c458d396cf665c7ed4f|" #~ msgstr "" -#~ msgid "" -#~ "Flower is known and loved for its" -#~ " usability. Therefore, make sure that " -#~ "your baseline or experiment can be " -#~ "executed with a single command such " -#~ "as :code:`./run.sh` or :code:`python3 " -#~ "main.py`. How you organize the " -#~ "experiments and the related code " -#~ "structure is up to you as an " -#~ "author, but please keep in mind to" -#~ " make sure that other users can " -#~ "easily understand and execute your " -#~ "baseline." +#~ msgid "|c088b02349304344a53f3ce1464225fb|" #~ msgstr "" -#~ msgid "We look forward to your contribution!" -#~ msgstr "Exemple de première contribution" +#~ msgid "|b54d50afc82a4a57a55997a9eaeb735b|" +#~ msgstr "" -#~ msgid "flwr" -#~ msgstr "Fleur" +#~ msgid "|d17b57e97b714a25b43790d4b832fd87|" +#~ msgstr "" -#~ msgid "binaries" +#~ msgid "|38966d05301a4854aa73c8c5033bfaab|" #~ msgstr "" -#~ msgid "Flower Baselines" -#~ msgstr "Demande pour une nouvelle Flower Baseline" +#~ msgid "|231d55f7926d4a5db02dcd724ec62529|" +#~ msgstr "" -#~ msgid "" -#~ "Flower Baselines are a collection of " -#~ "organised scripts used to reproduce " -#~ "results from well-known publications or" -#~ " benchmarks. You can check which " -#~ "baselines already exist and/or contribute " -#~ "your own baseline." +#~ msgid "|fb44f2e13a1b4b69b7a72234eedd13f4|" #~ msgstr "" -#~ msgid "Flower requires `Python 3.7 `_ or above." -#~ msgstr "`Python 3.7 `_ ou plus" +#~ msgid "|1cfc77af5d164030942e84d14268c256|" +#~ msgstr "" -#~ msgid "|9e234df38403464899ad3aee36bf1b95|" +#~ msgid "|0d50828231a64bc08223544a2d2fa216|" #~ msgstr "" -#~ msgid "|081158351506446f9f772cb45ee68523|" +#~ msgid "|904387757ceb42fbaa1875f3e8061113|" #~ msgstr "" -#~ msgid "|e9325042b79c45ed96b5a8d2f6f3cdc9|" +#~ msgid "|68608e1b7c4842458c528b431c715f5a|" #~ msgstr "" -#~ msgid "|11b83bb107344db78a37266e080c4a7a|" +#~ msgid "|2adb106bda97480bb4b33eac472e321e|" #~ msgstr "" -#~ msgid "|cd764bcf6d174a9cb62880ace9a8a6bd|" +#~ msgid "|025f0a6f7a6145cba4bf8fa0e2495851|" #~ msgstr "" -#~ msgid "|5c520984cced41e38f6bb4af416c3f84|" +#~ msgid "Before the release" +#~ msgstr "Avant la sortie" + +#~ msgid "" +#~ "Update the changelog (``changelog.md``) with" +#~ " all relevant changes that happened " +#~ "after the last release. 
If the " +#~ "last release was tagged ``v1.2.0``, you" +#~ " can use the following URL to " +#~ "see all commits that got merged " +#~ "into ``main`` since then:" #~ msgstr "" +#~ "Mettez à jour le journal des " +#~ "modifications (``changelog.md``) avec tous les" +#~ " changements pertinents qui se sont " +#~ "produits après la dernière version. Si" +#~ " la dernière version a été étiquetée" +#~ " ``v1.2.0``, vous pouvez utiliser l'URL " +#~ "suivante pour voir tous les commits " +#~ "qui ont été fusionnés dans ``main`` " +#~ "depuis lors :" -#~ msgid "|66941b0608644cf1a2269a194d3bc0dd|" +#~ msgid "" +#~ "`GitHub: Compare v1.2.0...main " +#~ "`_" #~ msgstr "" +#~ "`GitHub : Compare v1.2.0...main " +#~ "`_" -#~ msgid "|4b149f3a095b402bb8890275aabc9298|" +#~ msgid "" +#~ "Thank the authors who contributed since" +#~ " the last release. This command helps" +#~ " extract them: ``git log --format='%aN' " +#~ "v1.1.0..HEAD | sort -u``. The command" +#~ " has the same order as ``git " +#~ "shortlog``." #~ msgstr "" +#~ "Remerciez les auteurs qui ont contribué" +#~ " depuis la dernière version. Cette " +#~ "commande permet de les extraire : " +#~ "``git log --format='%aN' v1.1.0..HEAD | " +#~ "sort -u``. La commande a le même" +#~ " ordre que ``git shortlog``." -#~ msgid "|675cf7d3d53a4817b5d47529c0758158|" +#~ msgid "" +#~ "Update the ``changelog.md`` section header " +#~ "``Unreleased`` to contain the version " +#~ "number and date for the release " +#~ "you are building. Create a pull " +#~ "request with the change." #~ msgstr "" +#~ "Mettez à jour l'en-tête de section" +#~ " ``changelog.md`` ``Unreleased`` pour qu'il " +#~ "contienne le numéro de version et " +#~ "la date de la version que vous " +#~ "construisez. Créez une demande de " +#~ "traction avec le changement." -#~ msgid "|7ca594e16ae7477790c2e3cf096ec7cd|" +#~ msgid "" +#~ "Tag the release commit with the " +#~ "version number as soon as the PR" +#~ " is merged: ``git tag v0.12.3``, then" +#~ " ``git push --tags``" #~ msgstr "" +#~ "Marquez le commit de la version " +#~ "avec le numéro de version dès que" +#~ " le PR est fusionné : ``git tag" +#~ " v0.12.3``, puis ``git push --tags``" -#~ msgid "|d669336577b545a081d5d74169a9bc4d|" +#~ msgid "" +#~ "Build the release with ``./dev/build.sh``, " +#~ "then publish it with ``./dev/publish.sh``" #~ msgstr "" +#~ "Construisez la version avec " +#~ "``./dev/build.sh``, puis publiez-la avec " +#~ "``./dev/publish.sh``" -#~ msgid "|00b3d6cde1ff410ba54eff58da4e033a|" +#~ msgid "" +#~ "Create an entry in GitHub releases " +#~ "with the release notes for the " +#~ "previously tagged commit and attach the" +#~ " build artifacts (:code:`.whl` and " +#~ ":code:`.tar.gz`)." #~ msgstr "" +#~ "Crée une entrée dans GitHub releases " +#~ "avec les notes de version pour le" +#~ " commit précédemment étiqueté et attache" +#~ " les artefacts de construction " +#~ "(:code:`.whl` et :code:`.tar.gz`)." -#~ msgid "|29a11f5353084c1995c538f7edef71a5|" +#~ msgid "" +#~ "Second, create a virtual environment " +#~ "(and activate it). If you chose to" +#~ " use :code:`pyenv` (with the :code" +#~ ":`pyenv-virtualenv` plugin) and already " +#~ "have it installed , you can use" +#~ " the following convenience script (by " +#~ "default it will use :code:`Python " +#~ "3.8.17`, but you can change it by" +#~ " providing a specific :code:``)::" #~ msgstr "" +#~ "Deuxièmement, créer un environnement virtuel" +#~ " (et l'activer). 
Si vous choisissez " +#~ "d'utiliser :code:`pyenv` (avec le plugin " +#~ ":code:`pyenv-virtualenv`) et que vous " +#~ "l'avez déjà installé, vous pouvez " +#~ "utiliser le script suivant (par défaut" +#~ " il utilisera :code:`Python 3.8.17`, mais" +#~ " vous pouvez le changer en " +#~ "fournissant une :code:`` spécifique)::" + +#~ msgid "server.strategy.FedAvg" +#~ msgstr "serveur.stratégie.FedAvg" + +#~ msgid "server.strategy.FedAvgM" +#~ msgstr "stratégie.serveur.FedAvgM" + +#~ msgid "server.strategy.QFedAvg" +#~ msgstr "server.strategy.QFedAvg" + +#~ msgid "server.strategy.FedOpt" +#~ msgstr "serveur.stratégie.FedOpt" -#~ msgid "|d62eda312fd44726bb5db2b761fe7e0d|" -#~ msgstr "" +#~ msgid "server.strategy.FedProx" +#~ msgstr "serveur.stratégie.FedProx" -#~ msgid "Using Baselines" -#~ msgstr "" +#~ msgid "server.strategy.FedAdagrad" +#~ msgstr "serveur.stratégie.FedAdagrad" -#~ msgid "Structure" -#~ msgstr "" +#~ msgid "server.strategy.FedAdam" +#~ msgstr "serveur.stratégie.FedAdam" -#~ msgid "" -#~ "All baselines are available in the " -#~ "directory `baselines " -#~ "`_. This " -#~ "directory has two different files:" -#~ msgstr "" +#~ msgid "server.strategy.FedYogi" +#~ msgstr "serveur.stratégie.FedYogi" #~ msgid "" -#~ "Both files contain all the information" -#~ " about required Python packages (libraries," -#~ " frameworks, ...) and their versions. " -#~ "You can install each library separately" -#~ " by using :code: `pip install` or " -#~ "you can use Poetry and run " -#~ "code:`poetry install` in the directory " -#~ "where you find the :code:`pyproject.toml` " -#~ "file. After installing all requirements, " -#~ "you can start to run your " -#~ "baseline." +#~ "`achiverram28`, `Adam Narozniak`, `Anass " +#~ "Anhari`, `Charles Beauville`, `Dana-Farber`," +#~ " `Daniel J. Beutel`, `Daniel Nata " +#~ "Nugraha`, `Edoardo Gabrielli`, `eunchung`, " +#~ "`Gustavo Bertoli`, `Heng Pan`, `Javier`, " +#~ "`Mahdi`, `Ruth Galindo`, `Steven Hé " +#~ "(Sīchàng)`, `Taner Topal`" #~ msgstr "" #~ msgid "" -#~ "Go to the baseline that you want" -#~ " to execute. The directories and " -#~ "files are structured so that you " -#~ "can first find the paper with " -#~ "their unique identifier such that, for" -#~ " example, :code:`FedProx` refers to the " -#~ "paper \"Federated Optimization in " -#~ "Heterogeneous Networks\". The :code:`fedprox` " -#~ "section contains all available experiments " -#~ "from that paper." +#~ "Let's now load the CIFAR-10 training " +#~ "and test set, partition them into " +#~ "ten smaller datasets (each split into" +#~ " training and validation set), and " +#~ "wrap the resulting partitions by " +#~ "creating a PyTorch ``DataLoader`` for " +#~ "each of them:" #~ msgstr "" +#~ "Chargeons maintenant l'ensemble de formation" +#~ " et de test CIFAR-10, partitionnons-" +#~ "les en dix ensembles de données " +#~ "plus petits (chacun divisé en ensemble" +#~ " de formation et de validation), et" +#~ " enveloppons les partitions résultantes en" +#~ " créant un PyTorch ``DataLoader`` pour " +#~ "chacun d'entre eux :" #~ msgid "" -#~ "The experiment area contains a " -#~ ":code:`README.md` covering the corresponding " -#~ "paper, its abstract, and goal as " -#~ "well as a detailed description of " -#~ "how to run the baseline. Please " -#~ "use the :code:`README.md` to see how " -#~ "to execute each individual baseline." -#~ msgstr "" - -#~ msgid "Available Baselines" +#~ "Let's build a horizontal federated " +#~ "learning system using XGBoost and " +#~ "Flower!" 
#~ msgstr "" +#~ "Construisons un système d'apprentissage fédéré" +#~ " horizontal en utilisant XGBoost et " +#~ "Flower !" #~ msgid "" -#~ "The following table lists all currently" -#~ " available baselines and the corresponding" -#~ " papers. If you want to add a" -#~ " new baseline or experiment, please " -#~ "check the `Contributing Baselines " -#~ "`_ section." +#~ "Please refer to the `full code " +#~ "example `_ to learn " +#~ "more." #~ msgstr "" +#~ "Réfère-toi à l'exemple de code " +#~ "complet `_ pour en " +#~ "savoir plus." -#~ msgid "Paper" +#~ msgid "|3ff4c820a01d4a5abb022617de537c54|" #~ msgstr "" -#~ msgid "Experiment" +#~ msgid "|7f1889391ad448e2a65920165f0d798c|" #~ msgstr "" -#~ msgid "Directory" +#~ msgid "|a171dc4a0d044e70b5d585cc10ace0e0|" #~ msgstr "" -#~ msgid "`FedAvg `_" +#~ msgid "|fe518aa0d86341f7b2fc87bd6e3bbf0c|" #~ msgstr "" -#~ msgid "MNIST" +#~ msgid "|6abfdf0dade44469ae9f08c8dc7d148c|" #~ msgstr "" -#~ msgid ":code:`flower/baselines/flwr_baselines/publications/fedavg_mnist/`" +#~ msgid "|b4f147db24bb4da9a786e1d6676a1c2d|" #~ msgstr "" -#~ msgid "`FedProx `_" +#~ msgid "|5c62032f589a457bb37b5fee5b2adbde|" #~ msgstr "" -#~ msgid ":code:`flower/baselines/flwr_baselines/publications/fedprox_mnist/`" +#~ msgid "|f154df1846dd44f79a94f1dc3ae8b088|" #~ msgstr "" -#~ msgid "`FedOpt `_" +#~ msgid "|9d20be8160f7451fb0f33b194506503f|" #~ msgstr "" -#~ msgid "sparse gradient task" +#~ msgid "|3d949f76988443c59990d2e64f05c386|" #~ msgstr "" -#~ msgid ":code:`flower/baselines/flwr_baselines/publications/adaptive_federated_optimization`" +#~ msgid "|526c6d9140f6404f8a226d9056327b3b|" #~ msgstr "" -#~ msgid "`FedBN `_" +#~ msgid "|a5f6af14cd7c4550929b17f83b4f63c7|" #~ msgstr "" -#~ msgid "convergence rate" +#~ msgid "|bcd571c4f4ee4803a54f71b5c20448cb|" #~ msgstr "" -#~ msgid ":code:`flower/baselines/flwr_baselines/publications/fedbn/convergence_rate`" +#~ msgid "|c76452ae1ed84965be7ef23c72b95845|" #~ msgstr "" #~ msgid "" -#~ "Flower requires `Python 3.7 " -#~ "`_ or above, we " -#~ "recommend `Python 3.8 " -#~ "`_." -#~ msgstr "" -#~ "Flower nécessite `Python 3.7 " -#~ "`_ ou plus, nous " -#~ "recommandons `Python 3.8 " -#~ "`_." - -#~ msgid "|6baade94cd14454e82ead34fcc29a182|" -#~ msgstr "" - -#~ msgid "|1209ecd819104c458d396cf665c7ed4f|" -#~ msgstr "" - -#~ msgid "|c088b02349304344a53f3ce1464225fb|" -#~ msgstr "" - -#~ msgid "|b54d50afc82a4a57a55997a9eaeb735b|" -#~ msgstr "" - -#~ msgid "|d17b57e97b714a25b43790d4b832fd87|" -#~ msgstr "" - -#~ msgid "|38966d05301a4854aa73c8c5033bfaab|" +#~ "Please follow the first section on " +#~ "`Run Flower using Docker " +#~ "`_ which covers this" +#~ " step in more detail." #~ msgstr "" -#~ msgid "|231d55f7926d4a5db02dcd724ec62529|" +#~ msgid "" +#~ "Since `Flower 1.5 `_ we have " +#~ "introduced translations to our doc " +#~ "pages, but, as you might have " +#~ "noticed, the translations are often " +#~ "imperfect. If you speak languages other" +#~ " than English, you might be able " +#~ "to help us in our effort to " +#~ "make Federated Learning accessible to as" +#~ " many people as possible by " +#~ "contributing to those translations! This " +#~ "might also be a great opportunity " +#~ "for those wanting to become open " +#~ "source contributors with little prerequistes." 
#~ msgstr "" -#~ msgid "|fb44f2e13a1b4b69b7a72234eedd13f4|" +#~ msgid "" +#~ "You input your translation in the " +#~ "textbox at the top and then, once" +#~ " you are happy with it, you " +#~ "either press ``Save and continue`` (to" +#~ " save the translation and go to " +#~ "the next untranslated string), ``Save " +#~ "and stay`` (to save the translation " +#~ "and stay on the same page), " +#~ "``Suggest`` (to add your translation to" +#~ " suggestions for other users to " +#~ "view), or ``Skip`` (to go to the" +#~ " next untranslated string without saving" +#~ " anything)." #~ msgstr "" -#~ msgid "|1cfc77af5d164030942e84d14268c256|" +#~ msgid "" +#~ "If the section is completely empty " +#~ "(without any token) or non-existant, " +#~ "the changelog will just contain the " +#~ "title of the PR for the changelog" +#~ " entry, without any description." #~ msgstr "" -#~ msgid "|0d50828231a64bc08223544a2d2fa216|" -#~ msgstr "" +#~ msgid "Example: Walk-Through PyTorch & MNIST" +#~ msgstr "Exemple : PyTorch et MNIST" -#~ msgid "|904387757ceb42fbaa1875f3e8061113|" +#~ msgid "" +#~ "In this tutorial we will learn, " +#~ "how to train a Convolutional Neural " +#~ "Network on MNIST using Flower and " +#~ "PyTorch." #~ msgstr "" +#~ "Dans ce tutoriel, nous allons apprendre," +#~ " comment former un réseau neuronal " +#~ "convolutif sur MNIST en utilisant Flower" +#~ " et PyTorch." -#~ msgid "|68608e1b7c4842458c528b431c715f5a|" +#~ msgid "" +#~ "Since we want to use PyTorch to" +#~ " solve a computer vision task, let's" +#~ " go ahead an install PyTorch and " +#~ "the **torchvision** library:" #~ msgstr "" +#~ "Puisque nous voulons utiliser PyTorch " +#~ "pour résoudre une tâche de vision " +#~ "par ordinateur, installons PyTorch et la" +#~ " bibliothèque **torchvision** :" -#~ msgid "|2adb106bda97480bb4b33eac472e321e|" -#~ msgstr "" +#~ msgid "Ready... Set... Train!" +#~ msgstr "Prêts... prêts... entraînez-vous !" -#~ msgid "|025f0a6f7a6145cba4bf8fa0e2495851|" +#~ msgid "" +#~ "Now that we have all our " +#~ "dependencies installed, let's run a " +#~ "simple distributed training with two " +#~ "clients and one server. Our training " +#~ "procedure and network architecture are " +#~ "based on PyTorch's `Basic MNIST Example" +#~ " `_. " +#~ "This will allow you see how easy" +#~ " it is to wrap your code with" +#~ " Flower and begin training in a " +#~ "federated way. We provide you with " +#~ "two helper scripts, namely *run-" +#~ "server.sh*, and *run-clients.sh*. Don't " +#~ "be afraid to look inside, they are" +#~ " simple enough =)." #~ msgstr "" - -#~ msgid "Before the release" -#~ msgstr "Avant la sortie" +#~ "Maintenant que nous avons installé " +#~ "toutes nos dépendances, lançons un " +#~ "simple entraînement distribué avec deux " +#~ "clients et un serveur. Notre procédure" +#~ " d'entraînement et l'architecture de notre" +#~ " réseau sont basées sur l'exemple " +#~ "MNIST de base de PyTorch " +#~ "`_. Cela" +#~ " te permettra de voir à quel " +#~ "point il est facile d'envelopper ton " +#~ "code avec Flower et de commencer " +#~ "l'entraînement de manière fédérée. Nous " +#~ "te fournissons deux scripts d'aide, à" +#~ " savoir *run-server.sh*, et *run-" +#~ "clients.sh*. N'aie pas peur de regarder" +#~ " à l'intérieur, ils sont assez " +#~ "simples =)." #~ msgid "" -#~ "Update the changelog (``changelog.md``) with" -#~ " all relevant changes that happened " -#~ "after the last release. 
If the " -#~ "last release was tagged ``v1.2.0``, you" -#~ " can use the following URL to " -#~ "see all commits that got merged " -#~ "into ``main`` since then:" -#~ msgstr "" -#~ "Mettez à jour le journal des " -#~ "modifications (``changelog.md``) avec tous les" -#~ " changements pertinents qui se sont " -#~ "produits après la dernière version. Si" -#~ " la dernière version a été étiquetée" -#~ " ``v1.2.0``, vous pouvez utiliser l'URL " -#~ "suivante pour voir tous les commits " -#~ "qui ont été fusionnés dans ``main`` " -#~ "depuis lors :" +#~ "Go ahead and launch on a terminal" +#~ " the *run-server.sh* script first as" +#~ " follows:" +#~ msgstr "Lance sur un terminal le script *run-server.sh* d'abord comme suit :" -#~ msgid "" -#~ "`GitHub: Compare v1.2.0...main " -#~ "`_" -#~ msgstr "" -#~ "`GitHub : Compare v1.2.0...main " -#~ "`_" +#~ msgid "Now that the server is up and running, go ahead and launch the clients." +#~ msgstr "Maintenant que le serveur est opérationnel, vas-y et lance les clients." #~ msgid "" -#~ "Thank the authors who contributed since" -#~ " the last release. This command helps" -#~ " extract them: ``git log --format='%aN' " -#~ "v1.1.0..HEAD | sort -u``. The command" -#~ " has the same order as ``git " -#~ "shortlog``." +#~ "Et voilà! You should be seeing the" +#~ " training procedure and, after a few" +#~ " iterations, the test accuracy for " +#~ "each client." #~ msgstr "" -#~ "Remerciez les auteurs qui ont contribué" -#~ " depuis la dernière version. Cette " -#~ "commande permet de les extraire : " -#~ "``git log --format='%aN' v1.1.0..HEAD | " -#~ "sort -u``. La commande a le même" -#~ " ordre que ``git shortlog``." +#~ "Et voilà ! Tu devrais voir la " +#~ "procédure d'entraînement et, après quelques" +#~ " itérations, la précision du test " +#~ "pour chaque client." -#~ msgid "" -#~ "Update the ``changelog.md`` section header " -#~ "``Unreleased`` to contain the version " -#~ "number and date for the release " -#~ "you are building. Create a pull " -#~ "request with the change." -#~ msgstr "" -#~ "Mettez à jour l'en-tête de section" -#~ " ``changelog.md`` ``Unreleased`` pour qu'il " -#~ "contienne le numéro de version et " -#~ "la date de la version que vous " -#~ "construisez. Créez une demande de " -#~ "traction avec le changement." +#~ msgid "Now, let's see what is really happening inside." +#~ msgstr "Maintenant, voyons ce qui se passe réellement à l'intérieur." #~ msgid "" -#~ "Tag the release commit with the " -#~ "version number as soon as the PR" -#~ " is merged: ``git tag v0.12.3``, then" -#~ " ``git push --tags``" +#~ "Inside the server helper script *run-" +#~ "server.sh* you will find the following" +#~ " code that basically runs the " +#~ ":code:`server.py`" #~ msgstr "" -#~ "Marquez le commit de la version " -#~ "avec le numéro de version dès que" -#~ " le PR est fusionné : ``git tag" -#~ " v0.12.3``, puis ``git push --tags``" +#~ "Dans le script d'aide au serveur " +#~ "*run-server.sh*, tu trouveras le code " +#~ "suivant qui exécute le fichier " +#~ ":code:`server.py`" #~ msgid "" -#~ "Build the release with ``./dev/build.sh``, " -#~ "then publish it with ``./dev/publish.sh``" +#~ "We can go a bit deeper and " +#~ "see that :code:`server.py` simply launches " +#~ "a server that will coordinate three " +#~ "rounds of training. 
Flower Servers are" +#~ " very customizable, but for simple " +#~ "workloads, we can start a server " +#~ "using the :ref:`start_server ` function and leave " +#~ "all the configuration possibilities at " +#~ "their default values, as seen below." #~ msgstr "" -#~ "Construisez la version avec " -#~ "``./dev/build.sh``, puis publiez-la avec " -#~ "``./dev/publish.sh``" +#~ "Nous pouvons aller un peu plus " +#~ "loin et voir que :code:`server.py` lance" +#~ " simplement un serveur qui coordonnera " +#~ "trois tours de formation. Flower Les " +#~ "serveurs sont très personnalisables, mais " +#~ "pour les charges de travail simples, " +#~ "nous pouvons démarrer un serveur à " +#~ "l'aide de la fonction :ref:`start_server " +#~ "` et " +#~ "laisser toutes les possibilités de " +#~ "configuration à leurs valeurs par " +#~ "défaut, comme on peut le voir " +#~ "ci-dessous." #~ msgid "" -#~ "Create an entry in GitHub releases " -#~ "with the release notes for the " -#~ "previously tagged commit and attach the" -#~ " build artifacts (:code:`.whl` and " -#~ ":code:`.tar.gz`)." +#~ "Next, let's take a look at the " +#~ "*run-clients.sh* file. You will see " +#~ "that it contains the main loop " +#~ "that starts a set of *clients*." #~ msgstr "" -#~ "Crée une entrée dans GitHub releases " -#~ "avec les notes de version pour le" -#~ " commit précédemment étiqueté et attache" -#~ " les artefacts de construction " -#~ "(:code:`.whl` et :code:`.tar.gz`)." +#~ "Ensuite, jetons un coup d'œil au " +#~ "fichier *run-clients.sh*. Tu verras " +#~ "qu'il contient la boucle principale qui" +#~ " démarre un ensemble de *clients*." #~ msgid "" -#~ "Second, create a virtual environment " -#~ "(and activate it). If you chose to" -#~ " use :code:`pyenv` (with the :code" -#~ ":`pyenv-virtualenv` plugin) and already " -#~ "have it installed , you can use" -#~ " the following convenience script (by " -#~ "default it will use :code:`Python " -#~ "3.8.17`, but you can change it by" -#~ " providing a specific :code:``)::" +#~ "**cid**: is the client ID. It is" +#~ " an integer that uniquely identifies " +#~ "client identifier." #~ msgstr "" -#~ "Deuxièmement, créer un environnement virtuel" -#~ " (et l'activer). Si vous choisissez " -#~ "d'utiliser :code:`pyenv` (avec le plugin " -#~ ":code:`pyenv-virtualenv`) et que vous " -#~ "l'avez déjà installé, vous pouvez " -#~ "utiliser le script suivant (par défaut" -#~ " il utilisera :code:`Python 3.8.17`, mais" -#~ " vous pouvez le changer en " -#~ "fournissant une :code:`` spécifique)::" - -#~ msgid "server.strategy.FedAvg" -#~ msgstr "serveur.stratégie.FedAvg" - -#~ msgid "server.strategy.FedAvgM" -#~ msgstr "stratégie.serveur.FedAvgM" - -#~ msgid "server.strategy.QFedAvg" -#~ msgstr "server.strategy.QFedAvg" - -#~ msgid "server.strategy.FedOpt" -#~ msgstr "serveur.stratégie.FedOpt" - -#~ msgid "server.strategy.FedProx" -#~ msgstr "serveur.stratégie.FedProx" - -#~ msgid "server.strategy.FedAdagrad" -#~ msgstr "serveur.stratégie.FedAdagrad" +#~ "**cid** : c'est l'identifiant du client." +#~ " C'est un nombre entier qui identifie" +#~ " de façon unique l'identifiant du " +#~ "client." -#~ msgid "server.strategy.FedAdam" -#~ msgstr "serveur.stratégie.FedAdam" +#~ msgid "**sever_address**: String that identifies IP and port of the server." +#~ msgstr "**sever_address** : Chaîne qui identifie l'IP et le port du serveur." -#~ msgid "server.strategy.FedYogi" -#~ msgstr "serveur.stratégie.FedYogi" +#~ msgid "" +#~ "**nb_clients**: This defines the number " +#~ "of clients being created. 
This piece " +#~ "of information is not required by " +#~ "the client, but it helps us " +#~ "partition the original MNIST dataset to" +#~ " make sure that every client is " +#~ "working on unique subsets of both " +#~ "*training* and *test* sets." +#~ msgstr "" +#~ "**Cette information n'est pas requise " +#~ "par le client, mais elle nous aide" +#~ " à partitionner l'ensemble de données " +#~ "MNIST original pour nous assurer que " +#~ "chaque client travaille sur des sous-" +#~ "ensembles uniques des ensembles *formation*" +#~ " et *test*." #~ msgid "" -#~ "`achiverram28`, `Adam Narozniak`, `Anass " -#~ "Anhari`, `Charles Beauville`, `Dana-Farber`," -#~ " `Daniel J. Beutel`, `Daniel Nata " -#~ "Nugraha`, `Edoardo Gabrielli`, `eunchung`, " -#~ "`Gustavo Bertoli`, `Heng Pan`, `Javier`, " -#~ "`Mahdi`, `Ruth Galindo`, `Steven Hé " -#~ "(Sīchàng)`, `Taner Topal`" +#~ "Again, we can go deeper and look" +#~ " inside :code:`flwr_example/quickstart-" +#~ "pytorch/client.py`. After going through the" +#~ " argument parsing code at the " +#~ "beginning of our :code:`main` function, " +#~ "you will find a call to " +#~ ":code:`mnist.load_data`. This function is " +#~ "responsible for partitioning the original " +#~ "MNIST datasets (*training* and *test*) " +#~ "and returning a :code:`torch.utils.data.DataLoader`" +#~ " s for each of them. We then" +#~ " instantiate a :code:`PytorchMNISTClient` object" +#~ " with our client ID, our DataLoaders," +#~ " the number of epochs in each " +#~ "round, and which device we want to" +#~ " use for training (CPU or GPU)." #~ msgstr "" +#~ "Encore une fois, nous pouvons aller " +#~ "plus loin et regarder dans " +#~ ":code:`flwr_example/quickstart-pytorch/client.py`. Après" +#~ " avoir parcouru le code d'analyse des" +#~ " arguments au début de notre fonction" +#~ " :code:`main`, tu trouveras un appel " +#~ "à :code:`mnist.load_data`. Cette fonction est" +#~ " responsable du partitionnement des " +#~ "ensembles de données MNIST originaux " +#~ "(*training* et *test*) et renvoie un " +#~ ":code:`torch.utils.data.DataLoader` s pour chacun" +#~ " d'entre eux. Nous instancions ensuite " +#~ "un objet :code:`PytorchMNISTClient` avec notre" +#~ " ID client, nos DataLoaders, le " +#~ "nombre d'époques dans chaque tour et " +#~ "le périphérique que nous voulons " +#~ "utiliser pour l'entraînement (CPU ou " +#~ "GPU)." #~ msgid "" -#~ "Let's now load the CIFAR-10 training " -#~ "and test set, partition them into " -#~ "ten smaller datasets (each split into" -#~ " training and validation set), and " -#~ "wrap the resulting partitions by " -#~ "creating a PyTorch ``DataLoader`` for " -#~ "each of them:" +#~ "The :code:`PytorchMNISTClient` object when " +#~ "finally passed to :code:`fl.client.start_client` " +#~ "along with the server's address as " +#~ "the training process begins." #~ msgstr "" -#~ "Chargeons maintenant l'ensemble de formation" -#~ " et de test CIFAR-10, partitionnons-" -#~ "les en dix ensembles de données " -#~ "plus petits (chacun divisé en ensemble" -#~ " de formation et de validation), et" -#~ " enveloppons les partitions résultantes en" -#~ " créant un PyTorch ``DataLoader`` pour " -#~ "chacun d'entre eux :" +#~ "L'objet :code:`PytorchMNISTClient` est finalement" +#~ " transmis à :code:`fl.client.start_client` avec" +#~ " l'adresse du serveur lorsque le " +#~ "processus de formation commence." 
+ +#~ msgid "A Closer Look" +#~ msgstr "Regarder de plus près" #~ msgid "" -#~ "Let's build a horizontal federated " -#~ "learning system using XGBoost and " -#~ "Flower!" +#~ "Now, let's look closely into the " +#~ ":code:`PytorchMNISTClient` inside :code:`flwr_example" +#~ ".quickstart-pytorch.mnist` and see what it" +#~ " is doing:" #~ msgstr "" -#~ "Construisons un système d'apprentissage fédéré" -#~ " horizontal en utilisant XGBoost et " -#~ "Flower !" +#~ "Maintenant, examinons de près le " +#~ ":code:`PytorchMNISTClient` à l'intérieur du " +#~ ":code:`flwr_example.quickstart-pytorch.mnist` et " +#~ "voyons ce qu'il fait :" #~ msgid "" -#~ "Please refer to the `full code " -#~ "example `_ to learn " -#~ "more." +#~ "The first thing to notice is that" +#~ " :code:`PytorchMNISTClient` instantiates a CNN" +#~ " model inside its constructor" #~ msgstr "" -#~ "Réfère-toi à l'exemple de code " -#~ "complet `_ pour en " -#~ "savoir plus." +#~ "La première chose à remarquer est " +#~ "que :code:`PytorchMNISTClient` instancie un " +#~ "modèle CNN dans son constructeur" -#~ msgid "|3ff4c820a01d4a5abb022617de537c54|" +#~ msgid "" +#~ "The code for the CNN is available" +#~ " under :code:`quickstart-pytorch.mnist` and " +#~ "it is reproduced below. It is the" +#~ " same network found in `Basic MNIST" +#~ " Example " +#~ "`_." #~ msgstr "" +#~ "Le code du CNN est disponible sous" +#~ " :code:`quickstart-pytorch.mnist` et il est" +#~ " reproduit ci-dessous. Il s'agit du" +#~ " même réseau que celui que l'on " +#~ "trouve dans `Exemple basique de MNIST" +#~ " `_." -#~ msgid "|7f1889391ad448e2a65920165f0d798c|" +#~ msgid "" +#~ "The second thing to notice is that" +#~ " :code:`PytorchMNISTClient` class inherits from" +#~ " the :code:`fl.client.Client`, and hence it" +#~ " must implement the following methods:" #~ msgstr "" +#~ "La deuxième chose à noter est que" +#~ " la classe :code:`PytorchMNISTClient` hérite " +#~ "de :code:`fl.client.Client`, et qu'elle doit" +#~ " donc implémenter les méthodes suivantes" +#~ " :" -#~ msgid "|a171dc4a0d044e70b5d585cc10ace0e0|" +#~ msgid "" +#~ "When comparing the abstract class to " +#~ "its derived class :code:`PytorchMNISTClient` " +#~ "you will notice that :code:`fit` calls" +#~ " a :code:`train` function and that " +#~ ":code:`evaluate` calls a :code:`test`: " +#~ "function." #~ msgstr "" +#~ "En comparant la classe abstraite à " +#~ "sa classe dérivée :code:`PytorchMNISTClient`, " +#~ "tu remarqueras que :code:`fit` appelle " +#~ "une fonction :code:`train` et que " +#~ ":code:`evaluate` appelle une fonction " +#~ ":code:`test` :." -#~ msgid "|fe518aa0d86341f7b2fc87bd6e3bbf0c|" +#~ msgid "" +#~ "These functions can both be found " +#~ "inside the same :code:`quickstart-" +#~ "pytorch.mnist` module:" #~ msgstr "" +#~ "Ces fonctions se trouvent toutes deux" +#~ " dans le même module :code:`quickstart-" +#~ "pytorch.mnist` :" -#~ msgid "|6abfdf0dade44469ae9f08c8dc7d148c|" +#~ msgid "" +#~ "Observe that these functions encapsulate " +#~ "regular training and test loops and " +#~ "provide :code:`fit` and :code:`evaluate` with" +#~ " final statistics for each round. You" +#~ " could substitute them with your " +#~ "custom train and test loops and " +#~ "change the network architecture, and the" +#~ " entire example would still work " +#~ "flawlessly. As a matter of fact, " +#~ "why not try and modify the code" +#~ " to an example of your liking?" 
#~ msgstr "" +#~ "Observe que ces fonctions encapsulent " +#~ "les boucles d'entraînement et de test" +#~ " habituelles et fournissent à :code:`fit`" +#~ " et :code:`evaluate` les statistiques " +#~ "finales pour chaque tour. Tu pourrais" +#~ " les remplacer par tes boucles " +#~ "d'entraînement et de test personnalisées " +#~ "et changer l'architecture du réseau, et" +#~ " l'ensemble de l'exemple fonctionnerait " +#~ "toujours parfaitement. En fait, pourquoi " +#~ "ne pas essayer de modifier le code" +#~ " pour en faire un exemple qui " +#~ "te plairait ?" -#~ msgid "|b4f147db24bb4da9a786e1d6676a1c2d|" -#~ msgstr "" +#~ msgid "Give It a Try" +#~ msgstr "Fais un essai" -#~ msgid "|5c62032f589a457bb37b5fee5b2adbde|" +#~ msgid "" +#~ "Looking through the quickstart code " +#~ "description above will have given a " +#~ "good understanding of how *clients* and" +#~ " *servers* work in Flower, how to " +#~ "run a simple experiment, and the " +#~ "internals of a client wrapper. Here " +#~ "are a few things you could try " +#~ "on your own and get more " +#~ "experience with Flower:" #~ msgstr "" +#~ "En parcourant la description du code " +#~ "de démarrage rapide ci-dessus, tu " +#~ "auras acquis une bonne compréhension du" +#~ " fonctionnement des *clients* et des " +#~ "*serveurs* dans Flower, de l'exécution " +#~ "d'une expérience simple et de la " +#~ "structure interne d'un wrapper client. " +#~ "Voici quelques exemples que tu peux " +#~ "essayer par toi-même pour acquérir " +#~ "plus d'expérience avec Flower :" -#~ msgid "|f154df1846dd44f79a94f1dc3ae8b088|" +#~ msgid "" +#~ "Try and change :code:`PytorchMNISTClient` so" +#~ " it can accept different architectures." #~ msgstr "" +#~ "Essaie de modifier :code:`PytorchMNISTClient` " +#~ "pour qu'il puisse accepter différentes " +#~ "architectures." -#~ msgid "|9d20be8160f7451fb0f33b194506503f|" +#~ msgid "" +#~ "Modify the :code:`train` function so " +#~ "that it accepts different optimizers" #~ msgstr "" +#~ "Modifie la fonction :code:`train` pour " +#~ "qu'elle accepte différents optimiseurs" -#~ msgid "|3d949f76988443c59990d2e64f05c386|" +#~ msgid "" +#~ "Modify the :code:`test` function so that" +#~ " it proves not only the top-1 " +#~ "(regular accuracy) but also the top-5" +#~ " accuracy?" #~ msgstr "" +#~ "Modifie la fonction :code:`test` pour " +#~ "qu'elle prouve non seulement le top-1" +#~ " (précision normale) mais aussi le " +#~ "top-5 ?" -#~ msgid "|526c6d9140f6404f8a226d9056327b3b|" +#~ msgid "" +#~ "Go larger! Try to adapt the code" +#~ " to larger images and datasets. Why" +#~ " not try training on ImageNet with" +#~ " a ResNet-50?" #~ msgstr "" +#~ "Essaie d'adapter le code à des " +#~ "images et à des ensembles de " +#~ "données plus grands. Pourquoi ne pas " +#~ "essayer de s'entraîner sur ImageNet avec" +#~ " un ResNet-50 ?" -#~ msgid "|a5f6af14cd7c4550929b17f83b4f63c7|" -#~ msgstr "" +#~ msgid "You are ready now. Enjoy learning in a federated way!" +#~ msgstr "Tu es prêt maintenant. Profite de l'apprentissage de manière fédérée !" -#~ msgid "|bcd571c4f4ee4803a54f71b5c20448cb|" +#~ msgid "" +#~ "Flower provides differential privacy (DP) " +#~ "wrapper classes for the easy integration" +#~ " of the central DP guarantees " +#~ "provided by DP-FedAvg into training " +#~ "pipelines defined in any of the " +#~ "various ML frameworks that Flower is " +#~ "compatible with." 
#~ msgstr "" +#~ "Flower fournit des classes d'enveloppe " +#~ "de confidentialité différentielle (DP) pour" +#~ " l'intégration facile des garanties " +#~ "centrales de DP fournies par DP-" +#~ "FedAvg dans les pipelines de formation" +#~ " définis dans n'importe lequel des " +#~ "divers cadres de ML avec lesquels " +#~ "Flower est compatible." -#~ msgid "|c76452ae1ed84965be7ef23c72b95845|" +#~ msgid "" +#~ "Please note that these components are" +#~ " still experimental; the correct " +#~ "configuration of DP for a specific " +#~ "task is still an unsolved problem." #~ msgstr "" +#~ "Note que ces composants sont encore " +#~ "expérimentaux, la configuration correcte du" +#~ " DP pour une tâche spécifique est " +#~ "encore un problème non résolu." #~ msgid "" -#~ "Please follow the first section on " -#~ "`Run Flower using Docker " -#~ "`_ which covers this" -#~ " step in more detail." +#~ "The name DP-FedAvg is misleading " +#~ "since it can be applied on top " +#~ "of any FL algorithm that conforms " +#~ "to the general structure prescribed by" +#~ " the FedOpt family of algorithms." #~ msgstr "" +#~ "Le nom DP-FedAvg est trompeur car" +#~ " il peut être appliqué à n'importe" +#~ " quel algorithme FL qui se conforme" +#~ " à la structure générale prescrite " +#~ "par la famille d'algorithmes FedOpt." + +#~ msgid "DP-FedAvg" +#~ msgstr "DP-FedAvg" #~ msgid "" -#~ "Since `Flower 1.5 `_ we have " -#~ "introduced translations to our doc " -#~ "pages, but, as you might have " -#~ "noticed, the translations are often " -#~ "imperfect. If you speak languages other" -#~ " than English, you might be able " -#~ "to help us in our effort to " -#~ "make Federated Learning accessible to as" -#~ " many people as possible by " -#~ "contributing to those translations! This " -#~ "might also be a great opportunity " -#~ "for those wanting to become open " -#~ "source contributors with little prerequistes." +#~ "DP-FedAvg, originally proposed by " +#~ "McMahan et al. [mcmahan]_ and extended" +#~ " by Andrew et al. [andrew]_, is " +#~ "essentially FedAvg with the following " +#~ "modifications." #~ msgstr "" +#~ "DP-FedAvg, proposé à l'origine par " +#~ "McMahan et al. [mcmahan]_ et étendu " +#~ "par Andrew et al. [andrew]_, est " +#~ "essentiellement FedAvg avec les modifications" +#~ " suivantes." #~ msgid "" -#~ "You input your translation in the " -#~ "textbox at the top and then, once" -#~ " you are happy with it, you " -#~ "either press ``Save and continue`` (to" -#~ " save the translation and go to " -#~ "the next untranslated string), ``Save " -#~ "and stay`` (to save the translation " -#~ "and stay on the same page), " -#~ "``Suggest`` (to add your translation to" -#~ " suggestions for other users to " -#~ "view), or ``Skip`` (to go to the" -#~ " next untranslated string without saving" -#~ " anything)." +#~ "**Clipping** : The influence of each " +#~ "client's update is bounded by clipping" +#~ " it. This is achieved by enforcing" +#~ " a cap on the L2 norm of " +#~ "the update, scaling it down if " +#~ "needed." #~ msgstr "" +#~ "**Clipping** : L'influence de la mise" +#~ " à jour de chaque client est " +#~ "limitée en l'écrêtant. Ceci est réalisé" +#~ " en imposant un plafond à la " +#~ "norme L2 de la mise à jour, " +#~ "en la réduisant si nécessaire." #~ msgid "" -#~ "If the section is completely empty " -#~ "(without any token) or non-existant, " -#~ "the changelog will just contain the " -#~ "title of the PR for the changelog" -#~ " entry, without any description." 
+#~ "**Noising** : Gaussian noise, calibrated " +#~ "to the clipping threshold, is added " +#~ "to the average computed at the " +#~ "server." #~ msgstr "" - -#~ msgid "Example: Walk-Through PyTorch & MNIST" -#~ msgstr "Exemple : PyTorch et MNIST" +#~ "**Bruit** : un bruit gaussien, calibré" +#~ " sur le seuil d'écrêtage, est ajouté" +#~ " à la moyenne calculée au niveau " +#~ "du serveur." #~ msgid "" -#~ "In this tutorial we will learn, " -#~ "how to train a Convolutional Neural " -#~ "Network on MNIST using Flower and " -#~ "PyTorch." +#~ "The distribution of the update norm " +#~ "has been shown to vary from " +#~ "task-to-task and to evolve as " +#~ "training progresses. This variability is " +#~ "crucial in understanding its impact on" +#~ " differential privacy guarantees, emphasizing " +#~ "the need for an adaptive approach " +#~ "[andrew]_ that continuously adjusts the " +#~ "clipping threshold to track a " +#~ "prespecified quantile of the update norm" +#~ " distribution." #~ msgstr "" -#~ "Dans ce tutoriel, nous allons apprendre," -#~ " comment former un réseau neuronal " -#~ "convolutif sur MNIST en utilisant Flower" -#~ " et PyTorch." +#~ "Il a été démontré que la " +#~ "distribution de la norme de mise à" +#~ " jour varie d'une tâche à l'autre " +#~ "et évolue au fur et à mesure " +#~ "de la formation. C'est pourquoi nous " +#~ "utilisons une approche adaptative [andrew]_" +#~ " qui ajuste continuellement le seuil " +#~ "d'écrêtage pour suivre un quantile " +#~ "prédéfini de la distribution de la " +#~ "norme de mise à jour." + +#~ msgid "Simplifying Assumptions" +#~ msgstr "Simplifier les hypothèses" #~ msgid "" -#~ "Since we want to use PyTorch to" -#~ " solve a computer vision task, let's" -#~ " go ahead an install PyTorch and " -#~ "the **torchvision** library:" +#~ "We make (and attempt to enforce) a" +#~ " number of assumptions that must be" +#~ " satisfied to ensure that the " +#~ "training process actually realizes the " +#~ ":math:`(\\epsilon, \\delta)` guarantees the " +#~ "user has in mind when configuring " +#~ "the setup." #~ msgstr "" -#~ "Puisque nous voulons utiliser PyTorch " -#~ "pour résoudre une tâche de vision " -#~ "par ordinateur, installons PyTorch et la" -#~ " bibliothèque **torchvision** :" +#~ "Nous formulons (et tentons d'appliquer) " +#~ "un certain nombre d'hypothèses qui " +#~ "doivent être satisfaites pour que le " +#~ "processus de formation réalise réellement " +#~ "les garanties :math:`(\\epsilon, \\delta)` que" +#~ " l'utilisateur a à l'esprit lorsqu'il " +#~ "configure l'installation." -#~ msgid "Ready... Set... Train!" -#~ msgstr "Prêts... prêts... entraînez-vous !" +#~ msgid "" +#~ "**Fixed-size subsampling** :Fixed-size " +#~ "subsamples of the clients must be " +#~ "taken at each round, as opposed to" +#~ " variable-sized Poisson subsamples." +#~ msgstr "" +#~ "**Sous-échantillonnage de taille fixe** " +#~ ":Des sous-échantillons de taille fixe" +#~ " des clients doivent être prélevés à" +#~ " chaque tour, par opposition aux " +#~ "sous-échantillons de Poisson de taille " +#~ "variable." #~ msgid "" -#~ "Now that we have all our " -#~ "dependencies installed, let's run a " -#~ "simple distributed training with two " -#~ "clients and one server. Our training " -#~ "procedure and network architecture are " -#~ "based on PyTorch's `Basic MNIST Example" -#~ " `_. " -#~ "This will allow you see how easy" -#~ " it is to wrap your code with" -#~ " Flower and begin training in a " -#~ "federated way. 
We provide you with " -#~ "two helper scripts, namely *run-" -#~ "server.sh*, and *run-clients.sh*. Don't " -#~ "be afraid to look inside, they are" -#~ " simple enough =)." +#~ "**Unweighted averaging** : The contributions" +#~ " from all the clients must weighted" +#~ " equally in the aggregate to " +#~ "eliminate the requirement for the server" +#~ " to know in advance the sum of" +#~ " the weights of all clients available" +#~ " for selection." #~ msgstr "" -#~ "Maintenant que nous avons installé " -#~ "toutes nos dépendances, lançons un " -#~ "simple entraînement distribué avec deux " -#~ "clients et un serveur. Notre procédure" -#~ " d'entraînement et l'architecture de notre" -#~ " réseau sont basées sur l'exemple " -#~ "MNIST de base de PyTorch " -#~ "`_. Cela" -#~ " te permettra de voir à quel " -#~ "point il est facile d'envelopper ton " -#~ "code avec Flower et de commencer " -#~ "l'entraînement de manière fédérée. Nous " -#~ "te fournissons deux scripts d'aide, à" -#~ " savoir *run-server.sh*, et *run-" -#~ "clients.sh*. N'aie pas peur de regarder" -#~ " à l'intérieur, ils sont assez " -#~ "simples =)." +#~ "**Moyenne non pondérée** : Les " +#~ "contributions de tous les clients " +#~ "doivent être pondérées de façon égale" +#~ " dans l'ensemble afin que le serveur" +#~ " n'ait pas à connaître à l'avance " +#~ "la somme des poids de tous les " +#~ "clients disponibles pour la sélection." #~ msgid "" -#~ "Go ahead and launch on a terminal" -#~ " the *run-server.sh* script first as" -#~ " follows:" -#~ msgstr "Lance sur un terminal le script *run-server.sh* d'abord comme suit :" +#~ "**No client failures** : The set " +#~ "of available clients must stay constant" +#~ " across all rounds of training. In" +#~ " other words, clients cannot drop out" +#~ " or fail." +#~ msgstr "" +#~ "**Aucune défaillance de client** : " +#~ "L'ensemble des clients disponibles doit " +#~ "rester constant pendant toutes les " +#~ "séries de formation. En d'autres termes," +#~ " les clients ne peuvent pas " +#~ "abandonner ou échouer." -#~ msgid "Now that the server is up and running, go ahead and launch the clients." -#~ msgstr "Maintenant que le serveur est opérationnel, vas-y et lance les clients." +#~ msgid "" +#~ "The first two are useful for " +#~ "eliminating a multitude of complications " +#~ "associated with calibrating the noise to" +#~ " the clipping threshold, while the " +#~ "third one is required to comply " +#~ "with the assumptions of the privacy " +#~ "analysis." +#~ msgstr "" +#~ "Les deux premiers sont utiles pour " +#~ "éliminer une multitude de complications " +#~ "liées au calibrage du bruit en " +#~ "fonction du seuil d'écrêtage, tandis que" +#~ " le troisième est nécessaire pour se" +#~ " conformer aux hypothèses de l'analyse " +#~ "de la vie privée." #~ msgid "" -#~ "Et voilà! You should be seeing the" -#~ " training procedure and, after a few" -#~ " iterations, the test accuracy for " -#~ "each client." +#~ "These restrictions are in line with " +#~ "constraints imposed by Andrew et al. " +#~ "[andrew]_." #~ msgstr "" -#~ "Et voilà ! Tu devrais voir la " -#~ "procédure d'entraînement et, après quelques" -#~ " itérations, la précision du test " -#~ "pour chaque client." +#~ "Ces restrictions sont conformes aux " +#~ "contraintes imposées par Andrew et al." +#~ " [andrew]_." -#~ msgid "Now, let's see what is really happening inside." -#~ msgstr "Maintenant, voyons ce qui se passe réellement à l'intérieur." 
+#~ msgid "Customizable Responsibility for Noise injection" +#~ msgstr "Responsabilité personnalisable pour l'injection de bruit" #~ msgid "" -#~ "Inside the server helper script *run-" -#~ "server.sh* you will find the following" -#~ " code that basically runs the " -#~ ":code:`server.py`" +#~ "In contrast to other implementations " +#~ "where the addition of noise is " +#~ "performed at the server, you can " +#~ "configure the site of noise injection" +#~ " to better match your threat model." +#~ " We provide users with the " +#~ "flexibility to set up the training " +#~ "such that each client independently adds" +#~ " a small amount of noise to the" +#~ " clipped update, with the result that" +#~ " simply aggregating the noisy updates " +#~ "is equivalent to the explicit addition" +#~ " of noise to the non-noisy " +#~ "aggregate at the server." #~ msgstr "" -#~ "Dans le script d'aide au serveur " -#~ "*run-server.sh*, tu trouveras le code " -#~ "suivant qui exécute le fichier " -#~ ":code:`server.py`" +#~ "Contrairement à d'autres implémentations où" +#~ " l'ajout de bruit est effectué au " +#~ "niveau du serveur, tu peux configurer" +#~ " le site d'injection de bruit pour" +#~ " qu'il corresponde mieux à ton modèle" +#~ " de menace. Nous offrons aux " +#~ "utilisateurs la possibilité de configurer " +#~ "l'entraînement de telle sorte que chaque" +#~ " client ajoute indépendamment une petite" +#~ " quantité de bruit à la mise à" +#~ " jour écrêtée, ce qui fait que " +#~ "le simple fait d'agréger les mises " +#~ "à jour bruyantes équivaut à l'ajout " +#~ "explicite de bruit à l'agrégat non " +#~ "bruyant au niveau du serveur." #~ msgid "" -#~ "We can go a bit deeper and " -#~ "see that :code:`server.py` simply launches " -#~ "a server that will coordinate three " -#~ "rounds of training. Flower Servers are" -#~ " very customizable, but for simple " -#~ "workloads, we can start a server " -#~ "using the :ref:`start_server ` function and leave " -#~ "all the configuration possibilities at " -#~ "their default values, as seen below." -#~ msgstr "" -#~ "Nous pouvons aller un peu plus " -#~ "loin et voir que :code:`server.py` lance" -#~ " simplement un serveur qui coordonnera " -#~ "trois tours de formation. Flower Les " -#~ "serveurs sont très personnalisables, mais " -#~ "pour les charges de travail simples, " -#~ "nous pouvons démarrer un serveur à " -#~ "l'aide de la fonction :ref:`start_server " -#~ "` et " -#~ "laisser toutes les possibilités de " -#~ "configuration à leurs valeurs par " -#~ "défaut, comme on peut le voir " -#~ "ci-dessous." +#~ "To be precise, if we let :math:`m`" +#~ " be the number of clients sampled " +#~ "each round and :math:`\\sigma_\\Delta` be " +#~ "the scale of the total Gaussian " +#~ "noise that needs to be added to" +#~ " the sum of the model updates, " +#~ "we can use simple maths to show" +#~ " that this is equivalent to each " +#~ "client adding noise with scale " +#~ ":math:`\\sigma_\\Delta/\\sqrt{m}`." +#~ msgstr "" +#~ "Pour être précis, si nous laissons " +#~ ":math:`m` être le nombre de clients " +#~ "échantillonnés à chaque tour et " +#~ ":math:\\sigma_\\Delta` être l'échelle du bruit" +#~ " gaussien total qui doit être ajouté" +#~ " à la somme des mises à jour" +#~ " du modèle, nous pouvons utiliser des" +#~ " mathématiques simples pour montrer que " +#~ "cela équivaut à ce que chaque " +#~ "client ajoute du bruit avec l'échelle" +#~ " :math:\\sigma_\\Delta/\\sqrt{m}`." 
+ +#~ msgid "Wrapper-based approach" +#~ msgstr "Approche basée sur l'enveloppe" #~ msgid "" -#~ "Next, let's take a look at the " -#~ "*run-clients.sh* file. You will see " -#~ "that it contains the main loop " -#~ "that starts a set of *clients*." +#~ "Introducing DP to an existing workload" +#~ " can be thought of as adding an" +#~ " extra layer of security around it." +#~ " This inspired us to provide the " +#~ "additional server and client-side logic" +#~ " needed to make the training process" +#~ " differentially private as wrappers for " +#~ "instances of the :code:`Strategy` and " +#~ ":code:`NumPyClient` abstract classes respectively." +#~ " This wrapper-based approach has the" +#~ " advantage of being easily composable " +#~ "with other wrappers that someone might" +#~ " contribute to the Flower library in" +#~ " the future, e.g., for secure " +#~ "aggregation. Using Inheritance instead can " +#~ "be tedious because that would require" +#~ " the creation of new sub- classes " +#~ "every time a new class implementing " +#~ ":code:`Strategy` or :code:`NumPyClient` is " +#~ "defined." #~ msgstr "" -#~ "Ensuite, jetons un coup d'œil au " -#~ "fichier *run-clients.sh*. Tu verras " -#~ "qu'il contient la boucle principale qui" -#~ " démarre un ensemble de *clients*." +#~ "L'introduction du DP dans une charge " +#~ "de travail existante peut être " +#~ "considérée comme l'ajout d'une couche de" +#~ " sécurité supplémentaire autour d'elle. " +#~ "Cela nous a incités à fournir la" +#~ " logique supplémentaire côté serveur et " +#~ "côté client nécessaire pour rendre le" +#~ " processus de formation différentiellement " +#~ "privé en tant qu'enveloppes pour les " +#~ "instances des classes abstraites " +#~ ":code:`Strategy` et :code:`NumPyClient` " +#~ "respectivement. Cette approche basée sur " +#~ "l'enveloppe a l'avantage d'être facilement " +#~ "composable avec d'autres enveloppes que " +#~ "quelqu'un pourrait contribuer à la " +#~ "bibliothèque Flower à l'avenir, par " +#~ "exemple, pour l'agrégation sécurisée. " +#~ "L'utilisation de l'héritage à la place" +#~ " peut être fastidieuse car cela " +#~ "nécessiterait la création de nouvelles " +#~ "sous-classes chaque fois qu'une nouvelle" +#~ " classe mettant en œuvre :code:`Strategy`" +#~ " ou :code:`NumPyClient` est définie." #~ msgid "" -#~ "**cid**: is the client ID. It is" -#~ " an integer that uniquely identifies " -#~ "client identifier." +#~ "The first version of our solution " +#~ "was to define a decorator whose " +#~ "constructor accepted, among other things, " +#~ "a boolean-valued variable indicating " +#~ "whether adaptive clipping was to be " +#~ "enabled or not. We quickly realized " +#~ "that this would clutter its " +#~ ":code:`__init__()` function with variables " +#~ "corresponding to hyperparameters of adaptive" +#~ " clipping that would remain unused " +#~ "when it was disabled. A cleaner " +#~ "implementation could be achieved by " +#~ "splitting the functionality into two " +#~ "decorators, :code:`DPFedAvgFixed` and " +#~ ":code:`DPFedAvgAdaptive`, with the latter sub-" +#~ " classing the former. The constructors " +#~ "for both classes accept a boolean " +#~ "parameter :code:`server_side_noising`, which, as " +#~ "the name suggests, determines where " +#~ "noising is to be performed." #~ msgstr "" -#~ "**cid** : c'est l'identifiant du client." -#~ " C'est un nombre entier qui identifie" -#~ " de façon unique l'identifiant du " -#~ "client." 
+#~ "La première version de notre solution" +#~ " consistait à définir un décorateur " +#~ "dont le constructeur acceptait, entre " +#~ "autres, une variable à valeur booléenne" +#~ " indiquant si l'écrêtage adaptatif devait" +#~ " être activé ou non. Nous nous " +#~ "sommes rapidement rendu compte que cela" +#~ " encombrerait sa fonction :code:`__init__()` " +#~ "avec des variables correspondant aux " +#~ "hyperparamètres de l'écrêtage adaptatif qui" +#~ " resteraient inutilisées lorsque celui-ci" +#~ " était désactivé. Une implémentation plus" +#~ " propre pourrait être obtenue en " +#~ "divisant la fonctionnalité en deux " +#~ "décorateurs, :code:`DPFedAvgFixed` et " +#~ ":code:`DPFedAvgAdaptive`, le second sous-" +#~ "classant le premier. Les constructeurs " +#~ "des deux classes acceptent un paramètre" +#~ " booléen :code:`server_side_noising` qui, comme" +#~ " son nom l'indique, détermine l'endroit " +#~ "où le noising doit être effectué." -#~ msgid "**sever_address**: String that identifies IP and port of the server." -#~ msgstr "**sever_address** : Chaîne qui identifie l'IP et le port du serveur." +#~ msgid "" +#~ "The server-side capabilities required " +#~ "for the original version of DP-" +#~ "FedAvg, i.e., the one which performed" +#~ " fixed clipping, can be completely " +#~ "captured with the help of wrapper " +#~ "logic for just the following two " +#~ "methods of the :code:`Strategy` abstract " +#~ "class." +#~ msgstr "" +#~ "Les capacités côté serveur requises pour" +#~ " la version originale de DP-FedAvg," +#~ " c'est-à-dire celle qui effectue un " +#~ "écrêtage fixe, peuvent être entièrement " +#~ "capturées à l'aide d'une logique " +#~ "d'enveloppement pour les deux méthodes " +#~ "suivantes de la classe abstraite " +#~ ":code:`Strategy`." #~ msgid "" -#~ "**nb_clients**: This defines the number " -#~ "of clients being created. This piece " -#~ "of information is not required by " -#~ "the client, but it helps us " -#~ "partition the original MNIST dataset to" -#~ " make sure that every client is " -#~ "working on unique subsets of both " -#~ "*training* and *test* sets." +#~ ":code:`configure_fit()` : The config " +#~ "dictionary being sent by the wrapped " +#~ ":code:`Strategy` to each client needs to" +#~ " be augmented with an additional " +#~ "value equal to the clipping threshold" +#~ " (keyed under :code:`dpfedavg_clip_norm`) and," +#~ " if :code:`server_side_noising=true`, another one" +#~ " equal to the scale of the " +#~ "Gaussian noise that needs to be " +#~ "added at the client (keyed under " +#~ ":code:`dpfedavg_noise_stddev`). This entails " +#~ "*post*-processing of the results returned " +#~ "by the wrappee's implementation of " +#~ ":code:`configure_fit()`." #~ msgstr "" -#~ "**Cette information n'est pas requise " -#~ "par le client, mais elle nous aide" -#~ " à partitionner l'ensemble de données " -#~ "MNIST original pour nous assurer que " -#~ "chaque client travaille sur des sous-" -#~ "ensembles uniques des ensembles *formation*" -#~ " et *test*." +#~ ":code:`configure_fit()` : Le dictionnaire de" +#~ " configuration envoyé par la " +#~ ":code:`Strategy` enveloppée à chaque client" +#~ " doit être augmenté d'une valeur " +#~ "supplémentaire égale au seuil d'écrêtage " +#~ "(indiqué sous :code:`dpfedavg_clip_norm`) et, " +#~ "si :code:`server_side_noising=true`, d'une autre " +#~ "égale à l'échelle du bruit gaussien " +#~ "qui doit être ajouté au client " +#~ "(indiqué sous :code:`dpfedavg_noise_stddev`)." 
#~ msgid "" -#~ "Again, we can go deeper and look" -#~ " inside :code:`flwr_example/quickstart-" -#~ "pytorch/client.py`. After going through the" -#~ " argument parsing code at the " -#~ "beginning of our :code:`main` function, " -#~ "you will find a call to " -#~ ":code:`mnist.load_data`. This function is " -#~ "responsible for partitioning the original " -#~ "MNIST datasets (*training* and *test*) " -#~ "and returning a :code:`torch.utils.data.DataLoader`" -#~ " s for each of them. We then" -#~ " instantiate a :code:`PytorchMNISTClient` object" -#~ " with our client ID, our DataLoaders," -#~ " the number of epochs in each " -#~ "round, and which device we want to" -#~ " use for training (CPU or GPU)." +#~ ":code:`aggregate_fit()`: We check whether any" +#~ " of the sampled clients dropped out" +#~ " or failed to upload an update " +#~ "before the round timed out. In " +#~ "that case, we need to abort the" +#~ " current round, discarding any successful" +#~ " updates that were received, and move" +#~ " on to the next one. On the " +#~ "other hand, if all clients responded " +#~ "successfully, we must force the " +#~ "averaging of the updates to happen " +#~ "in an unweighted manner by intercepting" +#~ " the :code:`parameters` field of " +#~ ":code:`FitRes` for each received update " +#~ "and setting it to 1. Furthermore, " +#~ "if :code:`server_side_noising=true`, each update " +#~ "is perturbed with an amount of " +#~ "noise equal to what it would have" +#~ " been subjected to had client-side" +#~ " noising being enabled. This entails " +#~ "*pre*-processing of the arguments to " +#~ "this method before passing them on " +#~ "to the wrappee's implementation of " +#~ ":code:`aggregate_fit()`." #~ msgstr "" -#~ "Encore une fois, nous pouvons aller " -#~ "plus loin et regarder dans " -#~ ":code:`flwr_example/quickstart-pytorch/client.py`. Après" -#~ " avoir parcouru le code d'analyse des" -#~ " arguments au début de notre fonction" -#~ " :code:`main`, tu trouveras un appel " -#~ "à :code:`mnist.load_data`. Cette fonction est" -#~ " responsable du partitionnement des " -#~ "ensembles de données MNIST originaux " -#~ "(*training* et *test*) et renvoie un " -#~ ":code:`torch.utils.data.DataLoader` s pour chacun" -#~ " d'entre eux. Nous instancions ensuite " -#~ "un objet :code:`PytorchMNISTClient` avec notre" -#~ " ID client, nos DataLoaders, le " -#~ "nombre d'époques dans chaque tour et " -#~ "le périphérique que nous voulons " -#~ "utiliser pour l'entraînement (CPU ou " -#~ "GPU)." +#~ ":code:`aggregate_fit()`: We check whether any" +#~ " of the sampled clients dropped out" +#~ " or failed to upload an update " +#~ "before the round timed out. In " +#~ "that case, we need to abort the" +#~ " current round, discarding any successful" +#~ " updates that were received, and move" +#~ " on to the next one. On the " +#~ "other hand, if all clients responded " +#~ "successfully, we must force the " +#~ "averaging of the updates to happen " +#~ "in an unweighted manner by intercepting" +#~ " the :code:`parameters` field of " +#~ ":code:`FitRes` for each received update " +#~ "and setting it to 1. Furthermore, " +#~ "if :code:`server_side_noising=true`, each update " +#~ "is perturbed with an amount of " +#~ "noise equal to what it would have" +#~ " been subjected to had client-side" +#~ " noising being enabled. This entails " +#~ "*pre*-processing of the arguments to " +#~ "this method before passing them on " +#~ "to the wrappee's implementation of " +#~ ":code:`aggregate_fit()`." 
#~ msgid "" -#~ "The :code:`PytorchMNISTClient` object when " -#~ "finally passed to :code:`fl.client.start_client` " -#~ "along with the server's address as " -#~ "the training process begins." +#~ "We can't directly change the aggregation" +#~ " function of the wrapped strategy to" +#~ " force it to add noise to the" +#~ " aggregate, hence we simulate client-" +#~ "side noising to implement server-side" +#~ " noising." #~ msgstr "" -#~ "L'objet :code:`PytorchMNISTClient` est finalement" -#~ " transmis à :code:`fl.client.start_client` avec" -#~ " l'adresse du serveur lorsque le " -#~ "processus de formation commence." +#~ "Nous ne pouvons pas modifier directement" +#~ " la fonction d'agrégation de la " +#~ "stratégie enveloppée pour la forcer à" +#~ " ajouter du bruit à l'agrégat, c'est" +#~ " pourquoi nous simulons le bruit côté" +#~ " client pour mettre en œuvre le " +#~ "bruit côté serveur." -#~ msgid "A Closer Look" -#~ msgstr "Regarder de plus près" +#~ msgid "" +#~ "These changes have been put together " +#~ "into a class called :code:`DPFedAvgFixed`, " +#~ "whose constructor accepts the strategy " +#~ "being decorated, the clipping threshold " +#~ "and the number of clients sampled " +#~ "every round as compulsory arguments. The" +#~ " user is expected to specify the " +#~ "clipping threshold since the order of" +#~ " magnitude of the update norms is " +#~ "highly dependent on the model being " +#~ "trained and providing a default value" +#~ " would be misleading. The number of" +#~ " clients sampled at every round is" +#~ " required to calculate the amount of" +#~ " noise that must be added to " +#~ "each individual update, either by the" +#~ " server or the clients." +#~ msgstr "" +#~ "Ces modifications ont été regroupées " +#~ "dans une classe appelée :code:`DPFedAvgFixed`," +#~ " dont le constructeur accepte la " +#~ "stratégie décorée, le seuil d'écrêtage " +#~ "et le nombre de clients échantillonnés" +#~ " à chaque tour comme arguments " +#~ "obligatoires. L'utilisateur est censé " +#~ "spécifier le seuil d'écrêtage car " +#~ "l'ordre de grandeur des normes de " +#~ "mise à jour dépend fortement du " +#~ "modèle formé et fournir une valeur " +#~ "par défaut serait trompeur. Le nombre" +#~ " de clients échantillonnés à chaque " +#~ "tour est nécessaire pour calculer la " +#~ "quantité de bruit qui doit être " +#~ "ajoutée à chaque mise à jour " +#~ "individuelle, que ce soit par le " +#~ "serveur ou par les clients." #~ msgid "" -#~ "Now, let's look closely into the " -#~ ":code:`PytorchMNISTClient` inside :code:`flwr_example" -#~ ".quickstart-pytorch.mnist` and see what it" -#~ " is doing:" +#~ "The additional functionality required to " +#~ "facilitate adaptive clipping has been " +#~ "provided in :code:`DPFedAvgAdaptive`, a " +#~ "subclass of :code:`DPFedAvgFixed`. It " +#~ "overrides the above-mentioned methods to" +#~ " do the following." #~ msgstr "" -#~ "Maintenant, examinons de près le " -#~ ":code:`PytorchMNISTClient` à l'intérieur du " -#~ ":code:`flwr_example.quickstart-pytorch.mnist` et " -#~ "voyons ce qu'il fait :" +#~ "La fonctionnalité supplémentaire nécessaire " +#~ "pour faciliter l'écrêtage adaptatif a " +#~ "été fournie dans :code:`DPFedAvgAdaptive`, une" +#~ " sous-classe de :code:`DPFedAvgFixed`. Elle" +#~ " remplace les méthodes mentionnées ci-" +#~ "dessus pour effectuer les opérations " +#~ "suivantes." 
#~ msgid "" -#~ "The first thing to notice is that" -#~ " :code:`PytorchMNISTClient` instantiates a CNN" -#~ " model inside its constructor" +#~ ":code:`configure_fit()` : It intercepts the" +#~ " config dict returned by " +#~ ":code:`super.configure_fit()` to add the " +#~ "key-value pair " +#~ ":code:`dpfedavg_adaptive_clip_enabled:True` to it, " +#~ "which the client interprets as an " +#~ "instruction to include an indicator bit" +#~ " (1 if update norm <= clipping " +#~ "threshold, 0 otherwise) in the results" +#~ " returned by it." #~ msgstr "" -#~ "La première chose à remarquer est " -#~ "que :code:`PytorchMNISTClient` instancie un " -#~ "modèle CNN dans son constructeur" +#~ ":code:`configure_fit()` : Il intercepte le " +#~ "dict de configuration renvoyé par " +#~ ":code:`super.configure_fit()` pour y ajouter " +#~ "la paire clé-valeur " +#~ ":code:`dpfedavg_adaptive_clip_enabled:True`, que le " +#~ "client interprète comme une instruction " +#~ "d'inclure un bit indicateur (1 si " +#~ "la norme de mise à jour <= " +#~ "seuil d'écrêtage, 0 sinon) dans les " +#~ "résultats qu'il renvoie." #~ msgid "" -#~ "The code for the CNN is available" -#~ " under :code:`quickstart-pytorch.mnist` and " -#~ "it is reproduced below. It is the" -#~ " same network found in `Basic MNIST" -#~ " Example " -#~ "`_." +#~ ":code:`aggregate_fit()` : It follows a " +#~ "call to :code:`super.aggregate_fit()` with one" +#~ " to :code:`__update_clip_norm__()`, a procedure" +#~ " which adjusts the clipping threshold " +#~ "on the basis of the indicator bits" +#~ " received from the sampled clients." #~ msgstr "" -#~ "Le code du CNN est disponible sous" -#~ " :code:`quickstart-pytorch.mnist` et il est" -#~ " reproduit ci-dessous. Il s'agit du" -#~ " même réseau que celui que l'on " -#~ "trouve dans `Exemple basique de MNIST" -#~ " `_." +#~ ":code:`aggregate_fit()` : Il fait suivre " +#~ "un appel à :code:`super.aggregate_fit()` d'un" +#~ " appel à :code:`__update_clip_norm__()`, une " +#~ "procédure qui ajuste le seuil d'écrêtage" +#~ " sur la base des bits indicateurs " +#~ "reçus des clients échantillonnés." #~ msgid "" -#~ "The second thing to notice is that" -#~ " :code:`PytorchMNISTClient` class inherits from" -#~ " the :code:`fl.client.Client`, and hence it" -#~ " must implement the following methods:" +#~ "The client-side capabilities required " +#~ "can be completely captured through " +#~ "wrapper logic for just the :code:`fit()`" +#~ " method of the :code:`NumPyClient` abstract" +#~ " class. To be precise, we need " +#~ "to *post-process* the update computed" +#~ " by the wrapped client to clip " +#~ "it, if necessary, to the threshold " +#~ "value supplied by the server as " +#~ "part of the config dictionary. In " +#~ "addition to this, it may need to" +#~ " perform some extra work if either" +#~ " (or both) of the following keys " +#~ "are also present in the dict." #~ msgstr "" -#~ "La deuxième chose à noter est que" -#~ " la classe :code:`PytorchMNISTClient` hérite " -#~ "de :code:`fl.client.Client`, et qu'elle doit" -#~ " donc implémenter les méthodes suivantes" -#~ " :" +#~ "Les capacités requises côté client " +#~ "peuvent être entièrement capturées par " +#~ "une logique de wrapper pour la " +#~ "seule méthode :code:`fit()` de la classe" +#~ " abstraite :code:`NumPyClient`. 
Pour être " +#~ "précis, nous devons *post-traiter* la" +#~ " mise à jour calculée par le " +#~ "client wrapped pour l'écrêter, si " +#~ "nécessaire, à la valeur seuil fournie" +#~ " par le serveur dans le cadre " +#~ "du dictionnaire de configuration. En " +#~ "plus de cela, il peut avoir besoin" +#~ " d'effectuer un travail supplémentaire si" +#~ " l'une des clés suivantes (ou les " +#~ "deux) est également présente dans le " +#~ "dict." #~ msgid "" -#~ "When comparing the abstract class to " -#~ "its derived class :code:`PytorchMNISTClient` " -#~ "you will notice that :code:`fit` calls" -#~ " a :code:`train` function and that " -#~ ":code:`evaluate` calls a :code:`test`: " -#~ "function." +#~ ":code:`dpfedavg_noise_stddev` : Generate and " +#~ "add the specified amount of noise " +#~ "to the clipped update." #~ msgstr "" -#~ "En comparant la classe abstraite à " -#~ "sa classe dérivée :code:`PytorchMNISTClient`, " -#~ "tu remarqueras que :code:`fit` appelle " -#~ "une fonction :code:`train` et que " -#~ ":code:`evaluate` appelle une fonction " -#~ ":code:`test` :." +#~ ":code:`dpfedavg_noise_stddev` : Génère et " +#~ "ajoute la quantité de bruit spécifiée" +#~ " à la mise à jour de " +#~ "l'écrêtage." #~ msgid "" -#~ "These functions can both be found " -#~ "inside the same :code:`quickstart-" -#~ "pytorch.mnist` module:" +#~ ":code:`dpfedavg_adaptive_clip_enabled` : Augment the" +#~ " metrics dict in the :code:`FitRes` " +#~ "object being returned to the server " +#~ "with an indicator bit, calculated as " +#~ "described earlier." #~ msgstr "" -#~ "Ces fonctions se trouvent toutes deux" -#~ " dans le même module :code:`quickstart-" -#~ "pytorch.mnist` :" +#~ ":code:`dpfedavg_adaptive_clip_enabled` : Complète " +#~ "les métriques dict dans l'objet " +#~ ":code:`FitRes` renvoyé au serveur avec " +#~ "un bit indicateur, calculé comme décrit" +#~ " précédemment." + +#~ msgid "Performing the :math:`(\\epsilon, \\delta)` analysis" +#~ msgstr "Effectuer l'analyse :math:`(\\epsilon, \\delta)`" #~ msgid "" -#~ "Observe that these functions encapsulate " -#~ "regular training and test loops and " -#~ "provide :code:`fit` and :code:`evaluate` with" -#~ " final statistics for each round. You" -#~ " could substitute them with your " -#~ "custom train and test loops and " -#~ "change the network architecture, and the" -#~ " entire example would still work " -#~ "flawlessly. As a matter of fact, " -#~ "why not try and modify the code" -#~ " to an example of your liking?" +#~ "Assume you have trained for :math:`n`" +#~ " rounds with sampling fraction :math:`q`" +#~ " and noise multiplier :math:`z`. In " +#~ "order to calculate the :math:`\\epsilon` " +#~ "value this would result in for a" +#~ " particular :math:`\\delta`, the following " +#~ "script may be used." #~ msgstr "" -#~ "Observe que ces fonctions encapsulent " -#~ "les boucles d'entraînement et de test" -#~ " habituelles et fournissent à :code:`fit`" -#~ " et :code:`evaluate` les statistiques " -#~ "finales pour chaque tour. Tu pourrais" -#~ " les remplacer par tes boucles " -#~ "d'entraînement et de test personnalisées " -#~ "et changer l'architecture du réseau, et" -#~ " l'ensemble de l'exemple fonctionnerait " -#~ "toujours parfaitement. En fait, pourquoi " -#~ "ne pas essayer de modifier le code" -#~ " pour en faire un exemple qui " -#~ "te plairait ?" +#~ "Supposons que tu te sois entraîné " +#~ "pendant :math:`n` tours avec la fraction" +#~ " d'échantillonnage :math:`q` et le " +#~ "multiplicateur de bruit :math:`z`. 
Afin " +#~ "de calculer la valeur :math:`epsilon` " +#~ "qui en résulterait pour un " +#~ ":math:`\\delta` particulier, le script suivant" +#~ " peut être utilisé." -#~ msgid "Give It a Try" -#~ msgstr "Fais un essai" +#~ msgid "" +#~ "`How to run Flower using Docker " +#~ "`_" +#~ msgstr "" -#~ msgid "" -#~ "Looking through the quickstart code " -#~ "description above will have given a " -#~ "good understanding of how *clients* and" -#~ " *servers* work in Flower, how to " -#~ "run a simple experiment, and the " -#~ "internals of a client wrapper. Here " -#~ "are a few things you could try " -#~ "on your own and get more " -#~ "experience with Flower:" +#~ msgid "Enjoy building more robust and flexible ``ClientApp``s with mods!" #~ msgstr "" -#~ "En parcourant la description du code " -#~ "de démarrage rapide ci-dessus, tu " -#~ "auras acquis une bonne compréhension du" -#~ " fonctionnement des *clients* et des " -#~ "*serveurs* dans Flower, de l'exécution " -#~ "d'une expérience simple et de la " -#~ "structure interne d'un wrapper client. " -#~ "Voici quelques exemples que tu peux " -#~ "essayer par toi-même pour acquérir " -#~ "plus d'expérience avec Flower :" #~ msgid "" -#~ "Try and change :code:`PytorchMNISTClient` so" -#~ " it can accept different architectures." +#~ ":py:obj:`ClientApp `\\ " +#~ "\\(client\\_fn\\[\\, mods\\]\\)" #~ msgstr "" -#~ "Essaie de modifier :code:`PytorchMNISTClient` " -#~ "pour qu'il puisse accepter différentes " -#~ "architectures." + +#~ msgid ":py:obj:`flwr.server.driver `\\" +#~ msgstr "" + +#~ msgid "Flower driver SDK." +#~ msgstr "Serveur de Flower" + +#~ msgid "driver" +#~ msgstr "serveur" #~ msgid "" -#~ "Modify the :code:`train` function so " -#~ "that it accepts different optimizers" +#~ ":py:obj:`start_driver `\\ " +#~ "\\(\\*\\[\\, server\\_address\\, server\\, ...\\]\\)" #~ msgstr "" -#~ "Modifie la fonction :code:`train` pour " -#~ "qu'elle accepte différents optimiseurs" #~ msgid "" -#~ "Modify the :code:`test` function so that" -#~ " it proves not only the top-1 " -#~ "(regular accuracy) but also the top-5" -#~ " accuracy?" +#~ ":py:obj:`Driver `\\ " +#~ "\\(\\[driver\\_service\\_address\\, ...\\]\\)" #~ msgstr "" -#~ "Modifie la fonction :code:`test` pour " -#~ "qu'elle prouve non seulement le top-1" -#~ " (précision normale) mais aussi le " -#~ "top-5 ?" #~ msgid "" -#~ "Go larger! Try to adapt the code" -#~ " to larger images and datasets. Why" -#~ " not try training on ImageNet with" -#~ " a ResNet-50?" +#~ ":py:obj:`GrpcDriver `\\ " +#~ "\\(\\[driver\\_service\\_address\\, ...\\]\\)" #~ msgstr "" -#~ "Essaie d'adapter le code à des " -#~ "images et à des ensembles de " -#~ "données plus grands. Pourquoi ne pas " -#~ "essayer de s'entraîner sur ImageNet avec" -#~ " un ResNet-50 ?" -#~ msgid "You are ready now. Enjoy learning in a federated way!" -#~ msgstr "Tu es prêt maintenant. Profite de l'apprentissage de manière fédérée !" +#~ msgid "`GrpcDriver` provides access to the gRPC Driver API/service." +#~ msgstr "" + +#~ msgid ":py:obj:`get_nodes `\\ \\(\\)" +#~ msgstr "" #~ msgid "" -#~ "Flower provides differential privacy (DP) " -#~ "wrapper classes for the easy integration" -#~ " of the central DP guarantees " -#~ "provided by DP-FedAvg into training " -#~ "pipelines defined in any of the " -#~ "various ML frameworks that Flower is " -#~ "compatible with." +#~ ":py:obj:`pull_task_res " +#~ "`\\ \\(task\\_ids\\)" +#~ msgstr "" + +#~ msgid "Get task results." 
#~ msgstr "" -#~ "Flower fournit des classes d'enveloppe " -#~ "de confidentialité différentielle (DP) pour" -#~ " l'intégration facile des garanties " -#~ "centrales de DP fournies par DP-" -#~ "FedAvg dans les pipelines de formation" -#~ " définis dans n'importe lequel des " -#~ "divers cadres de ML avec lesquels " -#~ "Flower est compatible." #~ msgid "" -#~ "Please note that these components are" -#~ " still experimental; the correct " -#~ "configuration of DP for a specific " -#~ "task is still an unsolved problem." +#~ ":py:obj:`push_task_ins " +#~ "`\\ " +#~ "\\(task\\_ins\\_list\\)" +#~ msgstr "" + +#~ msgid "Schedule tasks." +#~ msgstr "" + +#~ msgid "GrpcDriver" +#~ msgstr "" + +#~ msgid ":py:obj:`connect `\\ \\(\\)" +#~ msgstr "" + +#~ msgid "Connect to the Driver API." #~ msgstr "" -#~ "Note que ces composants sont encore " -#~ "expérimentaux, la configuration correcte du" -#~ " DP pour une tâche spécifique est " -#~ "encore un problème non résolu." #~ msgid "" -#~ "The name DP-FedAvg is misleading " -#~ "since it can be applied on top " -#~ "of any FL algorithm that conforms " -#~ "to the general structure prescribed by" -#~ " the FedOpt family of algorithms." +#~ ":py:obj:`create_run " +#~ "`\\ \\(req\\)" #~ msgstr "" -#~ "Le nom DP-FedAvg est trompeur car" -#~ " il peut être appliqué à n'importe" -#~ " quel algorithme FL qui se conforme" -#~ " à la structure générale prescrite " -#~ "par la famille d'algorithmes FedOpt." -#~ msgid "DP-FedAvg" -#~ msgstr "DP-FedAvg" +#~ msgid "Request for run ID." +#~ msgstr "Demande pour une nouvelle Flower Baseline" #~ msgid "" -#~ "DP-FedAvg, originally proposed by " -#~ "McMahan et al. [mcmahan]_ and extended" -#~ " by Andrew et al. [andrew]_, is " -#~ "essentially FedAvg with the following " -#~ "modifications." +#~ ":py:obj:`disconnect " +#~ "`\\ \\(\\)" +#~ msgstr "" + +#~ msgid "Disconnect from the Driver API." #~ msgstr "" -#~ "DP-FedAvg, proposé à l'origine par " -#~ "McMahan et al. [mcmahan]_ et étendu " -#~ "par Andrew et al. [andrew]_, est " -#~ "essentiellement FedAvg avec les modifications" -#~ " suivantes." #~ msgid "" -#~ "**Clipping** : The influence of each " -#~ "client's update is bounded by clipping" -#~ " it. This is achieved by enforcing" -#~ " a cap on the L2 norm of " -#~ "the update, scaling it down if " -#~ "needed." +#~ ":py:obj:`get_nodes `\\" +#~ " \\(req\\)" #~ msgstr "" -#~ "**Clipping** : L'influence de la mise" -#~ " à jour de chaque client est " -#~ "limitée en l'écrêtant. Ceci est réalisé" -#~ " en imposant un plafond à la " -#~ "norme L2 de la mise à jour, " -#~ "en la réduisant si nécessaire." + +#~ msgid "Get client IDs." +#~ msgstr "Moteur client Edge" #~ msgid "" -#~ "**Noising** : Gaussian noise, calibrated " -#~ "to the clipping threshold, is added " -#~ "to the average computed at the " -#~ "server." +#~ ":py:obj:`pull_task_res " +#~ "`\\ \\(req\\)" #~ msgstr "" -#~ "**Bruit** : un bruit gaussien, calibré" -#~ " sur le seuil d'écrêtage, est ajouté" -#~ " à la moyenne calculée au niveau " -#~ "du serveur." #~ msgid "" -#~ "The distribution of the update norm " -#~ "has been shown to vary from " -#~ "task-to-task and to evolve as " -#~ "training progresses. This variability is " -#~ "crucial in understanding its impact on" -#~ " differential privacy guarantees, emphasizing " -#~ "the need for an adaptive approach " -#~ "[andrew]_ that continuously adjusts the " -#~ "clipping threshold to track a " -#~ "prespecified quantile of the update norm" -#~ " distribution." 
+#~ ":py:obj:`push_task_ins " +#~ "`\\ \\(req\\)" #~ msgstr "" -#~ "Il a été démontré que la " -#~ "distribution de la norme de mise à" -#~ " jour varie d'une tâche à l'autre " -#~ "et évolue au fur et à mesure " -#~ "de la formation. C'est pourquoi nous " -#~ "utilisons une approche adaptative [andrew]_" -#~ " qui ajuste continuellement le seuil " -#~ "d'écrêtage pour suivre un quantile " -#~ "prédéfini de la distribution de la " -#~ "norme de mise à jour." -#~ msgid "Simplifying Assumptions" -#~ msgstr "Simplifier les hypothèses" +#~ msgid "" +#~ "Optionally specify the type of actor " +#~ "to use. The actor object, which " +#~ "persists throughout the simulation, will " +#~ "be the process in charge of " +#~ "running the clients' jobs (i.e. their" +#~ " `fit()` method)." +#~ msgstr "" #~ msgid "" -#~ "We make (and attempt to enforce) a" -#~ " number of assumptions that must be" -#~ " satisfied to ensure that the " -#~ "training process actually realizes the " -#~ ":math:`(\\epsilon, \\delta)` guarantees the " -#~ "user has in mind when configuring " -#~ "the setup." +#~ "Much effort went into a completely " +#~ "restructured Flower docs experience. The " +#~ "documentation on [flower.ai/docs](flower.ai/docs) is" +#~ " now divided into Flower Framework, " +#~ "Flower Baselines, Flower Android SDK, " +#~ "Flower iOS SDK, and code example " +#~ "projects." #~ msgstr "" -#~ "Nous formulons (et tentons d'appliquer) " -#~ "un certain nombre d'hypothèses qui " -#~ "doivent être satisfaites pour que le " -#~ "processus de formation réalise réellement " -#~ "les garanties :math:`(\\epsilon, \\delta)` que" -#~ " l'utilisateur a à l'esprit lorsqu'il " -#~ "configure l'installation." #~ msgid "" -#~ "**Fixed-size subsampling** :Fixed-size " -#~ "subsamples of the clients must be " -#~ "taken at each round, as opposed to" -#~ " variable-sized Poisson subsamples." +#~ "Flower usage examples used to be " +#~ "bundled with Flower in a package " +#~ "called ``flwr_example``. We are migrating " +#~ "those examples to standalone projects to" +#~ " make them easier to use. All " +#~ "new examples are based in the " +#~ "directory `examples " +#~ "`_." #~ msgstr "" -#~ "**Sous-échantillonnage de taille fixe** " -#~ ":Des sous-échantillons de taille fixe" -#~ " des clients doivent être prélevés à" -#~ " chaque tour, par opposition aux " -#~ "sous-échantillons de Poisson de taille " -#~ "variable." +#~ "Les exemples d'utilisation de Flower " +#~ "étaient auparavant regroupés avec Flower " +#~ "dans un paquet appelé ``flwr_example``. " +#~ "Nous migrons ces exemples vers des " +#~ "projets autonomes pour les rendre plus" +#~ " faciles à utiliser. Tous les " +#~ "nouveaux exemples sont basés dans le " +#~ "répertoire ``examples " +#~ "`_." + +#~ msgid "Quickstart TensorFlow/Keras" +#~ msgstr "Démarrage rapide de TensorFlow/Keras" + +#~ msgid "Legacy Examples (`flwr_example`)" +#~ msgstr "Exemples hérités (`flwr_example`)" #~ msgid "" -#~ "**Unweighted averaging** : The contributions" -#~ " from all the clients must weighted" -#~ " equally in the aggregate to " -#~ "eliminate the requirement for the server" -#~ " to know in advance the sum of" -#~ " the weights of all clients available" -#~ " for selection." +#~ "The useage examples in `flwr_example` " +#~ "are deprecated and will be removed " +#~ "in the future. New examples are " +#~ "provided as standalone projects in " +#~ "`examples `_." 
#~ msgstr "" -#~ "**Moyenne non pondérée** : Les " -#~ "contributions de tous les clients " -#~ "doivent être pondérées de façon égale" -#~ " dans l'ensemble afin que le serveur" -#~ " n'ait pas à connaître à l'avance " -#~ "la somme des poids de tous les " -#~ "clients disponibles pour la sélection." +#~ "Les exemples d'utilisation dans `flwr_example`" +#~ " sont obsolètes et seront supprimés à" +#~ " l'avenir. De nouveaux exemples sont " +#~ "fournis en tant que projets autonomes" +#~ " dans `examples " +#~ "`_." + +#~ msgid "Extra Dependencies" +#~ msgstr "Dépendances supplémentaires" #~ msgid "" -#~ "**No client failures** : The set " -#~ "of available clients must stay constant" -#~ " across all rounds of training. In" -#~ " other words, clients cannot drop out" -#~ " or fail." +#~ "The core Flower framework keeps a " +#~ "minimal set of dependencies. The " +#~ "examples demonstrate Flower in the " +#~ "context of different machine learning " +#~ "frameworks, so additional dependencies need" +#~ " to be installed before an example" +#~ " can be run." #~ msgstr "" -#~ "**Aucune défaillance de client** : " -#~ "L'ensemble des clients disponibles doit " -#~ "rester constant pendant toutes les " -#~ "séries de formation. En d'autres termes," -#~ " les clients ne peuvent pas " -#~ "abandonner ou échouer." +#~ "Le noyau du framework Flower conserve" +#~ " un ensemble minimal de dépendances. " +#~ "Les exemples démontrent Flower dans le" +#~ " contexte de différents frameworks " +#~ "d'apprentissage automatique, de sorte que " +#~ "des dépendances supplémentaires doivent être" +#~ " installées avant qu'un exemple puisse " +#~ "être exécuté." + +#~ msgid "For PyTorch examples::" +#~ msgstr "Pour les exemples de PyTorch: :" + +#~ msgid "For TensorFlow examples::" +#~ msgstr "Pour les exemples de TensorFlow : :" + +#~ msgid "For both PyTorch and TensorFlow examples::" +#~ msgstr "Pour les exemples PyTorch et TensorFlow: :" #~ msgid "" -#~ "The first two are useful for " -#~ "eliminating a multitude of complications " -#~ "associated with calibrating the noise to" -#~ " the clipping threshold, while the " -#~ "third one is required to comply " -#~ "with the assumptions of the privacy " -#~ "analysis." +#~ "Please consult :code:`pyproject.toml` for a" +#~ " full list of possible extras " +#~ "(section :code:`[tool.poetry.extras]`)." #~ msgstr "" -#~ "Les deux premiers sont utiles pour " -#~ "éliminer une multitude de complications " -#~ "liées au calibrage du bruit en " -#~ "fonction du seuil d'écrêtage, tandis que" -#~ " le troisième est nécessaire pour se" -#~ " conformer aux hypothèses de l'analyse " -#~ "de la vie privée." +#~ "Tu peux consulter :code:`pyproject.toml` pour" +#~ " une liste complète des extras " +#~ "possibles (section :code:`[tool.poetry.extras]`)." + +#~ msgid "PyTorch Examples" +#~ msgstr "Exemples de PyTorch" #~ msgid "" -#~ "These restrictions are in line with " -#~ "constraints imposed by Andrew et al. " -#~ "[andrew]_." +#~ "Our PyTorch examples are based on " +#~ "PyTorch 1.7. They should work with " +#~ "other releases as well. So far, we" +#~ " provide the following examples." #~ msgstr "" -#~ "Ces restrictions sont conformes aux " -#~ "contraintes imposées par Andrew et al." -#~ " [andrew]_." +#~ "Nos exemples PyTorch sont basés sur " +#~ "PyTorch 1.7. Ils devraient fonctionner " +#~ "avec d'autres versions également. Jusqu'à " +#~ "présent, nous fournissons les exemples " +#~ "suivants." 
-#~ msgid "Customizable Responsibility for Noise injection" -#~ msgstr "Responsabilité personnalisable pour l'injection de bruit" +#~ msgid "CIFAR-10 Image Classification" +#~ msgstr "Classification d'images CIFAR-10" #~ msgid "" -#~ "In contrast to other implementations " -#~ "where the addition of noise is " -#~ "performed at the server, you can " -#~ "configure the site of noise injection" -#~ " to better match your threat model." -#~ " We provide users with the " -#~ "flexibility to set up the training " -#~ "such that each client independently adds" -#~ " a small amount of noise to the" -#~ " clipped update, with the result that" -#~ " simply aggregating the noisy updates " -#~ "is equivalent to the explicit addition" -#~ " of noise to the non-noisy " -#~ "aggregate at the server." +#~ "`CIFAR-10 and CIFAR-100 " +#~ "`_ are " +#~ "popular RGB image datasets. The Flower" +#~ " CIFAR-10 example uses PyTorch to " +#~ "train a simple CNN classifier in a" +#~ " federated learning setup with two " +#~ "clients." #~ msgstr "" -#~ "Contrairement à d'autres implémentations où" -#~ " l'ajout de bruit est effectué au " -#~ "niveau du serveur, tu peux configurer" -#~ " le site d'injection de bruit pour" -#~ " qu'il corresponde mieux à ton modèle" -#~ " de menace. Nous offrons aux " -#~ "utilisateurs la possibilité de configurer " -#~ "l'entraînement de telle sorte que chaque" -#~ " client ajoute indépendamment une petite" -#~ " quantité de bruit à la mise à" -#~ " jour écrêtée, ce qui fait que " -#~ "le simple fait d'agréger les mises " -#~ "à jour bruyantes équivaut à l'ajout " -#~ "explicite de bruit à l'agrégat non " -#~ "bruyant au niveau du serveur." +#~ "`CIFAR-10 et CIFAR-100 " +#~ "`_ sont des" +#~ " ensembles de données d'images RVB " +#~ "populaires. L'exemple Flower CIFAR-10 utilise" +#~ " PyTorch pour former un classificateur " +#~ "CNN simple dans une configuration " +#~ "d'apprentissage fédéré avec deux clients." -#~ msgid "" -#~ "To be precise, if we let :math:`m`" -#~ " be the number of clients sampled " -#~ "each round and :math:`\\sigma_\\Delta` be " -#~ "the scale of the total Gaussian " -#~ "noise that needs to be added to" -#~ " the sum of the model updates, " -#~ "we can use simple maths to show" -#~ " that this is equivalent to each " -#~ "client adding noise with scale " -#~ ":math:`\\sigma_\\Delta/\\sqrt{m}`." +#~ msgid "First, start a Flower server:" +#~ msgstr "Tout d'abord, démarre un serveur Flower :" + +#~ msgid "$ ./src/py/flwr_example/pytorch_cifar/run-server.sh" +#~ msgstr "$ ./src/py/flwr_example/pytorch_cifar/run-server.sh" + +#~ msgid "Then, start the two clients in a new terminal window:" #~ msgstr "" -#~ "Pour être précis, si nous laissons " -#~ ":math:`m` être le nombre de clients " -#~ "échantillonnés à chaque tour et " -#~ ":math:\\sigma_\\Delta` être l'échelle du bruit" -#~ " gaussien total qui doit être ajouté" -#~ " à la somme des mises à jour" -#~ " du modèle, nous pouvons utiliser des" -#~ " mathématiques simples pour montrer que " -#~ "cela équivaut à ce que chaque " -#~ "client ajoute du bruit avec l'échelle" -#~ " :math:\\sigma_\\Delta/\\sqrt{m}`." +#~ "Ensuite, démarre les deux clients dans" +#~ " une nouvelle fenêtre de terminal :" -#~ msgid "Wrapper-based approach" -#~ msgstr "Approche basée sur l'enveloppe" +#~ msgid "$ ./src/py/flwr_example/pytorch_cifar/run-clients.sh" +#~ msgstr "$ ./src/py/flwr_example/pytorch_cifar/run-clients.sh" + +#~ msgid "For more details, see :code:`src/py/flwr_example/pytorch_cifar`." 
+#~ msgstr "Pour plus de détails, voir :code:`src/py/flwr_example/pytorch_cifar`." + +#~ msgid "ImageNet-2012 Image Classification" +#~ msgstr "ImageNet-2012 Classification des images" #~ msgid "" -#~ "Introducing DP to an existing workload" -#~ " can be thought of as adding an" -#~ " extra layer of security around it." -#~ " This inspired us to provide the " -#~ "additional server and client-side logic" -#~ " needed to make the training process" -#~ " differentially private as wrappers for " -#~ "instances of the :code:`Strategy` and " -#~ ":code:`NumPyClient` abstract classes respectively." -#~ " This wrapper-based approach has the" -#~ " advantage of being easily composable " -#~ "with other wrappers that someone might" -#~ " contribute to the Flower library in" -#~ " the future, e.g., for secure " -#~ "aggregation. Using Inheritance instead can " -#~ "be tedious because that would require" -#~ " the creation of new sub- classes " -#~ "every time a new class implementing " -#~ ":code:`Strategy` or :code:`NumPyClient` is " -#~ "defined." +#~ "`ImageNet-2012 `_ is " +#~ "one of the major computer vision " +#~ "datasets. The Flower ImageNet example " +#~ "uses PyTorch to train a ResNet-18 " +#~ "classifier in a federated learning setup" +#~ " with ten clients." #~ msgstr "" -#~ "L'introduction du DP dans une charge " -#~ "de travail existante peut être " -#~ "considérée comme l'ajout d'une couche de" -#~ " sécurité supplémentaire autour d'elle. " -#~ "Cela nous a incités à fournir la" -#~ " logique supplémentaire côté serveur et " -#~ "côté client nécessaire pour rendre le" -#~ " processus de formation différentiellement " -#~ "privé en tant qu'enveloppes pour les " -#~ "instances des classes abstraites " -#~ ":code:`Strategy` et :code:`NumPyClient` " -#~ "respectivement. Cette approche basée sur " -#~ "l'enveloppe a l'avantage d'être facilement " -#~ "composable avec d'autres enveloppes que " -#~ "quelqu'un pourrait contribuer à la " -#~ "bibliothèque Flower à l'avenir, par " -#~ "exemple, pour l'agrégation sécurisée. " -#~ "L'utilisation de l'héritage à la place" -#~ " peut être fastidieuse car cela " -#~ "nécessiterait la création de nouvelles " -#~ "sous-classes chaque fois qu'une nouvelle" -#~ " classe mettant en œuvre :code:`Strategy`" -#~ " ou :code:`NumPyClient` est définie." +#~ "`ImageNet-2012 `_ est " +#~ "l'un des principaux ensembles de données" +#~ " de vision par ordinateur. L'exemple " +#~ "Flower ImageNet utilise PyTorch pour " +#~ "entraîner un classificateur ResNet-18 dans " +#~ "une configuration d'apprentissage fédéré avec" +#~ " dix clients." + +#~ msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-server.sh" +#~ msgstr "$ ./src/py/flwr_example/pytorch_imagenet/run-server.sh" + +#~ msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-clients.sh" +#~ msgstr "$ ./src/py/flwr_example/pytorch_imagenet/run-clients.sh" + +#~ msgid "For more details, see :code:`src/py/flwr_example/pytorch_imagenet`." +#~ msgstr "" +#~ "Pour plus de détails, voir " +#~ ":code:`src/py/flwr_example/pytorch_imagenet`." + +#~ msgid "TensorFlow Examples" +#~ msgstr "Exemples de TensorFlow" #~ msgid "" -#~ "The first version of our solution " -#~ "was to define a decorator whose " -#~ "constructor accepted, among other things, " -#~ "a boolean-valued variable indicating " -#~ "whether adaptive clipping was to be " -#~ "enabled or not. 
We quickly realized " -#~ "that this would clutter its " -#~ ":code:`__init__()` function with variables " -#~ "corresponding to hyperparameters of adaptive" -#~ " clipping that would remain unused " -#~ "when it was disabled. A cleaner " -#~ "implementation could be achieved by " -#~ "splitting the functionality into two " -#~ "decorators, :code:`DPFedAvgFixed` and " -#~ ":code:`DPFedAvgAdaptive`, with the latter sub-" -#~ " classing the former. The constructors " -#~ "for both classes accept a boolean " -#~ "parameter :code:`server_side_noising`, which, as " -#~ "the name suggests, determines where " -#~ "noising is to be performed." +#~ "Our TensorFlow examples are based on " +#~ "TensorFlow 2.0 or newer. So far, " +#~ "we provide the following examples." #~ msgstr "" -#~ "La première version de notre solution" -#~ " consistait à définir un décorateur " -#~ "dont le constructeur acceptait, entre " -#~ "autres, une variable à valeur booléenne" -#~ " indiquant si l'écrêtage adaptatif devait" -#~ " être activé ou non. Nous nous " -#~ "sommes rapidement rendu compte que cela" -#~ " encombrerait sa fonction :code:`__init__()` " -#~ "avec des variables correspondant aux " -#~ "hyperparamètres de l'écrêtage adaptatif qui" -#~ " resteraient inutilisées lorsque celui-ci" -#~ " était désactivé. Une implémentation plus" -#~ " propre pourrait être obtenue en " -#~ "divisant la fonctionnalité en deux " -#~ "décorateurs, :code:`DPFedAvgFixed` et " -#~ ":code:`DPFedAvgAdaptive`, le second sous-" -#~ "classant le premier. Les constructeurs " -#~ "des deux classes acceptent un paramètre" -#~ " booléen :code:`server_side_noising` qui, comme" -#~ " son nom l'indique, détermine l'endroit " -#~ "où le noising doit être effectué." +#~ "Nos exemples TensorFlow sont basés sur" +#~ " TensorFlow 2.0 ou une version plus" +#~ " récente. Jusqu'à présent, nous te " +#~ "proposons les exemples suivants." + +#~ msgid "Fashion-MNIST Image Classification" +#~ msgstr "Classification d'images Fashion-MNIST" #~ msgid "" -#~ "The server-side capabilities required " -#~ "for the original version of DP-" -#~ "FedAvg, i.e., the one which performed" -#~ " fixed clipping, can be completely " -#~ "captured with the help of wrapper " -#~ "logic for just the following two " -#~ "methods of the :code:`Strategy` abstract " -#~ "class." +#~ "`Fashion-MNIST `_ is often used as " +#~ "the \"Hello, world!\" of machine " +#~ "learning. We follow this tradition and" +#~ " provide an example which samples " +#~ "random local datasets from Fashion-MNIST" +#~ " and trains a simple image " +#~ "classification model over those partitions." #~ msgstr "" -#~ "Les capacités côté serveur requises pour" -#~ " la version originale de DP-FedAvg," -#~ " c'est-à-dire celle qui effectue un " -#~ "écrêtage fixe, peuvent être entièrement " -#~ "capturées à l'aide d'une logique " -#~ "d'enveloppement pour les deux méthodes " -#~ "suivantes de la classe abstraite " -#~ ":code:`Strategy`." +#~ "nous suivons cette tradition et " +#~ "fournissons un exemple qui échantillonne " +#~ "des ensembles de données locales " +#~ "aléatoires de Fashion-MNIST et entraîne" +#~ " un modèle simple de classification " +#~ "d'images sur ces partitions." 
+ +#~ msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-server.sh" +#~ msgstr "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-server.sh" + +#~ msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-clients.sh" +#~ msgstr "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-clients.sh" #~ msgid "" -#~ ":code:`configure_fit()` : The config " -#~ "dictionary being sent by the wrapped " -#~ ":code:`Strategy` to each client needs to" -#~ " be augmented with an additional " -#~ "value equal to the clipping threshold" -#~ " (keyed under :code:`dpfedavg_clip_norm`) and," -#~ " if :code:`server_side_noising=true`, another one" -#~ " equal to the scale of the " -#~ "Gaussian noise that needs to be " -#~ "added at the client (keyed under " -#~ ":code:`dpfedavg_noise_stddev`). This entails " -#~ "*post*-processing of the results returned " -#~ "by the wrappee's implementation of " -#~ ":code:`configure_fit()`." +#~ "For more details, see " +#~ ":code:`src/py/flwr_example/tensorflow_fashion_mnist`." #~ msgstr "" -#~ ":code:`configure_fit()` : Le dictionnaire de" -#~ " configuration envoyé par la " -#~ ":code:`Strategy` enveloppée à chaque client" -#~ " doit être augmenté d'une valeur " -#~ "supplémentaire égale au seuil d'écrêtage " -#~ "(indiqué sous :code:`dpfedavg_clip_norm`) et, " -#~ "si :code:`server_side_noising=true`, d'une autre " -#~ "égale à l'échelle du bruit gaussien " -#~ "qui doit être ajouté au client " -#~ "(indiqué sous :code:`dpfedavg_noise_stddev`)." +#~ "Pour plus de détails, voir " +#~ ":code:`src/py/flwr_example/tensorflow_fashion_mnist`." #~ msgid "" -#~ ":code:`aggregate_fit()`: We check whether any" -#~ " of the sampled clients dropped out" -#~ " or failed to upload an update " -#~ "before the round timed out. In " -#~ "that case, we need to abort the" -#~ " current round, discarding any successful" -#~ " updates that were received, and move" -#~ " on to the next one. On the " -#~ "other hand, if all clients responded " -#~ "successfully, we must force the " -#~ "averaging of the updates to happen " -#~ "in an unweighted manner by intercepting" -#~ " the :code:`parameters` field of " -#~ ":code:`FitRes` for each received update " -#~ "and setting it to 1. Furthermore, " -#~ "if :code:`server_side_noising=true`, each update " -#~ "is perturbed with an amount of " -#~ "noise equal to what it would have" -#~ " been subjected to had client-side" -#~ " noising being enabled. This entails " -#~ "*pre*-processing of the arguments to " -#~ "this method before passing them on " -#~ "to the wrappee's implementation of " -#~ ":code:`aggregate_fit()`." +#~ "MXNet is no longer maintained and " +#~ "has been moved into `Attic " +#~ "`_. As a " +#~ "result, we would encourage you to " +#~ "use other ML frameworks alongise Flower," +#~ " for example, PyTorch. This tutorial " +#~ "might be removed in future versions " +#~ "of Flower." #~ msgstr "" -#~ ":code:`aggregate_fit()`: We check whether any" -#~ " of the sampled clients dropped out" -#~ " or failed to upload an update " -#~ "before the round timed out. In " -#~ "that case, we need to abort the" -#~ " current round, discarding any successful" -#~ " updates that were received, and move" -#~ " on to the next one. On the " -#~ "other hand, if all clients responded " -#~ "successfully, we must force the " -#~ "averaging of the updates to happen " -#~ "in an unweighted manner by intercepting" -#~ " the :code:`parameters` field of " -#~ ":code:`FitRes` for each received update " -#~ "and setting it to 1. 
Furthermore, " -#~ "if :code:`server_side_noising=true`, each update " -#~ "is perturbed with an amount of " -#~ "noise equal to what it would have" -#~ " been subjected to had client-side" -#~ " noising being enabled. This entails " -#~ "*pre*-processing of the arguments to " -#~ "this method before passing them on " -#~ "to the wrappee's implementation of " -#~ ":code:`aggregate_fit()`." #~ msgid "" -#~ "We can't directly change the aggregation" -#~ " function of the wrapped strategy to" -#~ " force it to add noise to the" -#~ " aggregate, hence we simulate client-" -#~ "side noising to implement server-side" -#~ " noising." +#~ "Now that you have known how " +#~ "federated XGBoost work with Flower, it's" +#~ " time to run some more comprehensive" +#~ " experiments by customising the " +#~ "experimental settings. In the xgboost-" +#~ "comprehensive example (`full code " +#~ "`_), we provide more options " +#~ "to define various experimental setups, " +#~ "including aggregation strategies, data " +#~ "partitioning and centralised/distributed evaluation." +#~ " We also support `Flower simulation " +#~ "`_ making it easy to " +#~ "simulate large client cohorts in a " +#~ "resource-aware manner. Let's take a " +#~ "look!" #~ msgstr "" -#~ "Nous ne pouvons pas modifier directement" -#~ " la fonction d'agrégation de la " -#~ "stratégie enveloppée pour la forcer à" -#~ " ajouter du bruit à l'agrégat, c'est" -#~ " pourquoi nous simulons le bruit côté" -#~ " client pour mettre en œuvre le " -#~ "bruit côté serveur." -#~ msgid "" -#~ "These changes have been put together " -#~ "into a class called :code:`DPFedAvgFixed`, " -#~ "whose constructor accepts the strategy " -#~ "being decorated, the clipping threshold " -#~ "and the number of clients sampled " -#~ "every round as compulsory arguments. The" -#~ " user is expected to specify the " -#~ "clipping threshold since the order of" -#~ " magnitude of the update norms is " -#~ "highly dependent on the model being " -#~ "trained and providing a default value" -#~ " would be misleading. The number of" -#~ " clients sampled at every round is" -#~ " required to calculate the amount of" -#~ " noise that must be added to " -#~ "each individual update, either by the" -#~ " server or the clients." +#~ msgid "|31e4b1afa87c4b968327bbeafbf184d4|" +#~ msgstr "" + +#~ msgid "|c9d935b4284e4c389a33d86b33e07c0a|" +#~ msgstr "" + +#~ msgid "|00727b5faffb468f84dd1b03ded88638|" +#~ msgstr "" + +#~ msgid "|daf0cf0ff4c24fd29439af78416cf47b|" +#~ msgstr "" + +#~ msgid "|9f093007080d471d94ca90d3e9fde9b6|" +#~ msgstr "" + +#~ msgid "|46a26e6150e0479fbd3dfd655f36eb13|" +#~ msgstr "" + +#~ msgid "|3daba297595c4c7fb845d90404a6179a|" +#~ msgstr "" + +#~ msgid "|5769874fa9c4455b80b2efda850d39d7|" +#~ msgstr "" + +#~ msgid "|ba47ffb421814b0f8f9fa5719093d839|" +#~ msgstr "" + +#~ msgid "|aeac5bf79cbf497082e979834717e01b|" +#~ msgstr "" + +#~ msgid "|ce27ed4bbe95459dba016afc42486ba2|" #~ msgstr "" -#~ "Ces modifications ont été regroupées " -#~ "dans une classe appelée :code:`DPFedAvgFixed`," -#~ " dont le constructeur accepte la " -#~ "stratégie décorée, le seuil d'écrêtage " -#~ "et le nombre de clients échantillonnés" -#~ " à chaque tour comme arguments " -#~ "obligatoires. L'utilisateur est censé " -#~ "spécifier le seuil d'écrêtage car " -#~ "l'ordre de grandeur des normes de " -#~ "mise à jour dépend fortement du " -#~ "modèle formé et fournir une valeur " -#~ "par défaut serait trompeur. 
Le nombre" -#~ " de clients échantillonnés à chaque " -#~ "tour est nécessaire pour calculer la " -#~ "quantité de bruit qui doit être " -#~ "ajoutée à chaque mise à jour " -#~ "individuelle, que ce soit par le " -#~ "serveur ou par les clients." -#~ msgid "" -#~ "The additional functionality required to " -#~ "facilitate adaptive clipping has been " -#~ "provided in :code:`DPFedAvgAdaptive`, a " -#~ "subclass of :code:`DPFedAvgFixed`. It " -#~ "overrides the above-mentioned methods to" -#~ " do the following." +#~ msgid "|ae94a7f71dda443cbec2385751427d41|" #~ msgstr "" -#~ "La fonctionnalité supplémentaire nécessaire " -#~ "pour faciliter l'écrêtage adaptatif a " -#~ "été fournie dans :code:`DPFedAvgAdaptive`, une" -#~ " sous-classe de :code:`DPFedAvgFixed`. Elle" -#~ " remplace les méthodes mentionnées ci-" -#~ "dessus pour effectuer les opérations " -#~ "suivantes." -#~ msgid "" -#~ ":code:`configure_fit()` : It intercepts the" -#~ " config dict returned by " -#~ ":code:`super.configure_fit()` to add the " -#~ "key-value pair " -#~ ":code:`dpfedavg_adaptive_clip_enabled:True` to it, " -#~ "which the client interprets as an " -#~ "instruction to include an indicator bit" -#~ " (1 if update norm <= clipping " -#~ "threshold, 0 otherwise) in the results" -#~ " returned by it." +#~ msgid "|e61fce4d43d243e7bb08bdde97d81ce6|" #~ msgstr "" -#~ ":code:`configure_fit()` : Il intercepte le " -#~ "dict de configuration renvoyé par " -#~ ":code:`super.configure_fit()` pour y ajouter " -#~ "la paire clé-valeur " -#~ ":code:`dpfedavg_adaptive_clip_enabled:True`, que le " -#~ "client interprète comme une instruction " -#~ "d'inclure un bit indicateur (1 si " -#~ "la norme de mise à jour <= " -#~ "seuil d'écrêtage, 0 sinon) dans les " -#~ "résultats qu'il renvoie." -#~ msgid "" -#~ ":code:`aggregate_fit()` : It follows a " -#~ "call to :code:`super.aggregate_fit()` with one" -#~ " to :code:`__update_clip_norm__()`, a procedure" -#~ " which adjusts the clipping threshold " -#~ "on the basis of the indicator bits" -#~ " received from the sampled clients." +#~ msgid "|08cb60859b07461588fe44e55810b050|" #~ msgstr "" -#~ ":code:`aggregate_fit()` : Il fait suivre " -#~ "un appel à :code:`super.aggregate_fit()` d'un" -#~ " appel à :code:`__update_clip_norm__()`, une " -#~ "procédure qui ajuste le seuil d'écrêtage" -#~ " sur la base des bits indicateurs " -#~ "reçus des clients échantillonnés." #~ msgid "" -#~ "The client-side capabilities required " -#~ "can be completely captured through " -#~ "wrapper logic for just the :code:`fit()`" -#~ " method of the :code:`NumPyClient` abstract" -#~ " class. To be precise, we need " -#~ "to *post-process* the update computed" -#~ " by the wrapped client to clip " -#~ "it, if necessary, to the threshold " -#~ "value supplied by the server as " -#~ "part of the config dictionary. In " -#~ "addition to this, it may need to" -#~ " perform some extra work if either" -#~ " (or both) of the following keys " -#~ "are also present in the dict." +#~ "Flower provides pre-made docker images" +#~ " on `Docker Hub " +#~ "`_ that include" +#~ " all necessary dependencies for running " +#~ "the server. You can also build " +#~ "your own custom docker images from " +#~ "scratch with a different version of " +#~ "Python or Ubuntu if that is what" +#~ " you need. In this guide, we " +#~ "will explain what images exist and " +#~ "how to build them locally." 
#~ msgstr "" -#~ "Les capacités requises côté client " -#~ "peuvent être entièrement capturées par " -#~ "une logique de wrapper pour la " -#~ "seule méthode :code:`fit()` de la classe" -#~ " abstraite :code:`NumPyClient`. Pour être " -#~ "précis, nous devons *post-traiter* la" -#~ " mise à jour calculée par le " -#~ "client wrapped pour l'écrêter, si " -#~ "nécessaire, à la valeur seuil fournie" -#~ " par le serveur dans le cadre " -#~ "du dictionnaire de configuration. En " -#~ "plus de cela, il peut avoir besoin" -#~ " d'effectuer un travail supplémentaire si" -#~ " l'une des clés suivantes (ou les " -#~ "deux) est également présente dans le " -#~ "dict." #~ msgid "" -#~ ":code:`dpfedavg_noise_stddev` : Generate and " -#~ "add the specified amount of noise " -#~ "to the clipped update." +#~ "Currently, Flower provides two images, a" +#~ " base image and a server image. " +#~ "There will also be a client image" +#~ " soon. The base image, as the " +#~ "name suggests, contains basic dependencies " +#~ "that both the server and the " +#~ "client need. This includes system " +#~ "dependencies, Python and Python tools. " +#~ "The server image is based on the" +#~ " base image, but it additionally " +#~ "installs the Flower server using " +#~ "``pip``." #~ msgstr "" -#~ ":code:`dpfedavg_noise_stddev` : Génère et " -#~ "ajoute la quantité de bruit spécifiée" -#~ " à la mise à jour de " -#~ "l'écrêtage." #~ msgid "" -#~ ":code:`dpfedavg_adaptive_clip_enabled` : Augment the" -#~ " metrics dict in the :code:`FitRes` " -#~ "object being returned to the server " -#~ "with an indicator bit, calculated as " -#~ "described earlier." +#~ "Both, base and server image are " +#~ "configured via build arguments. Through " +#~ "build arguments, we can make our " +#~ "build more flexible. For example, in " +#~ "the base image, we can specify the" +#~ " version of Python to install using" +#~ " the ``PYTHON_VERSION`` build argument. " +#~ "Some of the build arguments have " +#~ "default values, others must be specified" +#~ " when building the image. All " +#~ "available build arguments for each image" +#~ " are listed in one of the " +#~ "tables below." #~ msgstr "" -#~ ":code:`dpfedavg_adaptive_clip_enabled` : Complète " -#~ "les métriques dict dans l'objet " -#~ ":code:`FitRes` renvoyé au serveur avec " -#~ "un bit indicateur, calculé comme décrit" -#~ " précédemment." -#~ msgid "Performing the :math:`(\\epsilon, \\delta)` analysis" -#~ msgstr "Effectuer l'analyse :math:`(\\epsilon, \\delta)`" +#~ msgid "Defaults to ``flwr/server``." +#~ msgstr "" -#~ msgid "" -#~ "Assume you have trained for :math:`n`" -#~ " rounds with sampling fraction :math:`q`" -#~ " and noise multiplier :math:`z`. In " -#~ "order to calculate the :math:`\\epsilon` " -#~ "value this would result in for a" -#~ " particular :math:`\\delta`, the following " -#~ "script may be used." +#~ msgid "``BASE_IMAGE_TAG``" +#~ msgstr "" + +#~ msgid "The image tag of the base image." +#~ msgstr "" + +#~ msgid "Defaults to ``py3.11-ubuntu22.04``." #~ msgstr "" -#~ "Supposons que tu te sois entraîné " -#~ "pendant :math:`n` tours avec la fraction" -#~ " d'échantillonnage :math:`q` et le " -#~ "multiplicateur de bruit :math:`z`. Afin " -#~ "de calculer la valeur :math:`epsilon` " -#~ "qui en résulterait pour un " -#~ ":math:`\\delta` particulier, le script suivant" -#~ " peut être utilisé." 
#~ msgid "" -#~ "`How to run Flower using Docker " -#~ "`_" +#~ "The following example creates a server" +#~ " image with the official Flower base" +#~ " image py3.11-ubuntu22.04 and Flower 1.7.0:" #~ msgstr "" -#~ msgid "Enjoy building more robust and flexible ``ClientApp``s with mods!" +#~ msgid "" +#~ "The name of image is ``flwr_server`` " +#~ "and the tag ``0.1.0``. Remember that " +#~ "the build arguments as well as the" +#~ " name and tag can be adapted to" +#~ " your needs. These values serve as" +#~ " examples only." #~ msgstr "" #~ msgid "" -#~ ":py:obj:`ClientApp `\\ " -#~ "\\(client\\_fn\\[\\, mods\\]\\)" +#~ "If you want to use your own " +#~ "base image instead of the official " +#~ "Flower base image, all you need to" +#~ " do is set the ``BASE_REPOSITORY`` " +#~ "and ``BASE_IMAGE_TAG`` build arguments. The" +#~ " value of ``BASE_REPOSITORY`` must match" +#~ " the name of your image and the" +#~ " value of ``BASE_IMAGE_TAG`` must match " +#~ "the tag of your image." #~ msgstr "" -#~ msgid ":py:obj:`flwr.server.driver `\\" +#~ msgid "" +#~ "It is important to follow the " +#~ "instructions described in comments. For " +#~ "instance, in order to not break " +#~ "how our changelog system works, you " +#~ "should read the information above the" +#~ " ``Changelog entry`` section carefully. You" +#~ " can also checkout some examples and" +#~ " details in the :ref:`changelogentry` " +#~ "appendix." #~ msgstr "" -#~ msgid "Flower driver SDK." -#~ msgstr "Serveur de Flower" +#~ msgid "Open a PR (as shown above)" +#~ msgstr "Ouvre un RP (comme indiqué ci-dessus)" -#~ msgid "driver" -#~ msgstr "serveur" +#~ msgid "" +#~ "Add CI job to deploy the staging" +#~ " system when the ``main`` branch " +#~ "changes" +#~ msgstr "Add CI job to deploy the staging system when the `main` branch changes" + +#~ msgid "Changelog entry" +#~ msgstr "Changelog" #~ msgid "" -#~ ":py:obj:`start_driver `\\ " -#~ "\\(\\*\\[\\, server\\_address\\, server\\, ...\\]\\)" +#~ "When opening a new PR, inside its" +#~ " description, there should be a " +#~ "``Changelog entry`` header." #~ msgstr "" #~ msgid "" -#~ ":py:obj:`Driver `\\ " -#~ "\\(\\[driver\\_service\\_address\\, ...\\]\\)" +#~ "Above this header you should see " +#~ "the following comment that explains how" +#~ " to write your changelog entry:" #~ msgstr "" #~ msgid "" -#~ ":py:obj:`GrpcDriver `\\ " -#~ "\\(\\[driver\\_service\\_address\\, ...\\]\\)" +#~ "Inside the following 'Changelog entry' " +#~ "section, you should put the description" +#~ " of your changes that will be " +#~ "added to the changelog alongside your" +#~ " PR title." #~ msgstr "" -#~ msgid "`GrpcDriver` provides access to the gRPC Driver API/service." +#~ msgid "" +#~ "If the section is completely empty " +#~ "(without any token) or non-existent, " +#~ "the changelog will just contain the " +#~ "title of the PR for the changelog" +#~ " entry, without any description." #~ msgstr "" -#~ msgid ":py:obj:`get_nodes `\\ \\(\\)" +#~ msgid "" +#~ "If the section contains some text " +#~ "other than tokens, it will use it" +#~ " to add a description to the " +#~ "change." #~ msgstr "" #~ msgid "" -#~ ":py:obj:`pull_task_res " -#~ "`\\ \\(task\\_ids\\)" +#~ "If the section contains one of the" +#~ " following tokens it will ignore any" +#~ " other text and put the PR " +#~ "under the corresponding section of the" +#~ " changelog:" #~ msgstr "" -#~ msgid "Get task results." +#~ msgid " is for classifying a PR as a general improvement." 
+#~ msgstr "" + +#~ msgid " is to not add the PR to the changelog" +#~ msgstr "" + +#~ msgid " is to add a general baselines change to the PR" +#~ msgstr "" + +#~ msgid " is to add a general examples change to the PR" +#~ msgstr "" + +#~ msgid " is to add a general sdk change to the PR" +#~ msgstr "" + +#~ msgid " is to add a general simulations change to the PR" +#~ msgstr "" + +#~ msgid "Note that only one token should be used." #~ msgstr "" #~ msgid "" -#~ ":py:obj:`push_task_ins " -#~ "`\\ " -#~ "\\(task\\_ins\\_list\\)" +#~ "Its content must have a specific " +#~ "format. We will break down what " +#~ "each possibility does:" #~ msgstr "" -#~ msgid "Schedule tasks." +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains nothing or doesn't exist, " +#~ "the following text will be added " +#~ "to the changelog::" #~ msgstr "" -#~ msgid "GrpcDriver" +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains a description (and no " +#~ "token), the following text will be " +#~ "added to the changelog::" #~ msgstr "" -#~ msgid ":py:obj:`connect `\\ \\(\\)" +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, nothing will change" +#~ " in the changelog." #~ msgstr "" -#~ msgid "Connect to the Driver API." +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following text" +#~ " will be added to the changelog::" #~ msgstr "" #~ msgid "" -#~ ":py:obj:`create_run " -#~ "`\\ \\(req\\)" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following " +#~ "text will be added to the " +#~ "changelog::" #~ msgstr "" -#~ msgid "Request for run ID." -#~ msgstr "Demande pour une nouvelle Flower Baseline" +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following " +#~ "text will be added to the " +#~ "changelog::" +#~ msgstr "" #~ msgid "" -#~ ":py:obj:`disconnect " -#~ "`\\ \\(\\)" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following text " +#~ "will be added to the changelog::" #~ msgstr "" -#~ msgid "Disconnect from the Driver API." +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following " +#~ "text will be added to the " +#~ "changelog::" #~ msgstr "" #~ msgid "" -#~ ":py:obj:`get_nodes `\\" -#~ " \\(req\\)" +#~ "Note that only one token must be" +#~ " provided, otherwise, only the first " +#~ "action (in the order listed above), " +#~ "will be performed." #~ msgstr "" -#~ msgid "Get client IDs." -#~ msgstr "Moteur client Edge" +#~ msgid "Example: MXNet - Run MXNet Federated" +#~ msgstr "Exemple : MXNet - Exécuter MXNet Federated" #~ msgid "" -#~ ":py:obj:`pull_task_res " -#~ "`\\ \\(req\\)" +#~ "This tutorial will show you how to" +#~ " use Flower to build a federated " +#~ "version of an existing MXNet workload." +#~ " We are using MXNet to train a" +#~ " Sequential model on the MNIST " +#~ "dataset. We will structure the example" +#~ " similar to our `PyTorch - From " +#~ "Centralized To Federated " +#~ "`_ walkthrough. " +#~ "MXNet and PyTorch are very similar " +#~ "and a very good comparison between " +#~ "MXNet and PyTorch is given `here " +#~ "`_. First, " +#~ "we build a centralized training approach" +#~ " based on the `Handwritten Digit " +#~ "Recognition " +#~ "`_" +#~ " tutorial. Then, we build upon the" +#~ " centralized training code to run the" +#~ " training in a federated fashion." 
+#~ msgstr "" +#~ "Ce tutoriel te montrera comment utiliser" +#~ " Flower pour construire une version " +#~ "fédérée d'une charge de travail MXNet" +#~ " existante. Nous utilisons MXNet pour " +#~ "former un modèle séquentiel sur " +#~ "l'ensemble de données MNIST. Nous " +#~ "structurerons l'exemple de la même " +#~ "manière que notre présentation `PyTorch " +#~ "- De la centralisation à la " +#~ "fédération `_. " +#~ "MXNet et PyTorch sont très similaires" +#~ " et une très bonne comparaison entre" +#~ " MXNet et PyTorch est donnée ici " +#~ "`_. Tout " +#~ "d'abord, nous construisons une approche " +#~ "de formation centralisée basée sur le" +#~ " tutoriel `Handandwritten Digit Recognition " +#~ "`_." +#~ " Ensuite, nous nous basons sur le " +#~ "code de formation centralisé pour " +#~ "exécuter la formation de manière " +#~ "fédérée." + +#~ msgid "" +#~ "Before we start setting up our " +#~ "MXNet example, we install the " +#~ ":code:`mxnet` and :code:`flwr` packages:" +#~ msgstr "" +#~ "Avant de commencer à configurer notre" +#~ " exemple MXNet, nous installons les " +#~ "paquets :code:`mxnet` et :code:`flwr` :" + +#~ msgid "MNIST Training with MXNet" +#~ msgstr "Formation MNIST avec MXNet" + +#~ msgid "" +#~ "We begin with a brief description " +#~ "of the centralized training code based" +#~ " on a :code:`Sequential` model. If " +#~ "you want a more in-depth " +#~ "explanation of what's going on then " +#~ "have a look at the official `MXNet" +#~ " tutorial " +#~ "`_." +#~ msgstr "" +#~ "Nous commençons par une brève " +#~ "description du code d'entraînement centralisé" +#~ " basé sur un modèle :code:`Sequential`. " +#~ "Si tu veux une explication plus " +#~ "approfondie de ce qui se passe, " +#~ "jette un coup d'œil au tutoriel " +#~ "officiel `MXNet " +#~ "`_." + +#~ msgid "" +#~ "Let's create a new file " +#~ "called:code:`mxnet_mnist.py` with all the " +#~ "components required for a traditional " +#~ "(centralized) MNIST training. First, the " +#~ "MXNet package :code:`mxnet` needs to be" +#~ " imported. You can see that we " +#~ "do not yet import the :code:`flwr` " +#~ "package for federated learning. This " +#~ "will be done later." +#~ msgstr "" +#~ "Créons un nouveau fichier appelé " +#~ ":code:`mxnet_mnist.py` avec tous les " +#~ "composants requis pour un apprentissage " +#~ "MNIST traditionnel (centralisé). Tout d'abord," +#~ " le package MXNet :code:`mxnet` doit " +#~ "être importé. Tu peux voir que " +#~ "nous n'avons pas encore importé le " +#~ "package :code:`flwr` pour l'apprentissage " +#~ "fédéré. Cela sera fait plus tard." + +#~ msgid "" +#~ "The :code:`load_data()` function loads the " +#~ "MNIST training and test sets." +#~ msgstr "" +#~ "La fonction :code:`load_data()` charge les " +#~ "ensembles d'entraînement et de test " +#~ "MNIST." + +#~ msgid "" +#~ "As already mentioned, we will use " +#~ "the MNIST dataset for this machine " +#~ "learning workload. The model architecture " +#~ "(a very simple :code:`Sequential` model) " +#~ "is defined in :code:`model()`." +#~ msgstr "" +#~ "Comme nous l'avons déjà mentionné, nous" +#~ " utiliserons l'ensemble de données MNIST" +#~ " pour cette charge de travail " +#~ "d'apprentissage automatique. L'architecture du " +#~ "modèle (un modèle :code:`Séquentiel` très " +#~ "simple) est définie dans :code:`model()`." + +#~ msgid "" +#~ "We now need to define the training" +#~ " (function :code:`train()`) which loops " +#~ "over the training set and measures " +#~ "the loss for each batch of " +#~ "training examples." 
+#~ msgstr "" +#~ "Nous devons maintenant définir la " +#~ "formation (fonction :code:`train()`) qui passe" +#~ " en boucle sur l'ensemble de la " +#~ "formation et mesure la perte pour " +#~ "chaque lot d'exemples de formation." + +#~ msgid "" +#~ "The evaluation of the model is " +#~ "defined in function :code:`test()`. The " +#~ "function loops over all test samples " +#~ "and measures the loss and accuracy " +#~ "of the model based on the test " +#~ "dataset." #~ msgstr "" +#~ "L'évaluation du modèle est définie dans" +#~ " la fonction :code:`test()`. Cette fonction" +#~ " passe en boucle sur tous les " +#~ "échantillons de test et mesure la " +#~ "perte et la précision du modèle en" +#~ " fonction de l'ensemble des données " +#~ "de test." + +#~ msgid "" +#~ "Having defined the data loading, model" +#~ " architecture, training, and evaluation we" +#~ " can put everything together and " +#~ "train our model on MNIST. Note " +#~ "that the GPU/CPU device for the " +#~ "training and testing is defined within" +#~ " the :code:`ctx` (context)." +#~ msgstr "" +#~ "Après avoir défini le chargement des " +#~ "données, l'architecture du modèle, " +#~ "l'entraînement et l'évaluation, nous pouvons" +#~ " tout assembler et entraîner notre " +#~ "modèle sur MNIST. Note que le " +#~ "dispositif GPU/CPU pour l'entraînement et " +#~ "le test est défini dans le " +#~ ":code:`ctx` (contexte)." + +#~ msgid "You can now run your (centralized) MXNet machine learning workload:" +#~ msgstr "" +#~ "Tu peux maintenant exécuter ta charge" +#~ " de travail (centralisée) d'apprentissage " +#~ "automatique MXNet :" + +#~ msgid "" +#~ "So far this should all look fairly" +#~ " familiar if you've used MXNet (or" +#~ " even PyTorch) before. Let's take the" +#~ " next step and use what we've " +#~ "built to create a simple federated " +#~ "learning system consisting of one server" +#~ " and two clients." +#~ msgstr "" +#~ "Jusqu'à présent, tout cela devrait te" +#~ " sembler assez familier si tu as " +#~ "déjà utilisé MXNet (ou même PyTorch)." +#~ " Passons à l'étape suivante et " +#~ "utilisons ce que nous avons construit" +#~ " pour créer un simple système " +#~ "d'apprentissage fédéré composé d'un serveur" +#~ " et de deux clients." + +#~ msgid "MXNet meets Flower" +#~ msgstr "MXNet rencontre Flower" + +#~ msgid "" +#~ "So far, it was not easily possible" +#~ " to use MXNet workloads for federated" +#~ " learning because federated learning is " +#~ "not supported in MXNet. Since Flower " +#~ "is fully agnostic towards the underlying" +#~ " machine learning framework, it can " +#~ "be used to federated arbitrary machine" +#~ " learning workloads. This section will " +#~ "show you how Flower can be used" +#~ " to federate our centralized MXNet " +#~ "workload." +#~ msgstr "" +#~ "Jusqu'à présent, il n'était pas facile" +#~ " d'utiliser les charges de travail " +#~ "MXNet pour l'apprentissage fédéré car " +#~ "l'apprentissage fédéré n'est pas pris en" +#~ " charge dans MXNet. Comme Flower est" +#~ " totalement agnostique vis-à-vis du cadre" +#~ " d'apprentissage automatique sous-jacent, " +#~ "il peut être utilisé pour fédérer " +#~ "des charges de travail d'apprentissage " +#~ "automatique arbitraires. Cette section te " +#~ "montrera comment Flower peut être " +#~ "utilisé pour fédérer notre charge de " +#~ "travail MXNet centralisée." + +#~ msgid "" +#~ "The concept to federate an existing " +#~ "workload is always the same and " +#~ "easy to understand. 
We have to " +#~ "start a *server* and then use the" +#~ " code in :code:`mxnet_mnist.py` for the " +#~ "*clients* that are connected to the " +#~ "*server*. The *server* sends model " +#~ "parameters to the clients. The *clients*" +#~ " run the training and update the " +#~ "parameters. The updated parameters are " +#~ "sent back to the *server* which " +#~ "averages all received parameter updates. " +#~ "This describes one round of the " +#~ "federated learning process and we repeat" +#~ " this for multiple rounds." +#~ msgstr "" +#~ "Le concept pour fédérer une charge " +#~ "de travail existante est toujours le " +#~ "même et facile à comprendre. Nous " +#~ "devons démarrer un *serveur* et ensuite" +#~ " utiliser le code dans " +#~ ":code:`mxnet_mnist.py` pour les *clients* qui" +#~ " sont connectés au *serveur*. Le " +#~ "*serveur* envoie les paramètres du " +#~ "modèle aux clients. Les *clients* " +#~ "exécutent la formation et mettent à " +#~ "jour les paramètres. Les paramètres mis" +#~ " à jour sont renvoyés au *serveur*" +#~ " qui fait la moyenne de toutes " +#~ "les mises à jour de paramètres " +#~ "reçues. Ceci décrit un tour du " +#~ "processus d'apprentissage fédéré et nous " +#~ "répétons cette opération pour plusieurs " +#~ "tours." + +#~ msgid "" +#~ "Finally, we will define our *client* " +#~ "logic in :code:`client.py` and build " +#~ "upon the previously defined MXNet " +#~ "training in :code:`mxnet_mnist.py`. Our " +#~ "*client* needs to import :code:`flwr`, " +#~ "but also :code:`mxnet` to update the " +#~ "parameters on our MXNet model:" +#~ msgstr "" +#~ "Enfin, nous allons définir la logique" +#~ " de notre *client* dans :code:`client.py`" +#~ " et nous appuyer sur l'entraînement " +#~ "MXNet défini précédemment dans " +#~ ":code:`mxnet_mnist.py`. Notre *client* doit " +#~ "importer :code:`flwr`, mais aussi " +#~ ":code:`mxnet` pour mettre à jour les " +#~ "paramètres de notre modèle MXNet :" + +#~ msgid "" +#~ "Implementing a Flower *client* basically " +#~ "means implementing a subclass of either" +#~ " :code:`flwr.client.Client` or " +#~ ":code:`flwr.client.NumPyClient`. Our implementation " +#~ "will be based on " +#~ ":code:`flwr.client.NumPyClient` and we'll call " +#~ "it :code:`MNISTClient`. :code:`NumPyClient` is " +#~ "slightly easier to implement than " +#~ ":code:`Client` if you use a framework" +#~ " with good NumPy interoperability (like " +#~ "PyTorch or MXNet) because it avoids " +#~ "some of the boilerplate that would " +#~ "otherwise be necessary. :code:`MNISTClient` " +#~ "needs to implement four methods, two " +#~ "methods for getting/setting model parameters," +#~ " one method for training the model," +#~ " and one method for testing the " +#~ "model:" +#~ msgstr "" +#~ "Implementing a Flower *client* basically " +#~ "means implementing a subclass of either" +#~ " :code:`flwr.client.Client` or " +#~ ":code:`flwr.client.NumPyClient`. Our implementation " +#~ "will be based on " +#~ ":code:`flwr.client.NumPyClient` and we'll call " +#~ "it :code:`MNISTClient`. :code:`NumPyClient` is " +#~ "slightly easier to implement than " +#~ ":code:`Client` if you use a framework" +#~ " with good NumPy interoperability (like " +#~ "PyTorch or MXNet) because it avoids " +#~ "some of the boilerplate that would " +#~ "otherwise be necessary. 
:code:`MNISTClient` " +#~ "needs to implement four methods, two " +#~ "methods for getting/setting model parameters," +#~ " one method for training the model," +#~ " and one method for testing the " +#~ "model:" + +#~ msgid "transform MXNet :code:`NDArray`'s to NumPy :code:`ndarray`'s" +#~ msgstr "transforme les :code:`NDArray` du MXNet en :code:`ndarray` de NumPy" + +#~ msgid "" +#~ "The challenging part is to transform " +#~ "the MXNet parameters from :code:`NDArray` " +#~ "to :code:`NumPy Arrays` to make it " +#~ "readable for Flower." +#~ msgstr "" +#~ "La partie la plus difficile est de" +#~ " transformer les paramètres MXNet de " +#~ ":code:`NDArray` en :code:`NumPy Arrays` pour" +#~ " les rendre lisibles pour Flower." + +#~ msgid "" +#~ "The two :code:`NumPyClient` methods " +#~ ":code:`fit` and :code:`evaluate` make use " +#~ "of the functions :code:`train()` and " +#~ ":code:`test()` previously defined in " +#~ ":code:`mxnet_mnist.py`. So what we really " +#~ "do here is we tell Flower through" +#~ " our :code:`NumPyClient` subclass which of" +#~ " our already defined functions to " +#~ "call for training and evaluation. We " +#~ "included type annotations to give you" +#~ " a better understanding of the data" +#~ " types that get passed around." +#~ msgstr "" +#~ "Les deux méthodes :code:`NumPyClient` " +#~ ":code:`fit` et :code:`evaluate` utilisent les" +#~ " fonctions :code:`train()` et :code:`test()` " +#~ "définies précédemment dans :code:`mxnet_mnist.py`." +#~ " Ce que nous faisons vraiment ici," +#~ " c'est que nous indiquons à Flower," +#~ " par le biais de notre sous-" +#~ "classe :code:`NumPyClient`, laquelle de nos" +#~ " fonctions déjà définies doit être " +#~ "appelée pour l'entraînement et l'évaluation." +#~ " Nous avons inclus des annotations de" +#~ " type pour te donner une meilleure" +#~ " compréhension des types de données " +#~ "qui sont transmis." + +#~ msgid "" +#~ "Having defined data loading, model " +#~ "architecture, training, and evaluation we " +#~ "can put everything together and train" +#~ " our :code:`Sequential` model on MNIST." +#~ msgstr "" +#~ "Après avoir défini le chargement des " +#~ "données, l'architecture du modèle, la " +#~ "formation et l'évaluation, nous pouvons " +#~ "tout rassembler et former notre modèle" +#~ " :code:`Sequential` sur MNIST." + +#~ msgid "" +#~ "in each window (make sure that the" +#~ " server is still running before you" +#~ " do so) and see your MXNet " +#~ "project run federated learning across " +#~ "two clients. Congratulations!" +#~ msgstr "" +#~ "dans chaque fenêtre (assure-toi que " +#~ "le serveur est toujours en cours " +#~ "d'exécution avant de le faire) et " +#~ "tu verras ton projet MXNet exécuter " +#~ "l'apprentissage fédéré sur deux clients. " +#~ "Félicitations !" + +#~ msgid "" +#~ "The full source code for this " +#~ "example: `MXNet: From Centralized To " +#~ "Federated (Code) " +#~ "`_. Our " +#~ "example is of course somewhat over-" +#~ "simplified because both clients load the" +#~ " exact same dataset, which isn't " +#~ "realistic. You're now prepared to " +#~ "explore this topic further. How about" +#~ " using a CNN or using a " +#~ "different dataset? How about adding more" +#~ " clients?" +#~ msgstr "" +#~ "Le code source complet de cet " +#~ "exemple : `MXNet : From Centralized " +#~ "To Federated (Code) " +#~ "`_. 
Notre " +#~ "exemple est bien sûr un peu trop" +#~ " simplifié parce que les deux clients" +#~ " chargent exactement le même ensemble " +#~ "de données, ce qui n'est pas " +#~ "réaliste. Tu es maintenant prêt à " +#~ "explorer ce sujet plus en profondeur." +#~ " Pourquoi ne pas utiliser un CNN " +#~ "ou un ensemble de données différent " +#~ "? Pourquoi ne pas ajouter d'autres " +#~ "clients ?" + +#~ msgid "with the following command sequence:" +#~ msgstr "avec la séquence de commandes suivante :" + +#~ msgid "" +#~ "In case you are a researcher you" +#~ " might be just fine using the " +#~ "self-signed certificates generated using " +#~ "the scripts which are part of this" +#~ " guide." +#~ msgstr "" +#~ "Si tu es un chercheur, tu peux " +#~ "très bien utiliser les certificats " +#~ "auto-signés générés à l'aide des " +#~ "scripts qui font partie de ce " +#~ "guide." + +#~ msgid "" +#~ "We are now going to show how " +#~ "to write a sever which uses the" +#~ " previously generated scripts." +#~ msgstr "" +#~ "Nous allons maintenant montrer comment " +#~ "écrire un serveur qui utilise les " +#~ "scripts générés précédemment." + +#~ msgid "" +#~ "When providing certificates, the server " +#~ "expects a tuple of three certificates." +#~ " :code:`Path` can be used to easily" +#~ " read the contents of those files " +#~ "into byte strings, which is the " +#~ "data type :code:`start_server` expects." +#~ msgstr "" +#~ "Lorsqu'il fournit des certificats, le " +#~ "serveur attend un tuple de trois " +#~ "certificats. :code:`Path` peut être utilisé" +#~ " pour lire facilement le contenu de" +#~ " ces fichiers en chaînes d'octets, ce" +#~ " qui est le type de données " +#~ "attendu par :code:`start_server`." + +#~ msgid "" +#~ "The simplest way to get started " +#~ "with Flower is by using the " +#~ "pre-made Docker images, which you can" +#~ " find on `Docker Hub " +#~ "`_." +#~ msgstr "" + +#~ msgid "Flower server" +#~ msgstr "Serveur de Flower" #~ msgid "" -#~ ":py:obj:`push_task_ins " -#~ "`\\ \\(req\\)" +#~ "The command will pull the Docker " +#~ "image with the tag " +#~ "``1.7.0-py3.11-ubuntu22.04`` from Docker Hub. " +#~ "The tag contains the information which" +#~ " Flower, Python and Ubuntu is used." +#~ " In this case, it uses Flower " +#~ "1.7.0, Python 3.11 and Ubuntu 22.04. " +#~ "The ``--rm`` flag tells Docker to " +#~ "remove the container after it exits." #~ msgstr "" #~ msgid "" -#~ "Optionally specify the type of actor " -#~ "to use. The actor object, which " -#~ "persists throughout the simulation, will " -#~ "be the process in charge of " -#~ "running the clients' jobs (i.e. their" -#~ " `fit()` method)." +#~ "By default, the Flower server keeps " +#~ "state in-memory. When using the " +#~ "Docker flag ``--rm``, the state is " +#~ "not persisted between container starts. " +#~ "We will show below how to save " +#~ "the state in a file on your " +#~ "host system." #~ msgstr "" #~ msgid "" -#~ "Much effort went into a completely " -#~ "restructured Flower docs experience. The " -#~ "documentation on [flower.ai/docs](flower.ai/docs) is" -#~ " now divided into Flower Framework, " -#~ "Flower Baselines, Flower Android SDK, " -#~ "Flower iOS SDK, and code example " -#~ "projects." +#~ "The ``-p :`` flag tells " +#~ "Docker to map the ports " +#~ "``9091``/``9092`` of the host to " +#~ "``9091``/``9092`` of the container, allowing" +#~ " you to access the Driver API " +#~ "on ``http://localhost:9091`` and the Fleet " +#~ "API on ``http://localhost:9092``. 
Lastly, any" +#~ " flag that comes after the tag " +#~ "is passed to the Flower server. " +#~ "Here, we are passing the flag " +#~ "``--insecure``." #~ msgstr "" #~ msgid "" -#~ "Flower usage examples used to be " -#~ "bundled with Flower in a package " -#~ "called ``flwr_example``. We are migrating " -#~ "those examples to standalone projects to" -#~ " make them easier to use. All " -#~ "new examples are based in the " -#~ "directory `examples " -#~ "`_." +#~ "The ``--insecure`` flag enables insecure " +#~ "communication (using HTTP, not HTTPS) " +#~ "and should only be used for " +#~ "testing purposes. We strongly recommend " +#~ "enabling `SSL `_ when " +#~ "deploying to a production environment." #~ msgstr "" -#~ "Les exemples d'utilisation de Flower " -#~ "étaient auparavant regroupés avec Flower " -#~ "dans un paquet appelé ``flwr_example``. " -#~ "Nous migrons ces exemples vers des " -#~ "projets autonomes pour les rendre plus" -#~ " faciles à utiliser. Tous les " -#~ "nouveaux exemples sont basés dans le " -#~ "répertoire ``examples " -#~ "`_." -#~ msgid "Quickstart TensorFlow/Keras" -#~ msgstr "Démarrage rapide de TensorFlow/Keras" +#~ msgid "" +#~ "You can use ``--help`` to view all" +#~ " available flags that the server " +#~ "supports:" +#~ msgstr "" + +#~ msgid "" +#~ "If you want to persist the state" +#~ " of the server on your host " +#~ "system, all you need to do is " +#~ "specify a path where you want to" +#~ " save the file on your host " +#~ "system and a name for the database" +#~ " file. In the example below, we " +#~ "tell Docker via the flag ``-v`` to" +#~ " mount the user's home directory " +#~ "(``~/`` on your host) into the " +#~ "``/app/`` directory of the container. " +#~ "Furthermore, we use the flag " +#~ "``--database`` to specify the name of" +#~ " the database file." +#~ msgstr "" + +#~ msgid "" +#~ "As soon as the server starts, the" +#~ " file ``state.db`` is created in the" +#~ " user's home directory on your host" +#~ " system. If the file already exists," +#~ " the server tries to restore the " +#~ "state from the file. To start the" +#~ " server with an empty database, " +#~ "simply remove the ``state.db`` file." +#~ msgstr "" -#~ msgid "Legacy Examples (`flwr_example`)" -#~ msgstr "Exemples hérités (`flwr_example`)" +#~ msgid "" +#~ "To enable SSL, you will need a " +#~ "CA certificate, a server certificate and" +#~ " a server private key." +#~ msgstr "" #~ msgid "" -#~ "The useage examples in `flwr_example` " -#~ "are deprecated and will be removed " -#~ "in the future. New examples are " -#~ "provided as standalone projects in " -#~ "`examples `_." +#~ "For testing purposes, you can generate" +#~ " your own self-signed certificates. " +#~ "The `Enable SSL connections " +#~ "`_ page contains " +#~ "a section that will guide you " +#~ "through the process." #~ msgstr "" -#~ "Les exemples d'utilisation dans `flwr_example`" -#~ " sont obsolètes et seront supprimés à" -#~ " l'avenir. De nouveaux exemples sont " -#~ "fournis en tant que projets autonomes" -#~ " dans `examples " -#~ "`_." -#~ msgid "Extra Dependencies" -#~ msgstr "Dépendances supplémentaires" +#~ msgid "" +#~ "Assuming all files we need are in" +#~ " the local ``certificates`` directory, we" +#~ " can use the flag ``-v`` to " +#~ "mount the local directory into the " +#~ "``/app/`` directory of the container. " +#~ "This allows the server to access " +#~ "the files within the container. 
Finally," +#~ " we pass the names of the " +#~ "certificates to the server with the " +#~ "``--certificates`` flag." +#~ msgstr "" + +#~ msgid "Using a different Flower or Python version" +#~ msgstr "" #~ msgid "" -#~ "The core Flower framework keeps a " -#~ "minimal set of dependencies. The " -#~ "examples demonstrate Flower in the " -#~ "context of different machine learning " -#~ "frameworks, so additional dependencies need" -#~ " to be installed before an example" -#~ " can be run." +#~ "If you want to use a different " +#~ "version of Flower or Python, you " +#~ "can do so by changing the tag. " +#~ "All versions we provide are available" +#~ " on `Docker Hub " +#~ "`_." #~ msgstr "" -#~ "Le noyau du framework Flower conserve" -#~ " un ensemble minimal de dépendances. " -#~ "Les exemples démontrent Flower dans le" -#~ " contexte de différents frameworks " -#~ "d'apprentissage automatique, de sorte que " -#~ "des dépendances supplémentaires doivent être" -#~ " installées avant qu'un exemple puisse " -#~ "être exécuté." -#~ msgid "For PyTorch examples::" -#~ msgstr "Pour les exemples de PyTorch: :" +#~ msgid "" +#~ "The following command returns the " +#~ "current image hash referenced by the " +#~ "``server:1.7.0-py3.11-ubuntu22.04`` tag:" +#~ msgstr "" -#~ msgid "For TensorFlow examples::" -#~ msgstr "Pour les exemples de TensorFlow : :" +#~ msgid "Next, we can pin the hash when running a new server container:" +#~ msgstr "" -#~ msgid "For both PyTorch and TensorFlow examples::" -#~ msgstr "Pour les exemples PyTorch et TensorFlow: :" +#~ msgid "flower-driver-api" +#~ msgstr "flower-driver-api" + +#~ msgid "flower-fleet-api" +#~ msgstr "flower-fleet-api" #~ msgid "" -#~ "Please consult :code:`pyproject.toml` for a" -#~ " full list of possible extras " -#~ "(section :code:`[tool.poetry.extras]`)." +#~ "Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +#~ "[:py:class:`str`, :py:obj:`~typing.Union`\\ " +#~ "[:py:class:`int`, :py:class:`float`, :py:class:`str`, " +#~ ":py:class:`bytes`, :py:class:`bool`, " +#~ ":py:class:`~typing.List`\\ [:py:class:`int`], " +#~ ":py:class:`~typing.List`\\ [:py:class:`float`], " +#~ ":py:class:`~typing.List`\\ [:py:class:`str`], " +#~ ":py:class:`~typing.List`\\ [:py:class:`bytes`], " +#~ ":py:class:`~typing.List`\\ [:py:class:`bool`]]]" #~ msgstr "" -#~ "Tu peux consulter :code:`pyproject.toml` pour" -#~ " une liste complète des extras " -#~ "possibles (section :code:`[tool.poetry.extras]`)." -#~ msgid "PyTorch Examples" -#~ msgstr "Exemples de PyTorch" +#~ msgid "" +#~ ":py:obj:`create_error_reply " +#~ "`\\ \\(error\\, " +#~ "ttl\\)" +#~ msgstr "" #~ msgid "" -#~ "Our PyTorch examples are based on " -#~ "PyTorch 1.7. They should work with " -#~ "other releases as well. So far, we" -#~ " provide the following examples." +#~ ":py:obj:`create_reply `\\ " +#~ "\\(content\\, ttl\\)" #~ msgstr "" -#~ "Nos exemples PyTorch sont basés sur " -#~ "PyTorch 1.7. Ils devraient fonctionner " -#~ "avec d'autres versions également. Jusqu'à " -#~ "présent, nous fournissons les exemples " -#~ "suivants." -#~ msgid "CIFAR-10 Image Classification" -#~ msgstr "Classification d'images CIFAR-10" +#~ msgid "" +#~ "Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +#~ "[:py:class:`str`, :py:obj:`~typing.Union`\\ " +#~ "[:py:class:`int`, :py:class:`float`, " +#~ ":py:class:`~typing.List`\\ [:py:class:`int`], " +#~ ":py:class:`~typing.List`\\ [:py:class:`float`]]]" +#~ msgstr "" + +#~ msgid "Run Flower server (Driver API and Fleet API)." 
+#~ msgstr "" #~ msgid "" -#~ "`CIFAR-10 and CIFAR-100 " -#~ "`_ are " -#~ "popular RGB image datasets. The Flower" -#~ " CIFAR-10 example uses PyTorch to " -#~ "train a simple CNN classifier in a" -#~ " federated learning setup with two " -#~ "clients." +#~ ":py:obj:`start_driver `\\ " +#~ "\\(\\*\\[\\, server\\_address\\, server\\, ...\\]\\)" #~ msgstr "" -#~ "`CIFAR-10 et CIFAR-100 " -#~ "`_ sont des" -#~ " ensembles de données d'images RVB " -#~ "populaires. L'exemple Flower CIFAR-10 utilise" -#~ " PyTorch pour former un classificateur " -#~ "CNN simple dans une configuration " -#~ "d'apprentissage fédéré avec deux clients." -#~ msgid "First, start a Flower server:" +#~ msgid "Start a Flower Driver API server." #~ msgstr "Tout d'abord, démarre un serveur Flower :" -#~ msgid "$ ./src/py/flwr_example/pytorch_cifar/run-server.sh" -#~ msgstr "$ ./src/py/flwr_example/pytorch_cifar/run-server.sh" - -#~ msgid "Then, start the two clients in a new terminal window:" +#~ msgid "" +#~ ":py:obj:`Driver `\\ " +#~ "\\(\\[driver\\_service\\_address\\, ...\\]\\)" #~ msgstr "" -#~ "Ensuite, démarre les deux clients dans" -#~ " une nouvelle fenêtre de terminal :" +#~ "Flower 1.0 : ``start_server(..., " +#~ "config=flwr.server.ServerConfig(num_rounds=3, " +#~ "round_timeout=600.0), ...)``" -#~ msgid "$ ./src/py/flwr_example/pytorch_cifar/run-clients.sh" -#~ msgstr "$ ./src/py/flwr_example/pytorch_cifar/run-clients.sh" +#~ msgid "`Driver` class provides an interface to the Driver API." +#~ msgstr "" -#~ msgid "For more details, see :code:`src/py/flwr_example/pytorch_cifar`." -#~ msgstr "Pour plus de détails, voir :code:`src/py/flwr_example/pytorch_cifar`." +#~ msgid "" +#~ "The IPv4 or IPv6 address of the" +#~ " Driver API server. Defaults to " +#~ "`\"[::]:9091\"`." +#~ msgstr "" -#~ msgid "ImageNet-2012 Image Classification" -#~ msgstr "ImageNet-2012 Classification des images" +#~ msgid "Disconnect from the SuperLink if connected." +#~ msgstr "" #~ msgid "" -#~ "`ImageNet-2012 `_ is " -#~ "one of the major computer vision " -#~ "datasets. The Flower ImageNet example " -#~ "uses PyTorch to train a ResNet-18 " -#~ "classifier in a federated learning setup" -#~ " with ten clients." +#~ ":py:obj:`create_message `\\" +#~ " \\(content\\, message\\_type\\, ...\\)" #~ msgstr "" -#~ "`ImageNet-2012 `_ est " -#~ "l'un des principaux ensembles de données" -#~ " de vision par ordinateur. L'exemple " -#~ "Flower ImageNet utilise PyTorch pour " -#~ "entraîner un classificateur ResNet-18 dans " -#~ "une configuration d'apprentissage fédéré avec" -#~ " dix clients." -#~ msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-server.sh" -#~ msgstr "$ ./src/py/flwr_example/pytorch_imagenet/run-server.sh" +#~ msgid "" +#~ "Time-to-live for the round trip" +#~ " of this message, i.e., the time " +#~ "from sending this message to receiving" +#~ " a reply. It specifies the duration" +#~ " for which the message and its " +#~ "potential reply are considered valid." +#~ msgstr "" -#~ msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-clients.sh" -#~ msgstr "$ ./src/py/flwr_example/pytorch_imagenet/run-clients.sh" +#~ msgid "start\\_driver" +#~ msgstr "start_client" -#~ msgid "For more details, see :code:`src/py/flwr_example/pytorch_imagenet`." +#~ msgid "" +#~ "The IPv4 or IPv6 address of the" +#~ " Driver API server. Defaults to " +#~ "`\"[::]:8080\"`." #~ msgstr "" -#~ "Pour plus de détails, voir " -#~ ":code:`src/py/flwr_example/pytorch_imagenet`." 
-#~ msgid "TensorFlow Examples" -#~ msgstr "Exemples de TensorFlow" +#~ msgid "" +#~ "A server implementation, either " +#~ "`flwr.server.Server` or a subclass thereof." +#~ " If no instance is provided, then " +#~ "`start_driver` will create one." +#~ msgstr "" #~ msgid "" -#~ "Our TensorFlow examples are based on " -#~ "TensorFlow 2.0 or newer. So far, " -#~ "we provide the following examples." +#~ "An implementation of the class " +#~ "`flwr.server.ClientManager`. If no implementation" +#~ " is provided, then `start_driver` will " +#~ "use `flwr.server.SimpleClientManager`." #~ msgstr "" -#~ "Nos exemples TensorFlow sont basés sur" -#~ " TensorFlow 2.0 ou une version plus" -#~ " récente. Jusqu'à présent, nous te " -#~ "proposons les exemples suivants." -#~ msgid "Fashion-MNIST Image Classification" -#~ msgstr "Classification d'images Fashion-MNIST" +#~ msgid "The Driver object to use." +#~ msgstr "" + +#~ msgid "Starting a driver that connects to an insecure server:" +#~ msgstr "" + +#~ msgid "Starting a driver that connects to an SSL-enabled server:" +#~ msgstr "" #~ msgid "" -#~ "`Fashion-MNIST `_ is often used as " -#~ "the \"Hello, world!\" of machine " -#~ "learning. We follow this tradition and" -#~ " provide an example which samples " -#~ "random local datasets from Fashion-MNIST" -#~ " and trains a simple image " -#~ "classification model over those partitions." +#~ ":py:obj:`run_simulation_from_cli " +#~ "`\\ \\(\\)" #~ msgstr "" -#~ "nous suivons cette tradition et " -#~ "fournissons un exemple qui échantillonne " -#~ "des ensembles de données locales " -#~ "aléatoires de Fashion-MNIST et entraîne" -#~ " un modèle simple de classification " -#~ "d'images sur ces partitions." -#~ msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-server.sh" -#~ msgstr "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-server.sh" +#~ msgid "Run Simulation Engine from the CLI." +#~ msgstr "" -#~ msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-clients.sh" -#~ msgstr "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-clients.sh" +#~ msgid "run\\_simulation\\_from\\_cli" +#~ msgstr "Simulation de moniteur" #~ msgid "" -#~ "For more details, see " -#~ ":code:`src/py/flwr_example/tensorflow_fashion_mnist`." +#~ "Check out this Federated Learning " +#~ "quickstart tutorial for using Flower " +#~ "with MXNet to train a Sequential " +#~ "model on MNIST." #~ msgstr "" -#~ "Pour plus de détails, voir " -#~ ":code:`src/py/flwr_example/tensorflow_fashion_mnist`." + +#~ msgid "Quickstart MXNet" +#~ msgstr "Démarrage rapide de MXNet" #~ msgid "" #~ "MXNet is no longer maintained and " #~ "has been moved into `Attic " #~ "`_. As a " #~ "result, we would encourage you to " -#~ "use other ML frameworks alongise Flower," -#~ " for example, PyTorch. This tutorial " -#~ "might be removed in future versions " -#~ "of Flower." +#~ "use other ML frameworks alongside " +#~ "Flower, for example, PyTorch. This " +#~ "tutorial might be removed in future " +#~ "versions of Flower." #~ msgstr "" #~ msgid "" -#~ "Now that you have known how " -#~ "federated XGBoost work with Flower, it's" -#~ " time to run some more comprehensive" -#~ " experiments by customising the " -#~ "experimental settings. In the xgboost-" -#~ "comprehensive example (`full code " -#~ "`_), we provide more options " -#~ "to define various experimental setups, " -#~ "including aggregation strategies, data " -#~ "partitioning and centralised/distributed evaluation." 
-#~ " We also support `Flower simulation " -#~ "`_ making it easy to " -#~ "simulate large client cohorts in a " -#~ "resource-aware manner. Let's take a " -#~ "look!" +#~ "In this tutorial, we will learn " +#~ "how to train a :code:`Sequential` model" +#~ " on MNIST using Flower and MXNet." #~ msgstr "" +#~ "Dans ce tutoriel, nous allons apprendre" +#~ " à former un modèle :code:`Sequential` " +#~ "sur MNIST à l'aide de Flower et" +#~ " de MXNet." -#~ msgid "|31e4b1afa87c4b968327bbeafbf184d4|" +#~ msgid "Since we want to use MXNet, let's go ahead and install it:" +#~ msgstr "Puisque nous voulons utiliser MXNet, allons-y et installons-le :" + +#~ msgid "" +#~ "Now that we have all our " +#~ "dependencies installed, let's run a " +#~ "simple distributed training with two " +#~ "clients and one server. Our training " +#~ "procedure and network architecture are " +#~ "based on MXNet´s `Hand-written Digit " +#~ "Recognition tutorial " +#~ "`_." #~ msgstr "" +#~ "Maintenant que toutes nos dépendances " +#~ "sont installées, lançons une formation " +#~ "distribuée simple avec deux clients et" +#~ " un serveur. Notre procédure de " +#~ "formation et l'architecture du réseau " +#~ "sont basées sur le tutoriel de " +#~ "reconnaissance de chiffres écrits à la" +#~ " main du MXNet " +#~ "`_." + +#~ msgid "" +#~ "In a file called :code:`client.py`, " +#~ "import Flower and MXNet related " +#~ "packages:" +#~ msgstr "" +#~ "Dans un fichier appelé :code:`client.py`, " +#~ "importe Flower et les paquets liés " +#~ "au MXNet :" + +#~ msgid "In addition, define the device allocation in MXNet with:" +#~ msgstr "En outre, définis l'attribution de l'appareil dans MXNet avec :" + +#~ msgid "" +#~ "We use MXNet to load MNIST, a " +#~ "popular image classification dataset of " +#~ "handwritten digits for machine learning. " +#~ "The MXNet utility :code:`mx.test_utils.get_mnist()`" +#~ " downloads the training and test " +#~ "data." +#~ msgstr "" +#~ "Nous utilisons MXNet pour charger MNIST," +#~ " un ensemble de données de " +#~ "classification d'images populaire de chiffres" +#~ " manuscrits pour l'apprentissage automatique. " +#~ "L'utilitaire MXNet :code:`mx.test_utils.get_mnist()` " +#~ "télécharge les données d'entraînement et " +#~ "de test." + +#~ msgid "" +#~ "Define the training and loss with " +#~ "MXNet. We train the model by " +#~ "looping over the dataset, measure the" +#~ " corresponding loss, and optimize it." +#~ msgstr "" +#~ "Définis l'entraînement et la perte avec" +#~ " MXNet. Nous entraînons le modèle en" +#~ " parcourant en boucle l'ensemble des " +#~ "données, nous mesurons la perte " +#~ "correspondante et nous l'optimisons." + +#~ msgid "" +#~ "Next, we define the validation of " +#~ "our machine learning model. We loop " +#~ "over the test set and measure both" +#~ " loss and accuracy on the test " +#~ "set." +#~ msgstr "" +#~ "Ensuite, nous définissons la validation " +#~ "de notre modèle d'apprentissage automatique." +#~ " Nous effectuons une boucle sur " +#~ "l'ensemble de test et mesurons à " +#~ "la fois la perte et la précision" +#~ " sur l'ensemble de test." + +#~ msgid "" +#~ "After defining the training and testing" +#~ " of a MXNet machine learning model," +#~ " we use these functions to implement" +#~ " a Flower client." +#~ msgstr "" +#~ "Après avoir défini la formation et " +#~ "le test d'un modèle d'apprentissage " +#~ "automatique MXNet, nous utilisons ces " +#~ "fonctions pour mettre en œuvre un " +#~ "client Flower." 
+ +#~ msgid "Our Flower clients will use a simple :code:`Sequential` model:" +#~ msgstr "Nos clients Flower utiliseront un modèle simple :code:`Sequential` :" + +#~ msgid "" +#~ "After loading the dataset with " +#~ ":code:`load_data()` we perform one forward " +#~ "propagation to initialize the model and" +#~ " model parameters with :code:`model(init)`. " +#~ "Next, we implement a Flower client." +#~ msgstr "" +#~ "Après avoir chargé l'ensemble de données" +#~ " avec :code:`load_data()`, nous effectuons " +#~ "une propagation vers l'avant pour " +#~ "initialiser le modèle et les paramètres" +#~ " du modèle avec :code:`model(init)`. " +#~ "Ensuite, nous implémentons un client " +#~ "Flower." -#~ msgid "|c9d935b4284e4c389a33d86b33e07c0a|" +#~ msgid "" +#~ "Flower provides a convenience class " +#~ "called :code:`NumPyClient` which makes it " +#~ "easier to implement the :code:`Client` " +#~ "interface when your workload uses MXNet." +#~ " Implementing :code:`NumPyClient` usually means" +#~ " defining the following methods " +#~ "(:code:`set_parameters` is optional though):" #~ msgstr "" +#~ "Flower fournit une classe de commodité" +#~ " appelée :code:`NumPyClient` qui facilite " +#~ "l'implémentation de l'interface :code:`Client` " +#~ "lorsque ta charge de travail utilise " +#~ "MXNet. L'implémentation de :code:`NumPyClient` " +#~ "signifie généralement la définition des " +#~ "méthodes suivantes (:code:`set_parameters` est " +#~ "cependant facultatif) :" -#~ msgid "|00727b5faffb468f84dd1b03ded88638|" +#~ msgid "They can be implemented in the following way:" +#~ msgstr "Ils peuvent être mis en œuvre de la manière suivante :" + +#~ msgid "" +#~ "We can now create an instance of" +#~ " our class :code:`MNISTClient` and add " +#~ "one line to actually run this " +#~ "client:" #~ msgstr "" +#~ "Nous pouvons maintenant créer une " +#~ "instance de notre classe :code:`MNISTClient`" +#~ " et ajouter une ligne pour exécuter" +#~ " ce client :" -#~ msgid "|daf0cf0ff4c24fd29439af78416cf47b|" +#~ msgid "" +#~ "That's it for the client. We only" +#~ " have to implement :code:`Client` or " +#~ ":code:`NumPyClient` and call " +#~ ":code:`fl.client.start_client()` or " +#~ ":code:`fl.client.start_numpy_client()`. The string " +#~ ":code:`\"0.0.0.0:8080\"` tells the client " +#~ "which server to connect to. In our" +#~ " case we can run the server and" +#~ " the client on the same machine, " +#~ "therefore we use :code:`\"0.0.0.0:8080\"`. If" +#~ " we run a truly federated workload" +#~ " with the server and clients running" +#~ " on different machines, all that " +#~ "needs to change is the " +#~ ":code:`server_address` we pass to the " +#~ "client." #~ msgstr "" +#~ "C'est tout pour le client. Il nous" +#~ " suffit d'implémenter :code:`Client` ou " +#~ ":code:`NumPyClient` et d'appeler " +#~ ":code:`fl.client.start_client()`. La chaîne " +#~ ":code:`\"0.0.0:8080\"` indique au client à " +#~ "quel serveur se connecter. Dans notre" +#~ " cas, nous pouvons exécuter le " +#~ "serveur et le client sur la même" +#~ " machine, c'est pourquoi nous utilisons " +#~ ":code:`\"0.0.0:8080\"`. Si nous exécutons une" +#~ " charge de travail véritablement fédérée" +#~ " avec le serveur et les clients " +#~ "s'exécutant sur des machines différentes, " +#~ "tout ce qui doit changer est " +#~ ":code:`server_address` que nous transmettons " +#~ "au client." + +#~ msgid "" +#~ "With both client and server ready, " +#~ "we can now run everything and see" +#~ " federated learning in action. 
Federated" +#~ " learning systems usually have a " +#~ "server and multiple clients. We " +#~ "therefore have to start the server " +#~ "first:" +#~ msgstr "" +#~ "Le client et le serveur étant " +#~ "prêts, nous pouvons maintenant tout " +#~ "exécuter et voir l'apprentissage fédéré " +#~ "en action. Les systèmes d'apprentissage " +#~ "fédéré ont généralement un serveur et" +#~ " plusieurs clients. Nous devons donc " +#~ "commencer par démarrer le serveur :" -#~ msgid "|9f093007080d471d94ca90d3e9fde9b6|" +#~ msgid "" +#~ "Congratulations! You've successfully built and" +#~ " run your first federated learning " +#~ "system. The full `source code " +#~ "`_ for this example can " +#~ "be found in :code:`examples/quickstart-mxnet`." #~ msgstr "" +#~ "Félicitations ! Tu as réussi à " +#~ "construire et à faire fonctionner ton" +#~ " premier système d'apprentissage fédéré. Le" +#~ " code source complet " +#~ "`_ de cet exemple se " +#~ "trouve dans :code:`examples/quickstart-mxnet`." -#~ msgid "|46a26e6150e0479fbd3dfd655f36eb13|" +#~ msgid ":code:`load_mnist()`" +#~ msgstr ":code:`load_mnist()`" + +#~ msgid "Loads the MNIST dataset using OpenML" +#~ msgstr "Charge l'ensemble de données MNIST à l'aide d'OpenML" + +#~ msgid ":code:`shuffle()`" +#~ msgstr ":code:`shuffle()`" + +#~ msgid "Shuffles data and its label" +#~ msgstr "Mélange les données et leur étiquette" + +#~ msgid ":code:`partition()`" +#~ msgstr ":code:`partition()`" + +#~ msgid "Splits datasets into a number of partitions" +#~ msgstr "Divise les ensembles de données en un certain nombre de partitions" + +#~ msgid "" +#~ "We load the MNIST dataset from " +#~ "`OpenML " +#~ "`_, a" +#~ " popular image classification dataset of" +#~ " handwritten digits for machine learning." +#~ " The utility :code:`utils.load_mnist()` downloads" +#~ " the training and test data. The " +#~ "training set is split afterwards into" +#~ " 10 partitions with :code:`utils.partition()`." #~ msgstr "" +#~ "Nous chargeons l'ensemble de données " +#~ "MNIST de `OpenML `_," +#~ " un ensemble de données de " +#~ "classification d'images populaires de chiffres" +#~ " manuscrits pour l'apprentissage automatique. " +#~ "L'utilitaire :code:`utils.load_mnist()` télécharge " +#~ "les données d'entraînement et de test." +#~ " L'ensemble d'entraînement est ensuite " +#~ "divisé en 10 partitions avec " +#~ ":code:`utils.partition()`." -#~ msgid "|3daba297595c4c7fb845d90404a6179a|" +#~ msgid "Let's get stated!" +#~ msgstr "Allons-y, déclarons-le !" 
+ +#~ msgid "|2b5c62c529f6416f840c594cce062fbb|" #~ msgstr "" -#~ msgid "|5769874fa9c4455b80b2efda850d39d7|" +#~ msgid "|90b334680cb7467d9a04d39b8e8dca9f|" #~ msgstr "" -#~ msgid "|ba47ffb421814b0f8f9fa5719093d839|" +#~ msgid "|65764ceee89f4335bfd93fd0b115e831|" #~ msgstr "" -#~ msgid "|aeac5bf79cbf497082e979834717e01b|" +#~ msgid "|d97319ec28bb407ea0ab9705e38f3bcf|" #~ msgstr "" -#~ msgid "|ce27ed4bbe95459dba016afc42486ba2|" +#~ msgid "|11e95ac83a8548d8b3505b4663187d07|" #~ msgstr "" -#~ msgid "|ae94a7f71dda443cbec2385751427d41|" +#~ msgid "|1dab2f3a23674abc8a6731f20fa10730|" #~ msgstr "" -#~ msgid "|e61fce4d43d243e7bb08bdde97d81ce6|" +#~ msgid "|7f0ee162da38450788493a21627306f7|" #~ msgstr "" -#~ msgid "|08cb60859b07461588fe44e55810b050|" +#~ msgid "|296a1fb72c514b23b3d8905ff0ff98c6|" +#~ msgstr "" + +#~ msgid "|5b1408eec0d746cdb91162a9107b6089|" +#~ msgstr "" + +#~ msgid "|aef19f4b122c4e8d9f4c57f99bcd5dd2|" +#~ msgstr "" + +#~ msgid "|2881a86d8fc54ba29d96b29fc2819f4a|" +#~ msgstr "" + +#~ msgid "|ec1fe880237247e0975f52766775ab84|" +#~ msgstr "" + +#~ msgid "|9fdf048ed58d4467b2718cdf4aaf1ec3|" +#~ msgstr "" + +#~ msgid "|ff726bc5505e432388ee2fdd6ef420b9|" #~ msgstr "" diff --git a/doc/locales/fr/LC_MESSAGES/sphinx.po b/doc/locales/fr/LC_MESSAGES/sphinx.po index 28669f60bab..d1978d70144 100644 --- a/doc/locales/fr/LC_MESSAGES/sphinx.po +++ b/doc/locales/fr/LC_MESSAGES/sphinx.po @@ -8,7 +8,7 @@ msgid "" msgstr "" "Project-Id-Version: Flower \n" "Report-Msgid-Bugs-To: \n" -"POT-Creation-Date: 2023-08-29 16:41+0200\n" +"POT-Creation-Date: 2024-05-28 11:47+0200\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language: fr\n" @@ -17,21 +17,26 @@ msgstr "" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" -"Generated-By: Babel 2.12.1\n" +"Generated-By: Babel 2.15.0\n" -#: ../../source/_templates/base.html:13 +#: ../../source/_templates/base.html:18 msgid "About these documents" msgstr "" -#: ../../source/_templates/base.html:16 +#: ../../source/_templates/base.html:21 msgid "Index" msgstr "" -#: ../../source/_templates/base.html:19 +#: ../../source/_templates/base.html:24 +#: ../../source/_templates/sidebar/search.html:2 msgid "Search" msgstr "" -#: ../../source/_templates/base.html:22 +#: ../../source/_templates/base.html:27 msgid "Copyright" msgstr "" +#: ../../source/_templates/sidebar/search.html:2 +msgid "Type / to search" +msgstr "" + diff --git a/doc/locales/ko/LC_MESSAGES/framework-docs.po b/doc/locales/ko/LC_MESSAGES/framework-docs.po index 63a16023290..17960b66315 100644 --- a/doc/locales/ko/LC_MESSAGES/framework-docs.po +++ b/doc/locales/ko/LC_MESSAGES/framework-docs.po @@ -7,18 +7,17 @@ msgid "" msgstr "" "Project-Id-Version: Flower main\n" "Report-Msgid-Bugs-To: \n" -"POT-Creation-Date: 2024-05-13 09:48+0200\n" +"POT-Creation-Date: 2024-05-28 11:47+0200\n" "PO-Revision-Date: 2024-05-14 21:01+0000\n" "Last-Translator: \"Young D. 
Kwon\" \n" -"Language-Team: Korean \n" "Language: ko\n" +"Language-Team: Korean \n" +"Plural-Forms: nplurals=1; plural=0;\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" -"Plural-Forms: nplurals=1; plural=0;\n" -"X-Generator: Weblate 5.6-dev\n" -"Generated-By: Babel 2.14.0\n" +"Generated-By: Babel 2.15.0\n" #: ../../source/contributor-explanation-architecture.rst:2 msgid "Flower Architecture" @@ -32,8 +31,7 @@ msgstr "엣지 클라이언트 엔진" msgid "" "`Flower `_ core framework architecture with Edge " "Client Engine" -msgstr "`Flower `_의 핵심 프레임워크 아키텍처와 엣지 클라이언트 " -"엔진" +msgstr "`Flower `_의 핵심 프레임워크 아키텍처와 엣지 클라이언트 엔진" #: ../../source/contributor-explanation-architecture.rst:13 msgid "Virtual Client Engine" @@ -43,8 +41,7 @@ msgstr "가상 클라이언트 엔진" msgid "" "`Flower `_ core framework architecture with Virtual " "Client Engine" -msgstr "`Flower `_의 핵심 프레임워크 아키텍처와 가상 클라이언트 " -"엔진" +msgstr "`Flower `_의 핵심 프레임워크 아키텍처와 가상 클라이언트 엔진" #: ../../source/contributor-explanation-architecture.rst:21 msgid "Virtual Client Engine and Edge Client Engine in the same workload" @@ -54,8 +51,7 @@ msgstr "동일 작업에서 가상 클라이언트 엔진과 엣지 클라이언 msgid "" "`Flower `_ core framework architecture with both " "Virtual Client Engine and Edge Client Engine" -msgstr "`Flower `_의 핵심 프레임워크 아키텍처와 가상 및 엣지 " -"클라이언트 엔진" +msgstr "`Flower `_의 핵심 프레임워크 아키텍처와 가상 및 엣지 클라이언트 엔진" #: ../../source/contributor-how-to-build-docker-images.rst:2 msgid "How to build Docker Flower images locally" @@ -78,17 +74,16 @@ msgid "" msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:11 -#: ../../source/how-to-run-flower-using-docker.rst:129 msgid "Clone the flower repository." msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:17 -#: ../../source/how-to-run-flower-using-docker.rst:135 +#: ../../source/how-to-run-flower-using-docker.rst:144 msgid "Verify the Docker daemon is running." msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:19 -#: ../../source/how-to-run-flower-using-docker.rst:137 +#: ../../source/how-to-run-flower-using-docker.rst:146 msgid "" "Please follow the first section on :doc:`Run Flower using Docker ` which covers this step in more detail." @@ -288,22 +283,10 @@ msgstr "" msgid "" "If you want to use your own base image instead of the official Flower " "base image, all you need to do is set the ``BASE_REPOSITORY``, " -"``PYTHON_VERSION`` and ``UBUNTU_VERSION`` build arguments. .. code-" -"block:: bash" -msgstr "" - -#: ../../source/contributor-how-to-build-docker-images.rst:129 -msgid "$ cd src/docker/superlink/ $ docker build \\" +"``PYTHON_VERSION`` and ``UBUNTU_VERSION`` build arguments." msgstr "" -#: ../../source/contributor-how-to-build-docker-images.rst:131 -msgid "" -"--build-arg BASE_REPOSITORY=flwr_base \\ --build-arg PYTHON_VERSION=3.11 " -"\\ --build-arg UBUNTU_VERSION=ubuntu22.04 \\ --build-arg " -"FLWR_VERSION=1.8.0 \\ -t flwr_superlink:0.1.0 ." 
-msgstr "" - -#: ../../source/contributor-how-to-build-docker-images.rst:137 +#: ../../source/contributor-how-to-build-docker-images.rst:138 msgid "After creating the image, we can test whether the image is working:" msgstr "" @@ -1511,7 +1494,7 @@ msgstr "" msgid "Creating and merging a pull request (PR)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:205 +#: ../../source/contributor-tutorial-contribute-on-github.rst:206 msgid "**Create the PR**" msgstr "" @@ -1544,178 +1527,180 @@ msgstr "" #: ../../source/contributor-tutorial-contribute-on-github.rst:193 msgid "" +"The title should be changed to adhere to the :ref:`pr_title_format` " +"guidelines, otherwise it won't be possible to merge the PR. So in this " +"case, a correct title might be ``docs(framework:skip) Fix typos``." +msgstr "" + +#: ../../source/contributor-tutorial-contribute-on-github.rst:196 +msgid "" "The input box in the middle is there for you to describe what your PR " "does and to link it to existing issues. We have placed comments (that " "won't be rendered once the PR is opened) to guide you through the " "process." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:196 -msgid "" -"It is important to follow the instructions described in comments. For " -"instance, in order to not break how our changelog system works, you " -"should read the information above the ``Changelog entry`` section " -"carefully. You can also checkout some examples and details in the " -":ref:`changelogentry` appendix." +#: ../../source/contributor-tutorial-contribute-on-github.rst:199 +msgid "It is important to follow the instructions described in comments." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:200 +#: ../../source/contributor-tutorial-contribute-on-github.rst:201 msgid "" "At the bottom you will find the button to open the PR. This will notify " "reviewers that a new PR has been opened and that they should look over it" " to merge or to request changes." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:203 +#: ../../source/contributor-tutorial-contribute-on-github.rst:204 msgid "" "If your PR is not yet ready for review, and you don't want to notify " "anyone, you have the option to create a draft pull request:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:208 +#: ../../source/contributor-tutorial-contribute-on-github.rst:209 msgid "**Making new changes**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:208 +#: ../../source/contributor-tutorial-contribute-on-github.rst:209 msgid "" "Once the PR has been opened (as draft or not), you can still push new " "commits to it the same way we did before, by making changes to the branch" " associated with the PR." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:230 +#: ../../source/contributor-tutorial-contribute-on-github.rst:231 msgid "**Review the PR**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:211 +#: ../../source/contributor-tutorial-contribute-on-github.rst:212 msgid "" "Once the PR has been opened or once the draft PR has been marked as " "ready, a review from code owners will be automatically requested:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:215 +#: ../../source/contributor-tutorial-contribute-on-github.rst:216 msgid "" "Code owners will then look into the code, ask questions, request changes " "or validate the PR." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:217 +#: ../../source/contributor-tutorial-contribute-on-github.rst:218 msgid "Merging will be blocked if there are ongoing requested changes." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:221 +#: ../../source/contributor-tutorial-contribute-on-github.rst:222 msgid "" "To resolve them, just push the necessary changes to the branch associated" " with the PR:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:225 +#: ../../source/contributor-tutorial-contribute-on-github.rst:226 msgid "And resolve the conversation:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:229 +#: ../../source/contributor-tutorial-contribute-on-github.rst:230 msgid "" "Once all the conversations have been resolved, you can re-request a " "review." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:250 +#: ../../source/contributor-tutorial-contribute-on-github.rst:251 msgid "**Once the PR is merged**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:233 +#: ../../source/contributor-tutorial-contribute-on-github.rst:234 msgid "" "If all the automatic tests have passed and reviewers have no more changes" " to request, they can approve the PR and merge it." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:237 +#: ../../source/contributor-tutorial-contribute-on-github.rst:238 msgid "" "Once it is merged, you can delete the branch on GitHub (a button should " "appear to do so) and also delete it locally by doing:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:244 +#: ../../source/contributor-tutorial-contribute-on-github.rst:245 msgid "Then you should update your forked repository by doing:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:253 +#: ../../source/contributor-tutorial-contribute-on-github.rst:254 msgid "Example of first contribution" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:256 +#: ../../source/contributor-tutorial-contribute-on-github.rst:257 msgid "Problem" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:258 +#: ../../source/contributor-tutorial-contribute-on-github.rst:259 msgid "" "For our documentation, we've started to use the `Diàtaxis framework " "`_." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:260 +#: ../../source/contributor-tutorial-contribute-on-github.rst:261 msgid "" "Our \"How to\" guides should have titles that continue the sentence \"How" " to …\", for example, \"How to upgrade to Flower 1.0\"." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:262 +#: ../../source/contributor-tutorial-contribute-on-github.rst:263 msgid "" "Most of our guides do not follow this new format yet, and changing their " "title is (unfortunately) more involved than one might think." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:264 +#: ../../source/contributor-tutorial-contribute-on-github.rst:265 msgid "" "This issue is about changing the title of a doc from present continuous " "to present simple." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:266 +#: ../../source/contributor-tutorial-contribute-on-github.rst:267 msgid "" "Let's take the example of \"Saving Progress\" which we changed to \"Save " "Progress\". Does this pass our check?" 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:268 +#: ../../source/contributor-tutorial-contribute-on-github.rst:269 msgid "Before: \"How to saving progress\" ❌" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:270 +#: ../../source/contributor-tutorial-contribute-on-github.rst:271 msgid "After: \"How to save progress\" ✅" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:273 +#: ../../source/contributor-tutorial-contribute-on-github.rst:274 msgid "Solution" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:275 +#: ../../source/contributor-tutorial-contribute-on-github.rst:276 msgid "" "This is a tiny change, but it'll allow us to test your end-to-end setup. " "After cloning and setting up the Flower repo, here's what you should do:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:277 +#: ../../source/contributor-tutorial-contribute-on-github.rst:278 msgid "Find the source file in ``doc/source``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:278 +#: ../../source/contributor-tutorial-contribute-on-github.rst:279 msgid "" "Make the change in the ``.rst`` file (beware, the dashes under the title " "should be the same length as the title itself)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:279 +#: ../../source/contributor-tutorial-contribute-on-github.rst:280 msgid "" "Build the docs and `check the result `_" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:282 +#: ../../source/contributor-tutorial-contribute-on-github.rst:283 msgid "Rename file" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:284 +#: ../../source/contributor-tutorial-contribute-on-github.rst:285 msgid "" "You might have noticed that the file name still reflects the old wording." " If we just change the file, then we break all existing links to it - it " @@ -1723,142 +1708,68 @@ msgid "" "engine ranking." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:287 +#: ../../source/contributor-tutorial-contribute-on-github.rst:288 msgid "Here's how to change the file name:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:289 +#: ../../source/contributor-tutorial-contribute-on-github.rst:290 msgid "Change the file name to ``save-progress.rst``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:290 +#: ../../source/contributor-tutorial-contribute-on-github.rst:291 msgid "Add a redirect rule to ``doc/source/conf.py``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:292 +#: ../../source/contributor-tutorial-contribute-on-github.rst:293 msgid "" "This will cause a redirect from ``saving-progress.html`` to ``save-" "progress.html``, old links will continue to work." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:295 +#: ../../source/contributor-tutorial-contribute-on-github.rst:296 msgid "Apply changes in the index file" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:297 +#: ../../source/contributor-tutorial-contribute-on-github.rst:298 msgid "" "For the lateral navigation bar to work properly, it is very important to " "update the ``index.rst`` file as well. This is where we define the whole " "arborescence of the navbar." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:300 +#: ../../source/contributor-tutorial-contribute-on-github.rst:301 msgid "Find and modify the file name in ``index.rst``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:303 +#: ../../source/contributor-tutorial-contribute-on-github.rst:304 msgid "Open PR" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:305 +#: ../../source/contributor-tutorial-contribute-on-github.rst:306 msgid "" "Commit the changes (commit messages are always imperative: \"Do " "something\", in this case \"Change …\")" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:306 -msgid "Push the changes to your fork" -msgstr "" - #: ../../source/contributor-tutorial-contribute-on-github.rst:307 -msgid "Open a PR (as shown above)" +msgid "Push the changes to your fork" msgstr "" #: ../../source/contributor-tutorial-contribute-on-github.rst:308 -msgid "Wait for it to be approved!" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:309 -msgid "Congrats! 🥳 You're now officially a Flower contributor!" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:313 -msgid "How to write a good PR title" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:315 -msgid "" -"A well-crafted PR title helps team members quickly understand the purpose" -" and scope of the changes being proposed. Here's a guide to help you " -"write a good GitHub PR title:" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:317 -msgid "" -"1. Be Clear and Concise: Provide a clear summary of the changes in a " -"concise manner. 1. Use Actionable Verbs: Start with verbs like \"Add,\" " -"\"Update,\" or \"Fix\" to indicate the purpose. 1. Include Relevant " -"Information: Mention the affected feature or module for context. 1. Keep " -"it Short: Avoid lengthy titles for easy readability. 1. Use Proper " -"Capitalization and Punctuation: Follow grammar rules for clarity." 
-msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:323 -msgid "" -"Let's start with a few examples for titles that should be avoided because" -" they do not provide meaningful information:" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:325 -msgid "Implement Algorithm" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:326 -msgid "Database" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:327 -msgid "Add my_new_file.py to codebase" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:328 -msgid "Improve code in module" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:329 -msgid "Change SomeModule" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:331 msgid "" -"Here are a few positive examples which provide helpful information " -"without repeating how they do it, as that is already visible in the " -"\"Files changed\" section of the PR:" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:333 -msgid "Update docs banner to mention Flower Summit 2023" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:334 -msgid "Remove unnecessary XGBoost dependency" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:335 -msgid "Remove redundant attributes in strategies subclassing FedAvg" +"Open a PR (as shown above) with title ``docs(framework) Update how-to " +"guide title``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:336 -msgid "Add CI job to deploy the staging system when the ``main`` branch changes" +#: ../../source/contributor-tutorial-contribute-on-github.rst:309 +msgid "Wait for it to be approved!" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:337 -msgid "" -"Add new amazing library which will be used to improve the simulation " -"engine" +#: ../../source/contributor-tutorial-contribute-on-github.rst:310 +msgid "Congrats! 🥳 You're now officially a Flower contributor!" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:341 +#: ../../source/contributor-tutorial-contribute-on-github.rst:314 #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:548 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:946 #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:727 @@ -1867,153 +1778,91 @@ msgstr "" msgid "Next steps" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:343 +#: ../../source/contributor-tutorial-contribute-on-github.rst:316 msgid "" "Once you have made your first PR, and want to contribute more, be sure to" " check out the following :" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:345 +#: ../../source/contributor-tutorial-contribute-on-github.rst:318 msgid "" ":doc:`Good first contributions `, where you should particularly look into the " ":code:`baselines` contributions." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:349 +#: ../../source/contributor-tutorial-contribute-on-github.rst:322 #: ../../source/fed/0000-20200102-fed-template.md:60 msgid "Appendix" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:354 -msgid "Changelog entry" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:356 -msgid "" -"When opening a new PR, inside its description, there should be a " -"``Changelog entry`` header." -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:358 -msgid "" -"Above this header you should see the following comment that explains how " -"to write your changelog entry:" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:360 -msgid "" -"Inside the following 'Changelog entry' section, you should put the " -"description of your changes that will be added to the changelog alongside" -" your PR title." +#: ../../source/contributor-tutorial-contribute-on-github.rst:327 +msgid "PR title format" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:363 -msgid "" -"If the section is completely empty (without any token) or non-existent, " -"the changelog will just contain the title of the PR for the changelog " -"entry, without any description." +#: ../../source/contributor-tutorial-contribute-on-github.rst:329 +msgid "We enforce the following PR title format:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:366 +#: ../../source/contributor-tutorial-contribute-on-github.rst:335 msgid "" -"If the section contains some text other than tokens, it will use it to " -"add a description to the change." +"(or ``(:skip) `` to ignore the PR in the " +"changelog)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:368 +#: ../../source/contributor-tutorial-contribute-on-github.rst:337 msgid "" -"If the section contains one of the following tokens it will ignore any " -"other text and put the PR under the corresponding section of the " -"changelog:" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:370 -msgid " is for classifying a PR as a general improvement." +"Where ```` needs to be in ``{ci, fix, feat, docs, refactor, " +"break}``, ```` should be in ``{framework, baselines, datasets, " +"examples, or '*' when modifying multiple projects which requires the " +"':skip' flag to be used}``, and ```` starts with a capitalised " +"verb in the imperative mood." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:372 -msgid " is to not add the PR to the changelog" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:374 -msgid " is to add a general baselines change to the PR" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:376 -msgid " is to add a general examples change to the PR" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:378 -msgid " is to add a general sdk change to the PR" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:380 -msgid " is to add a general simulations change to the PR" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:382 -msgid "Note that only one token should be used." +#: ../../source/contributor-tutorial-contribute-on-github.rst:341 +msgid "Valid examples:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:384 -msgid "" -"Its content must have a specific format. 
We will break down what each " -"possibility does:" +#: ../../source/contributor-tutorial-contribute-on-github.rst:343 +msgid "``feat(framework) Add flwr build CLI command``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:386 -msgid "" -"If the ``### Changelog entry`` section contains nothing or doesn't exist," -" the following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:344 +msgid "``refactor(examples:skip) Improve quickstart-pytorch logging``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:390 -msgid "" -"If the ``### Changelog entry`` section contains a description (and no " -"token), the following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:345 +msgid "``ci(*:skip) Enforce PR title format``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:396 -msgid "" -"If the ``### Changelog entry`` section contains ````, nothing will " -"change in the changelog." +#: ../../source/contributor-tutorial-contribute-on-github.rst:347 +msgid "Invalid examples:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:398 -msgid "" -"If the ``### Changelog entry`` section contains ````, the " -"following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:349 +msgid "``feat(framework): Add flwr build CLI command`` (extra ``:``)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:402 +#: ../../source/contributor-tutorial-contribute-on-github.rst:350 msgid "" -"If the ``### Changelog entry`` section contains ````, the " -"following text will be added to the changelog::" +"``feat(*) Add flwr build CLI command`` (missing ``skip`` flag along with " +"``*``)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:406 -msgid "" -"If the ``### Changelog entry`` section contains ````, the " -"following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:351 +msgid "``feat(skip) Add flwr build CLI command`` (missing ````)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:410 -msgid "" -"If the ``### Changelog entry`` section contains ````, the following " -"text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:352 +msgid "``feat(framework) add flwr build CLI command`` (non capitalised verb)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:414 -msgid "" -"If the ``### Changelog entry`` section contains ````, the " -"following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:353 +msgid "``feat(framework) Add flwr build CLI command.`` (dot at the end)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:418 -msgid "" -"Note that only one token must be provided, otherwise, only the first " -"action (in the order listed above), will be performed." 
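The PR title rules above boil down to the pattern ``<type>(<project>) <subject>``, with ``<type>`` drawn from ``{ci, fix, feat, docs, refactor, break}`` and ``<project>`` from ``{framework, baselines, datasets, examples}`` (or ``*`` together with ``:skip``). As a small sketch of opening such a PR from the command line, assuming the GitHub CLI (``gh``) is installed, which the guide itself does not require:

.. code-block:: bash

    # The title follows the enforced "<type>(<project>) <subject>" pattern
    gh pr create --title "docs(framework) Update how-to guide title"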
+#: ../../source/contributor-tutorial-contribute-on-github.rst:354 +msgid "``Add flwr build CLI command.`` (missing ``()``)" msgstr "" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:2 @@ -2021,7 +1870,7 @@ msgid "Get started as a contributor" msgstr "" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:5 -#: ../../source/how-to-run-flower-using-docker.rst:123 +#: ../../source/how-to-run-flower-using-docker.rst:132 msgid "Prerequisites" msgstr "" @@ -3779,7 +3628,7 @@ msgstr "" #: ../../source/how-to-authenticate-supernodes.rst:12 msgid "" "We recommend you to check out the complete `code example " -"`_ demonstrating federated learning with Flower in an " "authenticated setting." msgstr "" @@ -3812,18 +3661,19 @@ msgid "" "authentication enabled:" msgstr "" -#: ../../source/how-to-authenticate-supernodes.rst:34 -msgid "Let's break down the :code:`--require-client-authentication` flag:" +#: ../../source/how-to-authenticate-supernodes.rst:36 +msgid "Let's break down the authentication flags:" msgstr "" -#: ../../source/how-to-authenticate-supernodes.rst:36 +#: ../../source/how-to-authenticate-supernodes.rst:38 msgid "" -"The first argument is a path to a CSV file storing all known node public " -"keys. You need to store all known node public keys that are allowed to " -"participate in a federation in one CSV file (:code:`.csv`)." +"The first flag :code:`--auth-list-public-keys` expects a path to a CSV " +"file storing all known node public keys. You need to store all known node" +" public keys that are allowed to participate in a federation in one CSV " +"file (:code:`.csv`)." msgstr "" -#: ../../source/how-to-authenticate-supernodes.rst:38 +#: ../../source/how-to-authenticate-supernodes.rst:40 msgid "" "A valid CSV file storing known node public keys should list the keys in " "OpenSSH format, separated by commas and without any comments. For an " @@ -3831,14 +3681,15 @@ msgid "" "known node public keys." msgstr "" -#: ../../source/how-to-authenticate-supernodes.rst:40 +#: ../../source/how-to-authenticate-supernodes.rst:42 msgid "" -"The second and third arguments are paths to the server's private and " +"The second and third flags :code:`--auth-superlink-private-key` and :code" +":`--auth-superlink-public-key` expect paths to the server's private and " "public keys. For development purposes, you can generate a private and " "public key pair using :code:`ssh-keygen -t ecdsa -b 384`." msgstr "" -#: ../../source/how-to-authenticate-supernodes.rst:43 +#: ../../source/how-to-authenticate-supernodes.rst:45 msgid "" "In Flower 1.9, there is no support for dynamically removing, editing, or " "adding known node public keys to the SuperLink. To change the set of " @@ -3847,11 +3698,11 @@ msgid "" " nodes is on the roadmap to be released in Flower 1.10 (ETA: June)." 
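The three SuperLink authentication flags described above can be combined into a single launch command. A minimal development-only sketch, assuming the keys live in a local ``keys/`` directory; the file names are placeholders chosen here, and the guide's full setup also configures TLS certificates for the SuperLink (omitted for brevity):

.. code-block:: bash

    mkdir -p keys
    # Development-only key pair for the SuperLink, generated as suggested in the guide
    ssh-keygen -t ecdsa -b 384 -N "" -f keys/server_credentials

    # Start the SuperLink with node authentication enabled
    flower-superlink \
        --auth-list-public-keys keys/client_public_keys.csv \
        --auth-superlink-private-key keys/server_credentials \
        --auth-superlink-public-key keys/server_credentials.pub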
msgstr "" -#: ../../source/how-to-authenticate-supernodes.rst:49 +#: ../../source/how-to-authenticate-supernodes.rst:51 msgid "Enable node authentication in :code:`SuperNode`" msgstr "" -#: ../../source/how-to-authenticate-supernodes.rst:51 +#: ../../source/how-to-authenticate-supernodes.rst:53 msgid "" "Similar to the long-running Flower server (:code:`SuperLink`), you can " "easily enable node authentication in the long-running Flower client " @@ -3859,19 +3710,20 @@ msgid "" "authenticated :code:`SuperNode`:" msgstr "" -#: ../../source/how-to-authenticate-supernodes.rst:61 +#: ../../source/how-to-authenticate-supernodes.rst:64 msgid "" -"The :code:`--authentication-keys` flag expects two arguments: a path to " -"the node's private key file and a path to the node's public key file. For" -" development purposes, you can generate a private and public key pair " -"using :code:`ssh-keygen -t ecdsa -b 384`." +"The :code:`--auth-supernode-private-key` flag expects a path to the " +"node's private key file and the :code:`--auth-supernode-public-key` flag " +"expects a path to the node's public key file. For development purposes, " +"you can generate a private and public key pair using :code:`ssh-keygen -t" +" ecdsa -b 384`." msgstr "" -#: ../../source/how-to-authenticate-supernodes.rst:65 +#: ../../source/how-to-authenticate-supernodes.rst:68 msgid "Security notice" msgstr "" -#: ../../source/how-to-authenticate-supernodes.rst:67 +#: ../../source/how-to-authenticate-supernodes.rst:70 msgid "" "The system's security relies on the credentials of the SuperLink and each" " SuperNode. Therefore, it is imperative to safeguard and safely store the" @@ -3882,14 +3734,14 @@ msgid "" "methods." msgstr "" -#: ../../source/how-to-authenticate-supernodes.rst:72 +#: ../../source/how-to-authenticate-supernodes.rst:75 #: ../../source/how-to-enable-ssl-connections.rst:65 #: ../../source/how-to-use-built-in-mods.rst:85 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:287 msgid "Conclusion" msgstr "" -#: ../../source/how-to-authenticate-supernodes.rst:74 +#: ../../source/how-to-authenticate-supernodes.rst:77 msgid "" "You should now have learned how to start a long-running Flower server " "(:code:`SuperLink`) and client (:code:`SuperNode`) with node " @@ -4481,7 +4333,7 @@ msgid "Install stable release" msgstr "" #: ../../source/how-to-install-flower.rst:15 -#: ../../source/how-to-upgrade-to-flower-next.rst:45 +#: ../../source/how-to-upgrade-to-flower-next.rst:46 msgid "Using pip" msgstr "" @@ -4819,7 +4671,7 @@ msgstr "" msgid "" "The simplest way to get started with Flower is by using the pre-made " "Docker images, which you can find on `Docker Hub " -"`_." +"`__." msgstr "" #: ../../source/how-to-run-flower-using-docker.rst:7 @@ -4842,26 +4694,34 @@ msgid "" "official Docker website." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:25 +#: ../../source/how-to-run-flower-using-docker.rst:26 +msgid "" +"To ensure optimal performance and compatibility, the SuperLink, SuperNode" +" and ServerApp image must have the same version when running together. " +"This guarantees seamless integration and avoids potential conflicts or " +"issues that may arise from using different versions." 
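On the node side, the matching flags described above can be sketched as follows. This assumes the long-running client is started with ``flower-client-app`` and a ``client:app`` object reference, as in the Docker sections of this guide; connection and TLS flags are left out, and the key paths are placeholders:

.. code-block:: bash

    # Development-only key pair for this SuperNode
    ssh-keygen -t ecdsa -b 384 -N "" -f keys/client_credentials

    # Start the client with its credentials
    flower-client-app client:app \
        --auth-supernode-private-key keys/client_credentials \
        --auth-supernode-public-key keys/client_credentials.pub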
+msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:31 msgid "Flower SuperLink" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:28 +#: ../../source/how-to-run-flower-using-docker.rst:34 msgid "Quickstart" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:30 +#: ../../source/how-to-run-flower-using-docker.rst:36 msgid "If you're looking to try out Flower, you can use the following command:" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:36 +#: ../../source/how-to-run-flower-using-docker.rst:42 msgid "" "The command pulls the Docker image with the tag ``1.8.0`` from Docker " "Hub. The tag specifies the Flower version. In this case, Flower 1.8.0. " "The ``--rm`` flag tells Docker to remove the container after it exits." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:42 +#: ../../source/how-to-run-flower-using-docker.rst:48 msgid "" "By default, the Flower SuperLink keeps state in-memory. When using the " "Docker flag ``--rm``, the state is not persisted between container " @@ -4869,7 +4729,7 @@ msgid "" "system." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:46 +#: ../../source/how-to-run-flower-using-docker.rst:52 msgid "" "The ``-p :`` flag tells Docker to map the ports " "``9091``/``9092`` of the host to ``9091``/``9092`` of the container, " @@ -4879,38 +4739,39 @@ msgid "" " flag ``--insecure``." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:53 -#: ../../source/how-to-run-flower-using-docker.rst:216 +#: ../../source/how-to-run-flower-using-docker.rst:59 +#: ../../source/how-to-run-flower-using-docker.rst:238 +#: ../../source/how-to-run-flower-using-docker.rst:354 msgid "" "The ``--insecure`` flag enables insecure communication (using HTTP, not " "HTTPS) and should only be used for testing purposes. We strongly " "recommend enabling `SSL `_ when " +"flower-using-docker.html#enabling-ssl-for-secure-connections>`__ when " "deploying to a production environment." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:58 +#: ../../source/how-to-run-flower-using-docker.rst:64 msgid "" "You can use ``--help`` to view all available flags that the SuperLink " "supports:" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:65 +#: ../../source/how-to-run-flower-using-docker.rst:71 msgid "Mounting a volume to store the state on the host system" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:67 +#: ../../source/how-to-run-flower-using-docker.rst:73 msgid "" "If you want to persist the state of the SuperLink on your host system, " "all you need to do is specify a path where you want to save the file on " "your host system and a name for the database file. In the example below, " -"we tell Docker via the flag ``-v`` to mount the user's home directory " -"(``~/`` on your host) into the ``/app/`` directory of the container. " -"Furthermore, we use the flag ``--database`` to specify the name of the " -"database file." +"we tell Docker via the flag ``--volume`` to mount the user's home " +"directory (``~/`` on your host) into the ``/app/`` directory of the " +"container. Furthermore, we use the flag ``--database`` to specify the " +"name of the database file." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:80 +#: ../../source/how-to-run-flower-using-docker.rst:86 msgid "" "As soon as the SuperLink starts, the file ``state.db`` is created in the " "user's home directory on your host system. 
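Putting the SuperLink flags described above together (the ``-p <host>:<container>`` port mappings, ``--volume`` for persistence, and ``--database`` for the file name), a typical local invocation might look like this sketch; the ``flwr/superlink`` repository name and the ``1.8.0`` tag follow the rest of the guide:

.. code-block:: bash

    docker run --rm \
        -p 9091:9091 -p 9092:9092 \
        --volume ~/:/app/ \
        flwr/superlink:1.8.0 \
        --insecure \
        --database state.db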
If the file already exists, " @@ -4918,210 +4779,350 @@ msgid "" "SuperLink with an empty database, simply remove the ``state.db`` file." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:85 -#: ../../source/how-to-run-flower-using-docker.rst:234 +#: ../../source/how-to-run-flower-using-docker.rst:91 +#: ../../source/how-to-run-flower-using-docker.rst:260 +#: ../../source/how-to-run-flower-using-docker.rst:375 msgid "Enabling SSL for secure connections" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:87 +#: ../../source/how-to-run-flower-using-docker.rst:93 msgid "" "To enable SSL, you will need a PEM-encoded root certificate, a PEM-" "encoded private key and a PEM-encoded certificate chain." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:91 +#: ../../source/how-to-run-flower-using-docker.rst:97 msgid "" "For testing purposes, you can generate your own self-signed certificates." " The `Enable SSL connections `_ page contains a section that " -"will guide you through the process." +"enable-ssl-connections.html#certificates>`__ page contains a section that" +" will guide you through the process." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:95 +#: ../../source/how-to-run-flower-using-docker.rst:101 msgid "" "Assuming all files we need are in the local ``certificates`` directory, " -"we can use the flag ``-v`` to mount the local directory into the " +"we can use the flag ``--volume`` to mount the local directory into the " "``/app/`` directory of the container. This allows the SuperLink to access" " the files within the container. Finally, we pass the names of the " "certificates to the SuperLink with the ``--certificates`` flag." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:107 +#: ../../source/how-to-run-flower-using-docker.rst:113 msgid "Flower SuperNode" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:109 +#: ../../source/how-to-run-flower-using-docker.rst:115 msgid "" "The SuperNode Docker image comes with a pre-installed version of Flower " "and serves as a base for building your own SuperNode image." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:114 +#: ../../source/how-to-run-flower-using-docker.rst:120 msgid "" "The SuperNode Docker image currently works only with the 1.9.0-nightly " "release. A stable version will be available when Flower 1.9.0 (stable) " "gets released (ETA: May). A SuperNode nightly image must be paired with " -"the corresponding SuperLink nightly image released on the same day. To " -"ensure the versions are in sync, using the concrete tag, e.g., " -"``1.9.0.dev20240501`` instead of ``nightly`` is recommended." +"the corresponding SuperLink and ServerApp nightly images released on the " +"same day. To ensure the versions are in sync, using the concrete tag, " +"e.g., ``1.9.0.dev20240501`` instead of ``nightly`` is recommended." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:119 +#: ../../source/how-to-run-flower-using-docker.rst:126 msgid "" -"We will use the ``app-pytorch`` example, which you can find in the Flower" -" repository, to illustrate how you can dockerize your client-app." +"We will use the ``quickstart-pytorch`` example, which you can find in the" +" Flower repository, to illustrate how you can dockerize your ClientApp." 
msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:125 +#: ../../source/how-to-run-flower-using-docker.rst:134 msgid "" "Before we can start, we need to meet a few prerequisites in our local " "development environment. You can skip the first part if you want to run " -"your client-app instead of the ``app-pytorch`` example." +"your ClientApp instead of the ``quickstart-pytorch`` example." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:138 +msgid "Clone the Flower repository." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:143 +#: ../../source/how-to-run-flower-using-docker.rst:152 msgid "Creating a SuperNode Dockerfile" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:145 +#: ../../source/how-to-run-flower-using-docker.rst:154 +#: ../../source/how-to-run-flower-using-docker.rst:289 msgid "Let's assume the following project layout:" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:156 +#: ../../source/how-to-run-flower-using-docker.rst:163 msgid "" -"First, we need to create a Dockerfile in the directory where the " -"``ClientApp`` code is located. If you use the ``app-pytorch`` example, " -"create a new file called ``Dockerfile`` in ``examples/app-pytorch``." +"First, we need to create a ``requirements.txt`` file in the directory " +"where the ``ClientApp`` code is located. In the file, we list all the " +"dependencies that the ClientApp requires." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:175 +msgid "" +"Note that `flwr `__ is already installed " +"in the ``flwr/supernode`` base image, so you only need to include other " +"package dependencies in your ``requirements.txt``, such as ``torch``, " +"``tensorflow``, etc." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:160 +#: ../../source/how-to-run-flower-using-docker.rst:179 msgid "" -"The ``Dockerfile`` contains the instructions that assemble the SuperNode " -"image." +"Next, we create a Dockerfile. If you use the ``quickstart-pytorch`` " +"example, create a new file called ``Dockerfile.supernode`` in ``examples" +"/quickstart-pytorch``." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:173 +#: ../../source/how-to-run-flower-using-docker.rst:182 +msgid "" +"The ``Dockerfile.supernode`` contains the instructions that assemble the " +"SuperNode image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:196 msgid "" "In the first two lines, we instruct Docker to use the SuperNode image " "tagged ``nightly`` as a base image and set our working directory to " "``/app``. The following instructions will now be executed in the ``/app``" -" directory. Next, we install the ``ClientApp`` dependencies by copying " -"the ``requirements.txt`` file into the image and run ``pip install``. In " -"the last two lines, we copy the ``ClientApp`` code (``client.py`` and " -"``task.py``) into the image and set the entry point to ``flower-client-" -"app``." +" directory. Next, we install the ClientApp dependencies by copying the " +"``requirements.txt`` file into the image and run ``pip install``. In the " +"last two lines, we copy the ``client.py`` module into the image and set " +"the entry point to ``flower-client-app`` with the argument " +"``client:app``. The argument is the object reference of the ClientApp " +"(``:``) that will be run inside the ClientApp." 
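The Dockerfile described above can be sketched as follows, written here as a shell heredoc so it can be pasted directly. The base image, working directory, copied files, and entry point come from the guide; the exact ``pip`` invocation is an assumption:

.. code-block:: bash

    cat > Dockerfile.supernode <<'EOF'
    FROM flwr/supernode:nightly

    WORKDIR /app
    COPY requirements.txt .
    RUN python -m pip install -U pip && python -m pip install -r requirements.txt

    COPY client.py ./
    ENTRYPOINT ["flower-client-app", "client:app"]
    EOF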
msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:181 +#: ../../source/how-to-run-flower-using-docker.rst:205 msgid "Building the SuperNode Docker image" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:183 +#: ../../source/how-to-run-flower-using-docker.rst:207 msgid "" "Next, we build the SuperNode Docker image by running the following " -"command in the directory where Dockerfile and client-app code are " -"located." +"command in the directory where Dockerfile and ClientApp code are located." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:190 +#: ../../source/how-to-run-flower-using-docker.rst:214 msgid "" "We gave the image the name ``flwr_supernode``, and the tag ``0.0.1``. " "Remember that the here chosen values only serve as an example. You can " "change them to your needs." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:195 +#: ../../source/how-to-run-flower-using-docker.rst:219 msgid "Running the SuperNode Docker image" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:197 +#: ../../source/how-to-run-flower-using-docker.rst:221 msgid "Now that we have built the SuperNode image, we can finally run it." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:205 +#: ../../source/how-to-run-flower-using-docker.rst:229 +#: ../../source/how-to-run-flower-using-docker.rst:345 msgid "Let's break down each part of this command:" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:207 +#: ../../source/how-to-run-flower-using-docker.rst:231 +#: ../../source/how-to-run-flower-using-docker.rst:347 msgid "``docker run``: This is the command to run a new Docker container." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:208 +#: ../../source/how-to-run-flower-using-docker.rst:232 +#: ../../source/how-to-run-flower-using-docker.rst:348 msgid "" "``--rm``: This option specifies that the container should be " "automatically removed when it stops." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst +#: ../../source/how-to-run-flower-using-docker.rst:233 msgid "``flwr_supernode:0.0.1``: The name the tag of the Docker image to use." msgstr "" +#: ../../source/how-to-run-flower-using-docker.rst:234 +#: ../../source/how-to-run-flower-using-docker.rst:350 +msgid "``--insecure``: This option enables insecure communication." +msgstr "" + #: ../../source/how-to-run-flower-using-docker.rst msgid "" -"``client:app``: The object reference of the ``ClientApp`` " -"(``:``)." +"``--server 192.168.1.100:9092``: This option specifies the address of the" +" SuperLinks Fleet" msgstr "" #: ../../source/how-to-run-flower-using-docker.rst +msgid "API to connect to. Remember to update it with your SuperLink IP." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:248 +msgid "" +"To test running Flower locally, you can create a `bridge network " +"`__, use the ``--network`` argument and pass the " +"name of the Docker network to run your SuperNodes." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:252 +msgid "" +"Any argument that comes after the tag is passed to the Flower SuperNode " +"binary. To see all available flags that the SuperNode supports, run:" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:262 msgid "" -"It points to the ``ClientApp`` that will be run inside the SuperNode " +"To enable SSL, we will need to mount a PEM-encoded root certificate into " +"your SuperNode container." 
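As a concrete sketch of the build and run steps above (the image name, tag, and example Fleet API address are the ones used in the guide; adjust the address to your own SuperLink):

.. code-block:: bash

    # Build the SuperNode image from Dockerfile.supernode
    docker build -f Dockerfile.supernode -t flwr_supernode:0.0.1 .

    # Run it against the SuperLink's Fleet API
    docker run --rm flwr_supernode:0.0.1 \
        --insecure \
        --server 192.168.1.100:9092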
+msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:264 +msgid "" +"Assuming the certificate already exists locally, we can use the flag " +"``--volume`` to mount the local certificate into the container's " +"``/app/`` directory. This allows the SuperNode to access the certificate " +"within the container. Use the ``--certificates`` flag when starting the " "container." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:212 -msgid "``--insecure``: This option enables insecure communication." +#: ../../source/how-to-run-flower-using-docker.rst:275 +msgid "Flower ServerApp" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst +#: ../../source/how-to-run-flower-using-docker.rst:277 msgid "" -"``--server 192.168.1.100:9092``: This option specifies the address of the" -" SuperLinks Fleet" +"The procedure for building and running a ServerApp image is almost " +"identical to the SuperNode image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:279 +msgid "" +"Similar to the SuperNode image, the ServerApp Docker image comes with a " +"pre-installed version of Flower and serves as a base for building your " +"own ServerApp image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:282 +msgid "" +"We will use the same ``quickstart-pytorch`` example as we do in the " +"Flower SuperNode section. If you have not already done so, please follow " +"the `SuperNode Prerequisites`_ before proceeding." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:287 +msgid "Creating a ServerApp Dockerfile" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:298 +msgid "" +"First, we need to create a Dockerfile in the directory where the " +"``ServerApp`` code is located. If you use the ``quickstart-pytorch`` " +"example, create a new file called ``Dockerfile.serverapp`` in ``examples" +"/quickstart-pytorch``." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:302 +msgid "" +"The ``Dockerfile.serverapp`` contains the instructions that assemble the " +"ServerApp image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:313 +msgid "" +"In the first two lines, we instruct Docker to use the ServerApp image " +"tagged ``1.8.0`` as a base image and set our working directory to " +"``/app``. The following instructions will now be executed in the ``/app``" +" directory. In the last two lines, we copy the ``server.py`` module into " +"the image and set the entry point to ``flower-server-app`` with the " +"argument ``server:app``. The argument is the object reference of the " +"ServerApp (``:``) that will be run inside the " +"ServerApp container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:321 +msgid "Building the ServerApp Docker image" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:323 +msgid "" +"Next, we build the ServerApp Docker image by running the following " +"command in the directory where Dockerfile and ServerApp code are located." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:330 +msgid "" +"We gave the image the name ``flwr_serverapp``, and the tag ``0.0.1``. " +"Remember that the here chosen values only serve as an example. You can " +"change them to your needs." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:335 +msgid "Running the ServerApp Docker image" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:337 +msgid "Now that we have built the ServerApp image, we can finally run it." 
+msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:349 +msgid "``flwr_serverapp:0.0.1``: The name the tag of the Docker image to use." msgstr "" #: ../../source/how-to-run-flower-using-docker.rst -msgid "API to connect to. Remember to update it with your SuperLink IP." +msgid "" +"``--server 192.168.1.100:9091``: This option specifies the address of the" +" SuperLinks Driver" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:226 +#: ../../source/how-to-run-flower-using-docker.rst:363 msgid "" -"Any argument that comes after the tag is passed to the Flower SuperNode " -"binary. To see all available flags that the SuperNode supports, run:" +"To test running Flower locally, you can create a `bridge network " +"`__, use the ``--network`` argument and pass the " +"name of the Docker network to run your ServerApps." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:367 +msgid "" +"Any argument that comes after the tag is passed to the Flower ServerApp " +"binary. To see all available flags that the ServerApp supports, run:" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:236 +#: ../../source/how-to-run-flower-using-docker.rst:377 msgid "" "To enable SSL, we will need to mount a PEM-encoded root certificate into " -"your SuperNode container." +"your ServerApp container." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:238 +#: ../../source/how-to-run-flower-using-docker.rst:379 msgid "" "Assuming the certificate already exists locally, we can use the flag " -"``-v`` to mount the local certificate into the container's ``/app/`` " -"directory. This allows the SuperNode to access the certificate within the" -" container. Use the ``--certificates`` flag when starting the container." +"``--volume`` to mount the local certificate into the container's " +"``/app/`` directory. This allows the ServerApp to access the certificate " +"within the container. Use the ``--certificates`` flag when starting the " +"container." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:249 +#: ../../source/how-to-run-flower-using-docker.rst:390 msgid "Advanced Docker options" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:252 +#: ../../source/how-to-run-flower-using-docker.rst:393 msgid "Using a different Flower version" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:254 +#: ../../source/how-to-run-flower-using-docker.rst:395 msgid "" "If you want to use a different version of Flower, for example Flower " "nightly, you can do so by changing the tag. All available versions are on" -" `Docker Hub `_." +" `Docker Hub `__." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:259 +#: ../../source/how-to-run-flower-using-docker.rst:400 msgid "Pinning a Docker image to a specific version" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:261 +#: ../../source/how-to-run-flower-using-docker.rst:402 msgid "" "It may happen that we update the images behind the tags. Such updates " "usually include security updates of system dependencies that should not " @@ -5130,21 +5131,21 @@ msgid "" "instead of the tag." 
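The ServerApp image follows the same pattern; a sketch using the names from the guide (``Dockerfile.serverapp``, ``flwr_serverapp:0.0.1``, and the example Driver API address). For purely local tests, a user-defined bridge network created with ``docker network create`` can be passed to all containers via ``--network``, as noted in the guide:

.. code-block:: bash

    # Build the ServerApp image from Dockerfile.serverapp
    docker build -f Dockerfile.serverapp -t flwr_serverapp:0.0.1 .

    # Run the ServerApp against the SuperLink's Driver API (adjust the address)
    docker run --rm flwr_serverapp:0.0.1 \
        --insecure \
        --server 192.168.1.100:9091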
msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:266 +#: ../../source/how-to-run-flower-using-docker.rst:407 msgid "" "The following command returns the current image hash referenced by the " "``superlink:1.8.0`` tag:" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:273 +#: ../../source/how-to-run-flower-using-docker.rst:414 msgid "Next, we can pin the hash when running a new SuperLink container:" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:282 +#: ../../source/how-to-run-flower-using-docker.rst:423 msgid "Setting environment variables" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:284 +#: ../../source/how-to-run-flower-using-docker.rst:425 msgid "" "To set a variable inside a Docker container, you can use the ``-e " "=`` flag." @@ -5617,7 +5618,7 @@ msgid "" msgstr "" #: ../../source/how-to-upgrade-to-flower-1.0.rst:8 -#: ../../source/how-to-upgrade-to-flower-next.rst:42 +#: ../../source/how-to-upgrade-to-flower-next.rst:43 msgid "Install update" msgstr "" @@ -5661,7 +5662,7 @@ msgid "" msgstr "" #: ../../source/how-to-upgrade-to-flower-1.0.rst:24 -#: ../../source/how-to-upgrade-to-flower-next.rst:99 +#: ../../source/how-to-upgrade-to-flower-next.rst:100 msgid "Required changes" msgstr "" @@ -5865,7 +5866,7 @@ msgid "" msgstr "" #: ../../source/how-to-upgrade-to-flower-1.0.rst:89 -#: ../../source/how-to-upgrade-to-flower-next.rst:316 +#: ../../source/how-to-upgrade-to-flower-next.rst:317 msgid "Further help" msgstr "" @@ -5903,44 +5904,44 @@ msgstr "" msgid "Let's dive in!" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:47 +#: ../../source/how-to-upgrade-to-flower-next.rst:48 msgid "" "Here's how to update an existing installation of Flower to Flower Next " "with ``pip``:" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:53 +#: ../../source/how-to-upgrade-to-flower-next.rst:54 msgid "or if you need Flower Next with simulation:" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:60 +#: ../../source/how-to-upgrade-to-flower-next.rst:61 msgid "" "Ensure you set the following version constraint in your " "``requirements.txt``" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:70 +#: ../../source/how-to-upgrade-to-flower-next.rst:71 msgid "or ``pyproject.toml``:" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:81 +#: ../../source/how-to-upgrade-to-flower-next.rst:82 msgid "Using Poetry" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:83 +#: ../../source/how-to-upgrade-to-flower-next.rst:84 msgid "" "Update the ``flwr`` dependency in ``pyproject.toml`` and then reinstall " "(don't forget to delete ``poetry.lock`` via ``rm poetry.lock`` before " "running ``poetry install``)." msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:85 +#: ../../source/how-to-upgrade-to-flower-next.rst:86 msgid "" "Ensure you set the following version constraint in your " "``pyproject.toml``:" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:101 +#: ../../source/how-to-upgrade-to-flower-next.rst:102 msgid "" "In Flower Next, the *infrastructure* and *application layers* have been " "decoupled. 
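A sketch of the two techniques above: resolving the digest behind a tag and pinning it, and passing an environment variable with ``-e <name>=<value>``. The ``docker inspect`` format string is one common way to read the digest, and ``FLWR_TELEMETRY_ENABLED`` is used here only as an example variable:

.. code-block:: bash

    # Look up the digest currently referenced by the tag (the image must be pulled first)
    docker pull flwr/superlink:1.8.0
    docker inspect --format='{{index .RepoDigests 0}}' flwr/superlink:1.8.0

    # Pin the container to that digest instead of the mutable tag
    docker run --rm flwr/superlink@sha256:<hash> --insecure

    # Set an environment variable inside the container
    docker run --rm -e FLWR_TELEMETRY_ENABLED=0 flwr/superlink:1.8.0 --insecure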
Instead of starting a client in code via ``start_client()``, " @@ -5953,31 +5954,31 @@ msgid "" "way:" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:108 +#: ../../source/how-to-upgrade-to-flower-next.rst:109 msgid "|clientapp_link|_" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:109 +#: ../../source/how-to-upgrade-to-flower-next.rst:110 msgid "" "Wrap your existing client with |clientapp_link|_ instead of launching it " "via |startclient_link|_. Here's an example:" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:131 +#: ../../source/how-to-upgrade-to-flower-next.rst:132 msgid "|serverapp_link|_" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:132 +#: ../../source/how-to-upgrade-to-flower-next.rst:133 msgid "" "Wrap your existing strategy with |serverapp_link|_ instead of starting " "the server via |startserver_link|_. Here's an example:" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:153 +#: ../../source/how-to-upgrade-to-flower-next.rst:154 msgid "Deployment" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:154 +#: ../../source/how-to-upgrade-to-flower-next.rst:155 msgid "" "Run the ``SuperLink`` using |flowernext_superlink_link|_ before running, " "in sequence, |flowernext_clientapp_link|_ (2x) and " @@ -5985,56 +5986,56 @@ msgid "" " `server.py` as Python scripts." msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:157 +#: ../../source/how-to-upgrade-to-flower-next.rst:158 msgid "" "Here's an example to start the server without HTTPS (only for " "prototyping):" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:173 +#: ../../source/how-to-upgrade-to-flower-next.rst:174 msgid "" "Here's another example to start with HTTPS. Use the ``--certificates`` " "command line argument to pass paths to (CA certificate, server " "certificate, and server private key)." msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:200 +#: ../../source/how-to-upgrade-to-flower-next.rst:201 msgid "Simulation in CLI" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:201 +#: ../../source/how-to-upgrade-to-flower-next.rst:202 msgid "" "Wrap your existing client and strategy with |clientapp_link|_ and " "|serverapp_link|_, respectively. There is no need to use |startsim_link|_" " anymore. Here's an example:" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:231 +#: ../../source/how-to-upgrade-to-flower-next.rst:232 msgid "" -"Run |runsimcli_link|_ in CLI and point to the ``server_app`` / " +"Run |flower_simulation_link|_ in CLI and point to the ``server_app`` / " "``client_app`` object in the code instead of executing the Python script." " Here's an example (assuming the ``server_app`` and ``client_app`` " "objects are in a ``sim.py`` module):" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:248 +#: ../../source/how-to-upgrade-to-flower-next.rst:249 msgid "" "Set default resources for each |clientapp_link|_ using the ``--backend-" "config`` command line argument instead of setting the " "``client_resources`` argument in |startsim_link|_. Here's an example:" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:274 +#: ../../source/how-to-upgrade-to-flower-next.rst:275 msgid "Simulation in a Notebook" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:275 +#: ../../source/how-to-upgrade-to-flower-next.rst:276 msgid "" "Run |runsim_link|_ in your notebook instead of |startsim_link|_. 
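For the CLI simulation workflow above, a minimal sketch, assuming ``server_app`` and ``client_app`` live in a ``sim.py`` module as in the guide. The flag spellings are assumptions based on the 1.8/1.9 CLI and should be checked against ``flower-simulation --help``; the resource values are arbitrary:

.. code-block:: bash

    flower-simulation \
        --server-app sim:server_app \
        --client-app sim:client_app \
        --num-supernodes 10 \
        --backend-config '{"client_resources": {"num_cpus": 2, "num_gpus": 0.0}}'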
Here's " "an example:" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:318 +#: ../../source/how-to-upgrade-to-flower-next.rst:319 msgid "" "Some official `Flower code examples `_ " "are already updated to Flower Next so they can serve as a reference for " @@ -6045,18 +6046,18 @@ msgid "" " or share and learn from others about migrating to Flower Next." msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:324 +#: ../../source/how-to-upgrade-to-flower-next.rst:325 msgid "Important" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:327 +#: ../../source/how-to-upgrade-to-flower-next.rst:328 msgid "" "As we continuously enhance Flower Next at a rapid pace, we'll be " "periodically updating this guide. Please feel free to share any feedback " "with us!" msgstr "" -#: ../../source/how-to-upgrade-to-flower-next.rst:333 +#: ../../source/how-to-upgrade-to-flower-next.rst:334 msgid "Happy migrating! 🚀" msgstr "" @@ -10212,10 +10213,6 @@ msgstr "" msgid "run\\_superlink" msgstr "" -#: ../../source/ref-api/flwr.server.start_driver.rst:2 -msgid "start\\_driver" -msgstr "" - #: ../../source/ref-api/flwr.server.start_server.rst:2 msgid "start\\_server" msgstr "" @@ -13308,10 +13305,6 @@ msgid "" " enabled, DEBUG-level logs will be displayed." msgstr "" -#: ../../source/ref-api/flwr.simulation.run_simulation_from_cli.rst:2 -msgid "run\\_simulation\\_from\\_cli" -msgstr "" - #: ../../source/ref-api/flwr.simulation.start_simulation.rst:2 msgid "start\\_simulation" msgstr "" @@ -19595,7 +19588,7 @@ msgid "" "We now have a list of ten training sets and ten validation sets " "(``trainloaders`` and ``valloaders``) representing the data of ten " "different organizations. Each ``trainloader``/``valloader`` pair contains" -" 4500 training examples and 500 validation examples. There's also a " +" 4000 training examples and 1000 validation examples. There's also a " "single ``testloader`` (we did not split the test set). 
Again, this is " "only necessary for building research or educational systems, actual " "federated learning systems have their data naturally distributed across " @@ -20220,7 +20213,7 @@ msgid "" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:41 -msgid "|191c6b8b5e1d46f99de4872746afa8af|" +msgid "|d8bf04f23d9b46d8a23cc6f4887d7873|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:109 @@ -20235,7 +20228,7 @@ msgid "" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:53 -msgid "|21b83f3feb024a049617190555a13549|" +msgid "|5aa1711387d74d0f8b9c499e1a51627e|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:111 @@ -20256,7 +20249,7 @@ msgid "" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:67 -msgid "|0dd15b4df7e3422f88aaf74cb401bfa7|" +msgid "|2bc8e069228d4873804061ff4a95048c|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:113 @@ -20272,7 +20265,7 @@ msgid "" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:79 -msgid "|60e16f6be7354ca793444e01aa7adf25|" +msgid "|c258488766324dc9a6807f0e7c4fd5f4|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:115 @@ -20288,7 +20281,7 @@ msgid "" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:91 -msgid "|a7032acbd65948a8beef8bccbbb9b83a|" +msgid "|d5f962c3f4ec48529efda980868c14b0|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:117 @@ -20303,7 +20296,7 @@ msgid "" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:103 -msgid "|dd0e05706e584ee29e07cd39e6af5498|" +msgid "|a5eccea18d4c43a68b54b65043cabef8|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:119 @@ -20323,7 +20316,7 @@ msgid "" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:138 -msgid "|2a2031018a1c4f81a69ea16df4947bd0|" +msgid "|f17662f7df2d42f68cac70a1fdeda8a7|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:173 @@ -20338,7 +20331,7 @@ msgid "" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:150 -msgid "|5e841497933340d3b5c2efbf37e3e6a6|" +msgid "|241fc906441a4f038c625a19d30d01b2|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:175 @@ -20478,7 +20471,7 @@ msgid "" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:210 -msgid "|19687aecbc3a485da999b66fe2051005|" +msgid "|0aa5aa05810b44b6a835cecce28f3137|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:307 @@ -20502,7 +20495,7 @@ msgid "" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:225 -msgid "|32ef0bbade4d4500b7be97cf62405661|" +msgid "|c742940dd4bf4de09d8d0d5e8d179638|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:309 @@ -20526,7 +20519,7 @@ msgid "" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:240 -msgid "|9d57ed324b304a698263f5a983a56a6b|" +msgid "|1f169ab4601a47e1a226f1628f4ebddb|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:311 @@ -20549,7 +20542,7 @@ msgid "" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:255 -msgid "|d41510e6781c4bf18c234c6bfb8d4937|" +msgid "|12cfa9cde14440ecb8c8f6c1d7185bec|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:313 @@ -20587,7 +20580,7 @@ msgid "" msgstr "" #: 
../../source/tutorial-series-what-is-federated-learning.ipynb:273 -msgid "|a0198a7ebbfb4b9289e7312711cbc967|" +msgid "|72939caf6e294b0986fee6dde96614d7|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:315 @@ -20681,7 +20674,7 @@ msgid "" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:334 -msgid "|2c13f726c8c843fc8aae997bf906125b|" +msgid "|83a8daee45da4a98b8d6f24ae098fc50|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:340 @@ -20709,3 +20702,526 @@ msgid "" "pytorch.html>`__ shows how to build a simple federated learning system " "with PyTorch and Flower." msgstr "" + +#~ msgid "" +#~ "If you want to use your own " +#~ "base image instead of the official " +#~ "Flower base image, all you need to" +#~ " do is set the ``BASE_REPOSITORY``, " +#~ "``PYTHON_VERSION`` and ``UBUNTU_VERSION`` build " +#~ "arguments. .. code-block:: bash" +#~ msgstr "" + +#~ msgid "$ cd src/docker/superlink/ $ docker build \\" +#~ msgstr "" + +#~ msgid "" +#~ "--build-arg BASE_REPOSITORY=flwr_base \\ " +#~ "--build-arg PYTHON_VERSION=3.11 \\ --build-" +#~ "arg UBUNTU_VERSION=ubuntu22.04 \\ --build-arg" +#~ " FLWR_VERSION=1.8.0 \\ -t flwr_superlink:0.1.0" +#~ " ." +#~ msgstr "" + +#~ msgid "" +#~ "It is important to follow the " +#~ "instructions described in comments. For " +#~ "instance, in order to not break " +#~ "how our changelog system works, you " +#~ "should read the information above the" +#~ " ``Changelog entry`` section carefully. You" +#~ " can also checkout some examples and" +#~ " details in the :ref:`changelogentry` " +#~ "appendix." +#~ msgstr "" + +#~ msgid "Open a PR (as shown above)" +#~ msgstr "" + +#~ msgid "How to write a good PR title" +#~ msgstr "" + +#~ msgid "" +#~ "A well-crafted PR title helps team" +#~ " members quickly understand the purpose " +#~ "and scope of the changes being " +#~ "proposed. Here's a guide to help " +#~ "you write a good GitHub PR title:" +#~ msgstr "" + +#~ msgid "" +#~ "1. Be Clear and Concise: Provide a" +#~ " clear summary of the changes in " +#~ "a concise manner. 1. Use Actionable " +#~ "Verbs: Start with verbs like \"Add,\"" +#~ " \"Update,\" or \"Fix\" to indicate " +#~ "the purpose. 1. Include Relevant " +#~ "Information: Mention the affected feature " +#~ "or module for context. 1. Keep it" +#~ " Short: Avoid lengthy titles for easy" +#~ " readability. 1. Use Proper Capitalization" +#~ " and Punctuation: Follow grammar rules " +#~ "for clarity." 
+#~ msgstr "" + +#~ msgid "" +#~ "Let's start with a few examples " +#~ "for titles that should be avoided " +#~ "because they do not provide meaningful" +#~ " information:" +#~ msgstr "" + +#~ msgid "Implement Algorithm" +#~ msgstr "" + +#~ msgid "Database" +#~ msgstr "" + +#~ msgid "Add my_new_file.py to codebase" +#~ msgstr "" + +#~ msgid "Improve code in module" +#~ msgstr "" + +#~ msgid "Change SomeModule" +#~ msgstr "" + +#~ msgid "" +#~ "Here are a few positive examples " +#~ "which provide helpful information without " +#~ "repeating how they do it, as that" +#~ " is already visible in the \"Files" +#~ " changed\" section of the PR:" +#~ msgstr "" + +#~ msgid "Update docs banner to mention Flower Summit 2023" +#~ msgstr "" + +#~ msgid "Remove unnecessary XGBoost dependency" +#~ msgstr "" + +#~ msgid "Remove redundant attributes in strategies subclassing FedAvg" +#~ msgstr "" + +#~ msgid "" +#~ "Add CI job to deploy the staging" +#~ " system when the ``main`` branch " +#~ "changes" +#~ msgstr "" + +#~ msgid "" +#~ "Add new amazing library which will " +#~ "be used to improve the simulation " +#~ "engine" +#~ msgstr "" + +#~ msgid "Changelog entry" +#~ msgstr "" + +#~ msgid "" +#~ "When opening a new PR, inside its" +#~ " description, there should be a " +#~ "``Changelog entry`` header." +#~ msgstr "" + +#~ msgid "" +#~ "Above this header you should see " +#~ "the following comment that explains how" +#~ " to write your changelog entry:" +#~ msgstr "" + +#~ msgid "" +#~ "Inside the following 'Changelog entry' " +#~ "section, you should put the description" +#~ " of your changes that will be " +#~ "added to the changelog alongside your" +#~ " PR title." +#~ msgstr "" + +#~ msgid "" +#~ "If the section is completely empty " +#~ "(without any token) or non-existent, " +#~ "the changelog will just contain the " +#~ "title of the PR for the changelog" +#~ " entry, without any description." +#~ msgstr "" + +#~ msgid "" +#~ "If the section contains some text " +#~ "other than tokens, it will use it" +#~ " to add a description to the " +#~ "change." +#~ msgstr "" + +#~ msgid "" +#~ "If the section contains one of the" +#~ " following tokens it will ignore any" +#~ " other text and put the PR " +#~ "under the corresponding section of the" +#~ " changelog:" +#~ msgstr "" + +#~ msgid " is for classifying a PR as a general improvement." +#~ msgstr "" + +#~ msgid " is to not add the PR to the changelog" +#~ msgstr "" + +#~ msgid " is to add a general baselines change to the PR" +#~ msgstr "" + +#~ msgid " is to add a general examples change to the PR" +#~ msgstr "" + +#~ msgid " is to add a general sdk change to the PR" +#~ msgstr "" + +#~ msgid " is to add a general simulations change to the PR" +#~ msgstr "" + +#~ msgid "Note that only one token should be used." +#~ msgstr "" + +#~ msgid "" +#~ "Its content must have a specific " +#~ "format. We will break down what " +#~ "each possibility does:" +#~ msgstr "" + +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains nothing or doesn't exist, " +#~ "the following text will be added " +#~ "to the changelog::" +#~ msgstr "" + +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains a description (and no " +#~ "token), the following text will be " +#~ "added to the changelog::" +#~ msgstr "" + +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, nothing will change" +#~ " in the changelog." 
+#~ msgstr "" + +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following text" +#~ " will be added to the changelog::" +#~ msgstr "" + +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following " +#~ "text will be added to the " +#~ "changelog::" +#~ msgstr "" + +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following " +#~ "text will be added to the " +#~ "changelog::" +#~ msgstr "" + +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following text " +#~ "will be added to the changelog::" +#~ msgstr "" + +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following " +#~ "text will be added to the " +#~ "changelog::" +#~ msgstr "" + +#~ msgid "" +#~ "Note that only one token must be" +#~ " provided, otherwise, only the first " +#~ "action (in the order listed above), " +#~ "will be performed." +#~ msgstr "" + +#~ msgid "" +#~ "We recommend you to check out the" +#~ " complete `code example " +#~ "`_ demonstrating federated " +#~ "learning with Flower in an authenticated" +#~ " setting." +#~ msgstr "" + +#~ msgid "Let's break down the :code:`--require-client-authentication` flag:" +#~ msgstr "" + +#~ msgid "" +#~ "The first argument is a path to" +#~ " a CSV file storing all known " +#~ "node public keys. You need to " +#~ "store all known node public keys " +#~ "that are allowed to participate in " +#~ "a federation in one CSV file " +#~ "(:code:`.csv`)." +#~ msgstr "" + +#~ msgid "" +#~ "The second and third arguments are " +#~ "paths to the server's private and " +#~ "public keys. For development purposes, " +#~ "you can generate a private and " +#~ "public key pair using :code:`ssh-keygen" +#~ " -t ecdsa -b 384`." +#~ msgstr "" + +#~ msgid "" +#~ "The :code:`--authentication-keys` flag expects" +#~ " two arguments: a path to the " +#~ "node's private key file and a path" +#~ " to the node's public key file. " +#~ "For development purposes, you can " +#~ "generate a private and public key " +#~ "pair using :code:`ssh-keygen -t ecdsa" +#~ " -b 384`." +#~ msgstr "" + +#~ msgid "" +#~ "The simplest way to get started " +#~ "with Flower is by using the " +#~ "pre-made Docker images, which you can" +#~ " find on `Docker Hub " +#~ "`_." +#~ msgstr "" + +#~ msgid "" +#~ "The ``--insecure`` flag enables insecure " +#~ "communication (using HTTP, not HTTPS) " +#~ "and should only be used for " +#~ "testing purposes. We strongly recommend " +#~ "enabling `SSL `_ when " +#~ "deploying to a production environment." +#~ msgstr "" + +#~ msgid "" +#~ "If you want to persist the state" +#~ " of the SuperLink on your host " +#~ "system, all you need to do is " +#~ "specify a path where you want to" +#~ " save the file on your host " +#~ "system and a name for the database" +#~ " file. In the example below, we " +#~ "tell Docker via the flag ``-v`` to" +#~ " mount the user's home directory " +#~ "(``~/`` on your host) into the " +#~ "``/app/`` directory of the container. " +#~ "Furthermore, we use the flag " +#~ "``--database`` to specify the name of" +#~ " the database file." +#~ msgstr "" + +#~ msgid "" +#~ "For testing purposes, you can generate" +#~ " your own self-signed certificates. " +#~ "The `Enable SSL connections " +#~ "`_ page contains " +#~ "a section that will guide you " +#~ "through the process." 
+#~ msgstr "" + +#~ msgid "" +#~ "Assuming all files we need are in" +#~ " the local ``certificates`` directory, we" +#~ " can use the flag ``-v`` to " +#~ "mount the local directory into the " +#~ "``/app/`` directory of the container. " +#~ "This allows the SuperLink to access " +#~ "the files within the container. Finally," +#~ " we pass the names of the " +#~ "certificates to the SuperLink with the" +#~ " ``--certificates`` flag." +#~ msgstr "" + +#~ msgid "" +#~ "The SuperNode Docker image currently " +#~ "works only with the 1.9.0-nightly " +#~ "release. A stable version will be " +#~ "available when Flower 1.9.0 (stable) " +#~ "gets released (ETA: May). A SuperNode" +#~ " nightly image must be paired with" +#~ " the corresponding SuperLink nightly image" +#~ " released on the same day. To " +#~ "ensure the versions are in sync, " +#~ "using the concrete tag, e.g., " +#~ "``1.9.0.dev20240501`` instead of ``nightly`` " +#~ "is recommended." +#~ msgstr "" + +#~ msgid "" +#~ "We will use the ``app-pytorch`` " +#~ "example, which you can find in the" +#~ " Flower repository, to illustrate how " +#~ "you can dockerize your client-app." +#~ msgstr "" + +#~ msgid "" +#~ "Before we can start, we need to" +#~ " meet a few prerequisites in our " +#~ "local development environment. You can " +#~ "skip the first part if you want" +#~ " to run your client-app instead " +#~ "of the ``app-pytorch`` example." +#~ msgstr "" + +#~ msgid "" +#~ "First, we need to create a " +#~ "Dockerfile in the directory where the" +#~ " ``ClientApp`` code is located. If " +#~ "you use the ``app-pytorch`` example, " +#~ "create a new file called ``Dockerfile``" +#~ " in ``examples/app-pytorch``." +#~ msgstr "" + +#~ msgid "" +#~ "The ``Dockerfile`` contains the instructions" +#~ " that assemble the SuperNode image." +#~ msgstr "" + +#~ msgid "" +#~ "In the first two lines, we " +#~ "instruct Docker to use the SuperNode " +#~ "image tagged ``nightly`` as a base " +#~ "image and set our working directory " +#~ "to ``/app``. The following instructions " +#~ "will now be executed in the " +#~ "``/app`` directory. Next, we install the" +#~ " ``ClientApp`` dependencies by copying the" +#~ " ``requirements.txt`` file into the image" +#~ " and run ``pip install``. In the " +#~ "last two lines, we copy the " +#~ "``ClientApp`` code (``client.py`` and " +#~ "``task.py``) into the image and set " +#~ "the entry point to ``flower-client-" +#~ "app``." +#~ msgstr "" + +#~ msgid "" +#~ "Next, we build the SuperNode Docker " +#~ "image by running the following command" +#~ " in the directory where Dockerfile " +#~ "and client-app code are located." +#~ msgstr "" + +#~ msgid "" +#~ "``client:app``: The object reference of " +#~ "the ``ClientApp`` (``:``)." +#~ msgstr "" + +#~ msgid "" +#~ "It points to the ``ClientApp`` that " +#~ "will be run inside the SuperNode " +#~ "container." +#~ msgstr "" + +#~ msgid "" +#~ "Assuming the certificate already exists " +#~ "locally, we can use the flag " +#~ "``-v`` to mount the local certificate" +#~ " into the container's ``/app/`` directory." +#~ " This allows the SuperNode to access" +#~ " the certificate within the container. " +#~ "Use the ``--certificates`` flag when " +#~ "starting the container." +#~ msgstr "" + +#~ msgid "" +#~ "If you want to use a different " +#~ "version of Flower, for example Flower" +#~ " nightly, you can do so by " +#~ "changing the tag. All available versions" +#~ " are on `Docker Hub " +#~ "`_." 
+#~ msgstr "" + +#~ msgid "" +#~ "Run |runsimcli_link|_ in CLI and point" +#~ " to the ``server_app`` / ``client_app`` " +#~ "object in the code instead of " +#~ "executing the Python script. Here's an" +#~ " example (assuming the ``server_app`` and" +#~ " ``client_app`` objects are in a " +#~ "``sim.py`` module):" +#~ msgstr "" + +#~ msgid "start\\_driver" +#~ msgstr "" + +#~ msgid "run\\_simulation\\_from\\_cli" +#~ msgstr "" + +#~ msgid "" +#~ "We now have a list of ten " +#~ "training sets and ten validation sets" +#~ " (``trainloaders`` and ``valloaders``) " +#~ "representing the data of ten different" +#~ " organizations. Each ``trainloader``/``valloader`` " +#~ "pair contains 4500 training examples and" +#~ " 500 validation examples. There's also " +#~ "a single ``testloader`` (we did not " +#~ "split the test set). Again, this " +#~ "is only necessary for building research" +#~ " or educational systems, actual federated" +#~ " learning systems have their data " +#~ "naturally distributed across multiple " +#~ "partitions." +#~ msgstr "" + +#~ msgid "|191c6b8b5e1d46f99de4872746afa8af|" +#~ msgstr "" + +#~ msgid "|21b83f3feb024a049617190555a13549|" +#~ msgstr "" + +#~ msgid "|0dd15b4df7e3422f88aaf74cb401bfa7|" +#~ msgstr "" + +#~ msgid "|60e16f6be7354ca793444e01aa7adf25|" +#~ msgstr "" + +#~ msgid "|a7032acbd65948a8beef8bccbbb9b83a|" +#~ msgstr "" + +#~ msgid "|dd0e05706e584ee29e07cd39e6af5498|" +#~ msgstr "" + +#~ msgid "|2a2031018a1c4f81a69ea16df4947bd0|" +#~ msgstr "" + +#~ msgid "|5e841497933340d3b5c2efbf37e3e6a6|" +#~ msgstr "" + +#~ msgid "|19687aecbc3a485da999b66fe2051005|" +#~ msgstr "" + +#~ msgid "|32ef0bbade4d4500b7be97cf62405661|" +#~ msgstr "" + +#~ msgid "|9d57ed324b304a698263f5a983a56a6b|" +#~ msgstr "" + +#~ msgid "|d41510e6781c4bf18c234c6bfb8d4937|" +#~ msgstr "" + +#~ msgid "|a0198a7ebbfb4b9289e7312711cbc967|" +#~ msgstr "" + +#~ msgid "|2c13f726c8c843fc8aae997bf906125b|" +#~ msgstr "" + diff --git a/doc/locales/ko/LC_MESSAGES/sphinx.po b/doc/locales/ko/LC_MESSAGES/sphinx.po index bcfa940edea..fbb6c75085a 100644 --- a/doc/locales/ko/LC_MESSAGES/sphinx.po +++ b/doc/locales/ko/LC_MESSAGES/sphinx.po @@ -8,7 +8,7 @@ msgid "" msgstr "" "Project-Id-Version: Flower main\n" "Report-Msgid-Bugs-To: \n" -"POT-Creation-Date: 2024-05-13 09:48+0200\n" +"POT-Creation-Date: 2024-05-28 11:47+0200\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language: ko\n" @@ -17,7 +17,7 @@ msgstr "" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" -"Generated-By: Babel 2.14.0\n" +"Generated-By: Babel 2.15.0\n" #: ../../source/_templates/base.html:18 msgid "About these documents" @@ -28,6 +28,7 @@ msgid "Index" msgstr "" #: ../../source/_templates/base.html:24 +#: ../../source/_templates/sidebar/search.html:2 msgid "Search" msgstr "" @@ -35,3 +36,7 @@ msgstr "" msgid "Copyright" msgstr "" +#: ../../source/_templates/sidebar/search.html:2 +msgid "Type / to search" +msgstr "" + diff --git a/doc/locales/pt_BR/LC_MESSAGES/framework-docs.po b/doc/locales/pt_BR/LC_MESSAGES/framework-docs.po index 75a49d95d40..f0127ad93ed 100644 --- a/doc/locales/pt_BR/LC_MESSAGES/framework-docs.po +++ b/doc/locales/pt_BR/LC_MESSAGES/framework-docs.po @@ -7,18 +7,17 @@ msgid "" msgstr "" "Project-Id-Version: Flower main\n" "Report-Msgid-Bugs-To: \n" -"POT-Creation-Date: 2024-03-15 14:23+0000\n" +"POT-Creation-Date: 2024-05-28 11:47+0200\n" "PO-Revision-Date: 2024-05-25 11:09+0000\n" "Last-Translator: Gustavo Bertoli \n" -"Language-Team: 
Portuguese (Brazil) \n" "Language: pt_BR\n" +"Language-Team: Portuguese (Brazil) \n" +"Plural-Forms: nplurals=2; plural=n > 1;\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" -"Plural-Forms: nplurals=2; plural=n > 1;\n" -"X-Generator: Weblate 5.6-dev\n" -"Generated-By: Babel 2.14.0\n" +"Generated-By: Babel 2.15.0\n" #: ../../source/contributor-explanation-architecture.rst:2 msgid "Flower Architecture" @@ -33,8 +32,8 @@ msgid "" "`Flower `_ core framework architecture with Edge " "Client Engine" msgstr "" -"`Flower `_ arquitetura principal do framework com Engine " -"do Edge Client" +"`Flower `_ arquitetura principal do framework com " +"Engine do Edge Client" #: ../../source/contributor-explanation-architecture.rst:13 msgid "Virtual Client Engine" @@ -45,8 +44,8 @@ msgid "" "`Flower `_ core framework architecture with Virtual " "Client Engine" msgstr "" -"`Flower `_ arquitetura principal do framework com Engine " -"do Virtual Client" +"`Flower `_ arquitetura principal do framework com " +"Engine do Virtual Client" #: ../../source/contributor-explanation-architecture.rst:21 msgid "Virtual Client Engine and Edge Client Engine in the same workload" @@ -59,28 +58,29 @@ msgid "" "`Flower `_ core framework architecture with both " "Virtual Client Engine and Edge Client Engine" msgstr "" -"`Flower `_ arquitetura principal do framework com ambas " -"engines do Virtual Client e do Edge Client" +"`Flower `_ arquitetura principal do framework com " +"ambas engines do Virtual Client e do Edge Client" #: ../../source/contributor-how-to-build-docker-images.rst:2 msgid "How to build Docker Flower images locally" msgstr "Como construir imagens Docker do Flower localmente" #: ../../source/contributor-how-to-build-docker-images.rst:4 +#, fuzzy msgid "" "Flower provides pre-made docker images on `Docker Hub " -"`_ that include all necessary " -"dependencies for running the server. You can also build your own custom " -"docker images from scratch with a different version of Python or Ubuntu " -"if that is what you need. In this guide, we will explain what images " -"exist and how to build them locally." -msgstr "" -"Flower disponibiliza imagens docker em `Docker Hub `_ que incluem todas as dependências necesárias para " -"executar o servidor. Você pode também compilar suas próprias imagens docker " -"customizadas do zero com uma versão diferente do Python ou do Ubuntu se isso " -"for o que você precisa. Neste guia, explicaremos quais imagens existem e " -"como compilar localmente." +"`_ that include all necessary dependencies" +" for running the SuperLink. You can also build your own custom docker " +"images from scratch with a different version of Python or Ubuntu if that " +"is what you need. In this guide, we will explain what images exist and " +"how to build them locally." +msgstr "" +"Flower disponibiliza imagens docker em `Docker Hub " +"`_ que incluem todas as " +"dependências necesárias para executar o servidor. Você pode também " +"compilar suas próprias imagens docker customizadas do zero com uma versão" +" diferente do Python ou do Ubuntu se isso for o que você precisa. Neste " +"guia, explicaremos quais imagens existem e como compilar localmente." #: ../../source/contributor-how-to-build-docker-images.rst:9 msgid "" @@ -95,10 +95,12 @@ msgid "Clone the flower repository." msgstr "Clone o repositório do flower." 
#: ../../source/contributor-how-to-build-docker-images.rst:17 +#: ../../source/how-to-run-flower-using-docker.rst:144 msgid "Verify the Docker daemon is running." msgstr "Verifique que o serviço Docker está rodando." #: ../../source/contributor-how-to-build-docker-images.rst:19 +#: ../../source/how-to-run-flower-using-docker.rst:146 msgid "" "Please follow the first section on :doc:`Run Flower using Docker ` which covers this step in more detail." @@ -107,20 +109,20 @@ msgstr "" "` que cobre este passo em mais detalhes." #: ../../source/contributor-how-to-build-docker-images.rst:23 +#, fuzzy msgid "" -"Currently, Flower provides two images, a base image and a server image. " -"There will also be a client image soon. The base image, as the name " -"suggests, contains basic dependencies that both the server and the client" -" need. This includes system dependencies, Python and Python tools. The " -"server image is based on the base image, but it additionally installs the" -" Flower server using ``pip``." +"Currently, Flower provides two images, a ``base`` image and a " +"``superlink`` image. The base image, as the name suggests, contains basic" +" dependencies that the SuperLink needs. This includes system " +"dependencies, Python and Python tools. The SuperLink image is based on " +"the base image, but it additionally installs the SuperLink using ``pip``." msgstr "" "Atualmente, Flower fornece duas imagens, uma imagem base e uma imagem de " -"servidor. Também haverá uma imagem de cliente em breve. A imagem base, como " -"o nome sugere, contém dependências básicas que tanto o servidor quanto o " -"cliente precisam. Isso inclui dependências do sistema, Python e ferramentas " -"Python. A imagem do servidor é baseada na imagem base, mas também instala o " -"servidor Flower usando ``pip```." +"servidor. Também haverá uma imagem de cliente em breve. A imagem base, " +"como o nome sugere, contém dependências básicas que tanto o servidor " +"quanto o cliente precisam. Isso inclui dependências do sistema, Python e " +"ferramentas Python. A imagem do servidor é baseada na imagem base, mas " +"também instala o servidor Flower usando ``pip```." #: ../../source/contributor-how-to-build-docker-images.rst:28 msgid "" @@ -129,27 +131,28 @@ msgid "" "``src/docker``." msgstr "" "As instruções de compilação que montam as imagens estão localizadas nos " -"respectivos Dockerfiles. Você pode encontrá-los nos subdiretórios ``src/" -"docker```." +"respectivos Dockerfiles. Você pode encontrá-los nos subdiretórios " +"``src/docker```." #: ../../source/contributor-how-to-build-docker-images.rst:31 +#, fuzzy msgid "" -"Both, base and server image are configured via build arguments. Through " -"build arguments, we can make our build more flexible. For example, in the" -" base image, we can specify the version of Python to install using the " -"``PYTHON_VERSION`` build argument. Some of the build arguments have " -"default values, others must be specified when building the image. All " -"available build arguments for each image are listed in one of the tables " -"below." -msgstr "" -"Ambas, imagens base e do servidor são configuradas através dos argumentos de " -"compilação. Através dos argumentos de compilação, podemos tornar nossa " -"compilação mais flexível. Por exemplo, na imagem base, podemos especificar a " -"versão do Python para instalar usando o argumento de compilação " -"`PYTHON_VERSION`. Alguns dos argumentos de compilação têm valores padrão, " -"outros devem ser especificados ao compilar a imagem. 
Todos os argumentos de " -"compilação disponíveis para cada imagem estão listados em uma das tabelas " -"abaixo." +"Both, base and SuperLink image are configured via build arguments. " +"Through build arguments, we can make our build more flexible. For " +"example, in the base image, we can specify the version of Python to " +"install using the ``PYTHON_VERSION`` build argument. Some of the build " +"arguments have default values, others must be specified when building the" +" image. All available build arguments for each image are listed in one of" +" the tables below." +msgstr "" +"Ambas, imagens base e do servidor são configuradas através dos argumentos" +" de compilação. Através dos argumentos de compilação, podemos tornar " +"nossa compilação mais flexível. Por exemplo, na imagem base, podemos " +"especificar a versão do Python para instalar usando o argumento de " +"compilação `PYTHON_VERSION`. Alguns dos argumentos de compilação têm " +"valores padrão, outros devem ser especificados ao compilar a imagem. " +"Todos os argumentos de compilação disponíveis para cada imagem estão " +"listados em uma das tabelas abaixo." #: ../../source/contributor-how-to-build-docker-images.rst:38 msgid "Building the base image" @@ -176,6 +179,7 @@ msgid "Example" msgstr "Exemplo" #: ../../source/contributor-how-to-build-docker-images.rst:48 +#: ../../source/contributor-how-to-build-docker-images.rst:94 msgid "``PYTHON_VERSION``" msgstr "``PYTHON_VERSION``" @@ -186,7 +190,7 @@ msgstr "Versão do ``python`` a ser instalada." #: ../../source/contributor-how-to-build-docker-images.rst:50 #: ../../source/contributor-how-to-build-docker-images.rst:54 #: ../../source/contributor-how-to-build-docker-images.rst:58 -#: ../../source/contributor-how-to-build-docker-images.rst:100 +#: ../../source/contributor-how-to-build-docker-images.rst:108 msgid "Yes" msgstr "Sim" @@ -219,6 +223,7 @@ msgid "``69.0.2``" msgstr "``69.0.2``" #: ../../source/contributor-how-to-build-docker-images.rst:60 +#: ../../source/contributor-how-to-build-docker-images.rst:98 msgid "``UBUNTU_VERSION``" msgstr "``UBUNTU_VERSION``" @@ -245,11 +250,12 @@ msgid "" "needs. These values serve as examples only." msgstr "" "O nome da imagem é ``flwr_base`` com a tag ``0.1.0``. Lembre-se que os " -"argumentos de construção assim como o nome e a tag podem ser adaptados de " -"acordo com suas necessidades. Estes valores servem apenas como exemplo." +"argumentos de construção assim como o nome e a tag podem ser adaptados de" +" acordo com suas necessidades. Estes valores servem apenas como exemplo." #: ../../source/contributor-how-to-build-docker-images.rst:80 -msgid "Building the server image" +#, fuzzy +msgid "Building the SuperLink image" msgstr "Construindo a imagem do servidor" #: ../../source/contributor-how-to-build-docker-images.rst:90 @@ -261,67 +267,92 @@ msgid "The repository name of the base image." msgstr "O nome do repositório da imagem base." #: ../../source/contributor-how-to-build-docker-images.rst:92 -msgid "Defaults to ``flwr/server``." +#, fuzzy +msgid "Defaults to ``flwr/base``." msgstr "Pré-definido para ``flwr/server``." -#: ../../source/contributor-how-to-build-docker-images.rst:94 -msgid "``BASE_IMAGE_TAG``" -msgstr "``BASE_IMAGE_TAG``" - #: ../../source/contributor-how-to-build-docker-images.rst:95 #, fuzzy -msgid "The image tag of the base image." -msgstr "A tag da imagem da imagem base." +msgid "The Python version of the base image." +msgstr "O nome do repositório da imagem base." 
#: ../../source/contributor-how-to-build-docker-images.rst:96 -msgid "Defaults to ``py3.11-ubuntu22.04``." +#, fuzzy +msgid "Defaults to ``py3.11``." +msgstr "Como padrão ``22.04``." + +#: ../../source/contributor-how-to-build-docker-images.rst:99 +#, fuzzy +msgid "The Ubuntu version of the base image." +msgstr "O nome do repositório da imagem base." + +#: ../../source/contributor-how-to-build-docker-images.rst:100 +#, fuzzy +msgid "Defaults to ``ubuntu22.04``." msgstr "Pré-definido para ``py3.11-ubuntu22.04``." -#: ../../source/contributor-how-to-build-docker-images.rst:98 +#: ../../source/contributor-how-to-build-docker-images.rst:102 +#, fuzzy +msgid "``FLWR_PACKAGE``" +msgstr "``FLWR_VERSION``" + +#: ../../source/contributor-how-to-build-docker-images.rst:103 +msgid "The PyPI package to install." +msgstr "" + +#: ../../source/contributor-how-to-build-docker-images.rst:104 +#, fuzzy +msgid "Defaults to ``flwr``." +msgstr "Pré-definido para ``flwr/server``." + +#: ../../source/contributor-how-to-build-docker-images.rst:106 msgid "``FLWR_VERSION``" msgstr "``FLWR_VERSION``" -#: ../../source/contributor-how-to-build-docker-images.rst:99 +#: ../../source/contributor-how-to-build-docker-images.rst:107 msgid "Version of Flower to be installed." msgstr "Versão do Flower a ser instalada." -#: ../../source/contributor-how-to-build-docker-images.rst:101 -msgid "``1.7.0``" +#: ../../source/contributor-how-to-build-docker-images.rst:109 +#, fuzzy +msgid "``1.8.0``" msgstr "``1.7.0``" -#: ../../source/contributor-how-to-build-docker-images.rst:103 +#: ../../source/contributor-how-to-build-docker-images.rst:112 +#, fuzzy msgid "" -"The following example creates a server image with the official Flower " -"base image py3.11-ubuntu22.04 and Flower 1.7.0:" +"The following example creates a SuperLink image with the official Flower " +"base image py3.11-ubuntu22.04 and Flower 1.8.0:" msgstr "" -"O exemplo a seguir cria uma imagem de servidor com a imagem base oficial do " -"Flower py3.11-ubuntu22.04 e Flower 1.7.0:" +"O exemplo a seguir cria uma imagem de servidor com a imagem base oficial " +"do Flower py3.11-ubuntu22.04 e Flower 1.7.0:" -#: ../../source/contributor-how-to-build-docker-images.rst:114 +#: ../../source/contributor-how-to-build-docker-images.rst:122 +#, fuzzy msgid "" -"The name of image is ``flwr_server`` and the tag ``0.1.0``. Remember that" -" the build arguments as well as the name and tag can be adapted to your " -"needs. These values serve as examples only." +"The name of image is ``flwr_superlink`` and the tag ``0.1.0``. Remember " +"that the build arguments as well as the name and tag can be adapted to " +"your needs. These values serve as examples only." msgstr "" "O nome da imagem é ``flwr_server`` e a tag ``0.1.0``. Lembre-se que os " "argumentos de compilação, bem como o nome e a tag podem ser adaptados às " "suas necessidades. Esses valores servem apenas como exemplos." -#: ../../source/contributor-how-to-build-docker-images.rst:117 +#: ../../source/contributor-how-to-build-docker-images.rst:125 +#, fuzzy msgid "" "If you want to use your own base image instead of the official Flower " -"base image, all you need to do is set the ``BASE_REPOSITORY`` and " -"``BASE_IMAGE_TAG`` build arguments. The value of ``BASE_REPOSITORY`` must" -" match the name of your image and the value of ``BASE_IMAGE_TAG`` must " -"match the tag of your image." +"base image, all you need to do is set the ``BASE_REPOSITORY``, " +"``PYTHON_VERSION`` and ``UBUNTU_VERSION`` build arguments." 
msgstr "" -"Se você quiser usar sua própria imagem base ao invés da imagem oficial base " -"do Flower, tudo que você precisa fazer é definir os argumentos " +"Se você quiser usar sua própria imagem base ao invés da imagem oficial " +"base do Flower, tudo que você precisa fazer é definir os argumentos " "``BASE_REPOSITORY`` e ``BASE_IMAGE_TAG`` como parte do comando de " -"compilação. O valor de ``BASE_REPOSITORY`` deve corresponder ao nome da sua " -"imagem e o valor de ``BASE_IMAGE_TAG`` deve corresponder à tag da sua imagem." +"compilação. O valor de ``BASE_REPOSITORY`` deve corresponder ao nome da " +"sua imagem e o valor de ``BASE_IMAGE_TAG`` deve corresponder à tag da sua" +" imagem." -#: ../../source/contributor-how-to-build-docker-images.rst:131 +#: ../../source/contributor-how-to-build-docker-images.rst:138 msgid "After creating the image, we can test whether the image is working:" msgstr "Depois de criar a imagem, podemos testar se a imagem está funcionando:" @@ -340,14 +371,15 @@ msgid "" "also be a great opportunity for those wanting to become open source " "contributors with little prerequisites." msgstr "" -"Desde o `Flower 1.5 `_ nós introduzimos traduções para nossas páginas de " -"documentação, mas, como você pode ter notado, as traduções são muitas vezes " -"imperfeitas. Se você fala línguas diferentes do inglês, você pode ser capaz " -"de nos ajudar neste esforço para tornar o aprendizado federado acessível a " -"tantas pessoas quanto possível, contribuindo para essas traduções! Isso " -"também pode ser uma grande oportunidade para aqueles que querem se tornar " -"contribuintes de código aberto com poucos pré-requisitos." +"Desde o `Flower 1.5 `_ nós introduzimos traduções para " +"nossas páginas de documentação, mas, como você pode ter notado, as " +"traduções são muitas vezes imperfeitas. Se você fala línguas diferentes " +"do inglês, você pode ser capaz de nos ajudar neste esforço para tornar o " +"aprendizado federado acessível a tantas pessoas quanto possível, " +"contribuindo para essas traduções! Isso também pode ser uma grande " +"oportunidade para aqueles que querem se tornar contribuintes de código " +"aberto com poucos pré-requisitos." #: ../../source/contributor-how-to-contribute-translations.rst:13 #, fuzzy @@ -373,10 +405,11 @@ msgid "" " profile settings can be found `here " "`_." msgstr "" -"A primeira coisa que você precisa fazer para contribuir é criar uma conta " -"Weblate gratuita nesta `página `_. Mais informações sobre as configurações de perfil podem ser encontradas " -"`aqui `_." +"A primeira coisa que você precisa fazer para contribuir é criar uma conta" +" Weblate gratuita nesta `página " +"`_. Mais informações sobre" +" as configurações de perfil podem ser encontradas `aqui " +"`_." #: ../../source/contributor-how-to-contribute-translations.rst:29 msgid "" @@ -386,17 +419,17 @@ msgid "" " that can be found on the website." msgstr "" "Uma vez que você esteja conectado ao Weblate, você pode navegar até o " -"projeto `Flower Framework `_. Aqui, você deve ver os diferentes idiomas existentes que " -"podem ser encontrados no site." +"projeto `Flower Framework `_. Aqui, você deve ver os diferentes idiomas existentes " +"que podem ser encontrados no site." 
#: ../../source/contributor-how-to-contribute-translations.rst:34 msgid "" "Once you have selected the language you want to contribute to, you should" " see a similar interface to this:" msgstr "" -"Uma vez que você tenha selecionado o idioma que deseja contribuir, você deve " -"ver uma interface semelhante a esta:" +"Uma vez que você tenha selecionado o idioma que deseja contribuir, você " +"deve ver uma interface semelhante a esta:" #: ../../source/contributor-how-to-contribute-translations.rst:39 msgid "" @@ -405,9 +438,10 @@ msgid "" "will automatically bring you to the translation interface for " "untranslated strings." msgstr "" -"A opção mais direta aqui é clicar no botão ``Translate`` no canto superior " -"direito (na seção ``Translation status``). Isso te levará automaticamente " -"para a interface de tradução de strings ainda não traduzidas." +"A opção mais direta aqui é clicar no botão ``Translate`` no canto " +"superior direito (na seção ``Translation status``). Isso te levará " +"automaticamente para a interface de tradução de strings ainda não " +"traduzidas." #: ../../source/contributor-how-to-contribute-translations.rst:43 #, fuzzy @@ -426,10 +460,10 @@ msgstr "" "Você insire sua tradução na caixa de texto no topo e depois, uma vez que " "você está satisfeito com ela, você pressiona ``Save and continue`` (para " "salvar a tradução e ir para a próxima string não traduzida), ``Save and " -"stay`` (para salvar a tradução e ficar na mesma página), ``Suggest`` (para " -"adicionar sua tradução para sugestões para outros usuários verem), ou " -"``Skip`` (para ir para a próxima string não traduzida sem salvar nada na " -"atual)." +"stay`` (para salvar a tradução e ficar na mesma página), ``Suggest`` " +"(para adicionar sua tradução para sugestões para outros usuários verem), " +"ou ``Skip`` (para ir para a próxima string não traduzida sem salvar nada " +"na atual)." #: ../../source/contributor-how-to-contribute-translations.rst:54 msgid "" @@ -442,8 +476,9 @@ msgstr "" "Para ajudar com as traduções, você pode ver na parte inferior o ``Nearby " "strings`` (strings próximas), o ``Comments`` (comentários de outros " "contribuidores), o ``Automatic suggestions`` (sugestões atuomáticas de " -"sistemas de tradução automática), as traduções em ``Other languages`` (" -"outras línguas), e o ``History`` (histórico) de traduções para esta string." +"sistemas de tradução automática), as traduções em ``Other languages`` " +"(outras línguas), e o ``History`` (histórico) de traduções para esta " +"string." #: ../../source/contributor-how-to-contribute-translations.rst:59 msgid "" @@ -451,9 +486,9 @@ msgid "" "click the link under ``Source string location`` in order to view the " "source of the doc file containing the string." msgstr "" -"À direita, sob a seção ``String information``, você também pode clicar no " -"link sob ``Source string location`` para visualizar a fonte do arquivo doc " -"que contém a string." +"À direita, sob a seção ``String information``, você também pode clicar no" +" link sob ``Source string location`` para visualizar a fonte do arquivo " +"doc que contém a string." #: ../../source/contributor-how-to-contribute-translations.rst:63 msgid "" @@ -461,9 +496,9 @@ msgid "" "this `in-depth guide " "`_." msgstr "" -"Para obter mais informações sobre como traduzir usando o Weblate, você pode " -"conferir este `guia detalhado `_." +"Para obter mais informações sobre como traduzir usando o Weblate, você " +"pode conferir este `guia detalhado " +"`_." 
#: ../../source/contributor-how-to-contribute-translations.rst:67 #, fuzzy @@ -476,9 +511,10 @@ msgid "" "either on `Slack `_, or by opening an issue" " on our `GitHub repo `_." msgstr "" -"Se você quiser adicionar uma nova língua, primeiro você terá que entrar em " -"contato conosco, no `Slack `_, ou abrindo uma " -"issue no nosso `repositório GitHub `_." +"Se você quiser adicionar uma nova língua, primeiro você terá que entrar " +"em contato conosco, no `Slack `_, ou " +"abrindo uma issue no nosso `repositório GitHub " +"`_." #: ../../source/contributor-how-to-create-new-messages.rst:2 #, fuzzy @@ -808,14 +844,14 @@ msgstr "" #: ../../source/contributor-how-to-install-development-versions.rst:51 msgid "" -"Open the notebook ``doc/source/tutorial-get-started-with-flower-" +"Open the notebook ``doc/source/tutorial-series-get-started-with-flower-" "pytorch.ipynb``:" msgstr "" #: ../../source/contributor-how-to-install-development-versions.rst:53 msgid "" "https://colab.research.google.com/github/adap/flower/blob/main/doc/source" -"/tutorial-get-started-with-flower-pytorch.ipynb" +"/tutorial-series-get-started-with-flower-pytorch.ipynb" msgstr "" #: ../../source/contributor-how-to-install-development-versions.rst:55 @@ -827,7 +863,7 @@ msgstr "" #: ../../source/contributor-how-to-install-development-versions.rst:57 msgid "" "https://colab.research.google.com/github/adap/flower/blob/branch-" -"name/doc/source/tutorial-get-started-with-flower-pytorch.ipynb" +"name/doc/source/tutorial-series-get-started-with-flower-pytorch.ipynb" msgstr "" #: ../../source/contributor-how-to-install-development-versions.rst:59 @@ -1047,10 +1083,18 @@ msgid "" msgstr "" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:14 -msgid "Virutualenv with Pyenv/Virtualenv" +msgid "" +"Due to a known incompatibility with `ray " +"`_, we currently recommend utilizing at " +"most `Python 3.11 `_ for running Flower " +"simulations." +msgstr "" + +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:19 +msgid "Virtualenv with Pyenv/Virtualenv" msgstr "" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:16 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:21 msgid "" "One of the recommended virtual environment is `pyenv " "`_/`virtualenv `_ for details." msgstr "" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:18 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:23 msgid "" "Once Pyenv is set up, you can use it to install `Python Version 3.10 " "`_ or above:" msgstr "" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:24 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:29 msgid "Create the virtualenv with:" msgstr "" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:31 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:36 msgid "Activate the virtualenv by running the following command:" msgstr "" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:39 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:44 msgid "Virtualenv with Poetry" msgstr "" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:41 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:46 msgid "" "The Flower examples are based on `Poetry `_ to manage dependencies. 
After installing Poetry you " "simply create a virtual environment with:" msgstr "" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:47 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:52 msgid "" "If you open a new terminal you can activate the previously created " "virtual environment with the following command:" msgstr "" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:55 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:60 msgid "Virtualenv with Anaconda" msgstr "" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:57 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:62 msgid "" "If you prefer to use Anaconda for your virtual environment then install " "and setup the `conda `_." @@ -1131,7 +1175,7 @@ msgid "" msgstr "" #: ../../source/contributor-how-to-write-documentation.rst:10 -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:142 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:169 msgid "" "Note that, in order to build the documentation locally (with ``poetry run" " make html``, like described below), `Pandoc " @@ -1571,7 +1615,7 @@ msgstr "" msgid "Creating and merging a pull request (PR)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:205 +#: ../../source/contributor-tutorial-contribute-on-github.rst:206 msgid "**Create the PR**" msgstr "" @@ -1604,178 +1648,180 @@ msgstr "" #: ../../source/contributor-tutorial-contribute-on-github.rst:193 msgid "" +"The title should be changed to adhere to the :ref:`pr_title_format` " +"guidelines, otherwise it won't be possible to merge the PR. So in this " +"case, a correct title might be ``docs(framework:skip) Fix typos``." +msgstr "" + +#: ../../source/contributor-tutorial-contribute-on-github.rst:196 +msgid "" "The input box in the middle is there for you to describe what your PR " "does and to link it to existing issues. We have placed comments (that " "won't be rendered once the PR is opened) to guide you through the " "process." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:196 -msgid "" -"It is important to follow the instructions described in comments. For " -"instance, in order to not break how our changelog system works, you " -"should read the information above the ``Changelog entry`` section " -"carefully. You can also checkout some examples and details in the " -":ref:`changelogentry` appendix." +#: ../../source/contributor-tutorial-contribute-on-github.rst:199 +msgid "It is important to follow the instructions described in comments." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:200 +#: ../../source/contributor-tutorial-contribute-on-github.rst:201 msgid "" "At the bottom you will find the button to open the PR. This will notify " "reviewers that a new PR has been opened and that they should look over it" " to merge or to request changes." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:203 +#: ../../source/contributor-tutorial-contribute-on-github.rst:204 msgid "" "If your PR is not yet ready for review, and you don't want to notify " "anyone, you have the option to create a draft pull request:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:208 +#: ../../source/contributor-tutorial-contribute-on-github.rst:209 msgid "**Making new changes**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:208 +#: ../../source/contributor-tutorial-contribute-on-github.rst:209 msgid "" "Once the PR has been opened (as draft or not), you can still push new " "commits to it the same way we did before, by making changes to the branch" " associated with the PR." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:230 +#: ../../source/contributor-tutorial-contribute-on-github.rst:231 msgid "**Review the PR**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:211 +#: ../../source/contributor-tutorial-contribute-on-github.rst:212 msgid "" "Once the PR has been opened or once the draft PR has been marked as " "ready, a review from code owners will be automatically requested:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:215 +#: ../../source/contributor-tutorial-contribute-on-github.rst:216 msgid "" "Code owners will then look into the code, ask questions, request changes " "or validate the PR." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:217 +#: ../../source/contributor-tutorial-contribute-on-github.rst:218 msgid "Merging will be blocked if there are ongoing requested changes." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:221 +#: ../../source/contributor-tutorial-contribute-on-github.rst:222 msgid "" "To resolve them, just push the necessary changes to the branch associated" " with the PR:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:225 +#: ../../source/contributor-tutorial-contribute-on-github.rst:226 msgid "And resolve the conversation:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:229 +#: ../../source/contributor-tutorial-contribute-on-github.rst:230 msgid "" "Once all the conversations have been resolved, you can re-request a " "review." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:250 +#: ../../source/contributor-tutorial-contribute-on-github.rst:251 msgid "**Once the PR is merged**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:233 +#: ../../source/contributor-tutorial-contribute-on-github.rst:234 msgid "" "If all the automatic tests have passed and reviewers have no more changes" " to request, they can approve the PR and merge it." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:237 +#: ../../source/contributor-tutorial-contribute-on-github.rst:238 msgid "" "Once it is merged, you can delete the branch on GitHub (a button should " "appear to do so) and also delete it locally by doing:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:244 +#: ../../source/contributor-tutorial-contribute-on-github.rst:245 msgid "Then you should update your forked repository by doing:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:253 +#: ../../source/contributor-tutorial-contribute-on-github.rst:254 msgid "Example of first contribution" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:256 +#: ../../source/contributor-tutorial-contribute-on-github.rst:257 msgid "Problem" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:258 +#: ../../source/contributor-tutorial-contribute-on-github.rst:259 msgid "" "For our documentation, we've started to use the `Diàtaxis framework " "`_." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:260 +#: ../../source/contributor-tutorial-contribute-on-github.rst:261 msgid "" "Our \"How to\" guides should have titles that continue the sentence \"How" " to …\", for example, \"How to upgrade to Flower 1.0\"." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:262 +#: ../../source/contributor-tutorial-contribute-on-github.rst:263 msgid "" "Most of our guides do not follow this new format yet, and changing their " "title is (unfortunately) more involved than one might think." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:264 +#: ../../source/contributor-tutorial-contribute-on-github.rst:265 msgid "" "This issue is about changing the title of a doc from present continuous " "to present simple." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:266 +#: ../../source/contributor-tutorial-contribute-on-github.rst:267 msgid "" "Let's take the example of \"Saving Progress\" which we changed to \"Save " "Progress\". Does this pass our check?" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:268 +#: ../../source/contributor-tutorial-contribute-on-github.rst:269 msgid "Before: \"How to saving progress\" ❌" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:270 +#: ../../source/contributor-tutorial-contribute-on-github.rst:271 msgid "After: \"How to save progress\" ✅" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:273 +#: ../../source/contributor-tutorial-contribute-on-github.rst:274 msgid "Solution" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:275 +#: ../../source/contributor-tutorial-contribute-on-github.rst:276 msgid "" "This is a tiny change, but it'll allow us to test your end-to-end setup. 
" "After cloning and setting up the Flower repo, here's what you should do:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:277 +#: ../../source/contributor-tutorial-contribute-on-github.rst:278 msgid "Find the source file in ``doc/source``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:278 +#: ../../source/contributor-tutorial-contribute-on-github.rst:279 msgid "" "Make the change in the ``.rst`` file (beware, the dashes under the title " "should be the same length as the title itself)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:279 +#: ../../source/contributor-tutorial-contribute-on-github.rst:280 msgid "" "Build the docs and `check the result `_" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:282 +#: ../../source/contributor-tutorial-contribute-on-github.rst:283 msgid "Rename file" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:284 +#: ../../source/contributor-tutorial-contribute-on-github.rst:285 msgid "" "You might have noticed that the file name still reflects the old wording." " If we just change the file, then we break all existing links to it - it " @@ -1783,142 +1829,68 @@ msgid "" "engine ranking." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:287 +#: ../../source/contributor-tutorial-contribute-on-github.rst:288 msgid "Here's how to change the file name:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:289 +#: ../../source/contributor-tutorial-contribute-on-github.rst:290 msgid "Change the file name to ``save-progress.rst``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:290 +#: ../../source/contributor-tutorial-contribute-on-github.rst:291 msgid "Add a redirect rule to ``doc/source/conf.py``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:292 +#: ../../source/contributor-tutorial-contribute-on-github.rst:293 msgid "" "This will cause a redirect from ``saving-progress.html`` to ``save-" "progress.html``, old links will continue to work." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:295 +#: ../../source/contributor-tutorial-contribute-on-github.rst:296 msgid "Apply changes in the index file" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:297 +#: ../../source/contributor-tutorial-contribute-on-github.rst:298 msgid "" "For the lateral navigation bar to work properly, it is very important to " "update the ``index.rst`` file as well. This is where we define the whole " "arborescence of the navbar." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:300 +#: ../../source/contributor-tutorial-contribute-on-github.rst:301 msgid "Find and modify the file name in ``index.rst``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:303 +#: ../../source/contributor-tutorial-contribute-on-github.rst:304 msgid "Open PR" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:305 +#: ../../source/contributor-tutorial-contribute-on-github.rst:306 msgid "" "Commit the changes (commit messages are always imperative: \"Do " "something\", in this case \"Change …\")" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:306 -msgid "Push the changes to your fork" -msgstr "" - #: ../../source/contributor-tutorial-contribute-on-github.rst:307 -msgid "Open a PR (as shown above)" +msgid "Push the changes to your fork" msgstr "" #: ../../source/contributor-tutorial-contribute-on-github.rst:308 -msgid "Wait for it to be approved!" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:309 -msgid "Congrats! 🥳 You're now officially a Flower contributor!" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:313 -msgid "How to write a good PR title" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:315 -msgid "" -"A well-crafted PR title helps team members quickly understand the purpose" -" and scope of the changes being proposed. Here's a guide to help you " -"write a good GitHub PR title:" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:317 -msgid "" -"1. Be Clear and Concise: Provide a clear summary of the changes in a " -"concise manner. 1. Use Actionable Verbs: Start with verbs like \"Add,\" " -"\"Update,\" or \"Fix\" to indicate the purpose. 1. Include Relevant " -"Information: Mention the affected feature or module for context. 1. Keep " -"it Short: Avoid lengthy titles for easy readability. 1. Use Proper " -"Capitalization and Punctuation: Follow grammar rules for clarity." 
-msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:323 msgid "" -"Let's start with a few examples for titles that should be avoided because" -" they do not provide meaningful information:" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:325 -msgid "Implement Algorithm" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:326 -msgid "Database" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:327 -msgid "Add my_new_file.py to codebase" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:328 -msgid "Improve code in module" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:329 -msgid "Change SomeModule" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:331 -msgid "" -"Here are a few positive examples which provide helpful information " -"without repeating how they do it, as that is already visible in the " -"\"Files changed\" section of the PR:" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:333 -msgid "Update docs banner to mention Flower Summit 2023" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:334 -msgid "Remove unnecessary XGBoost dependency" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:335 -msgid "Remove redundant attributes in strategies subclassing FedAvg" +"Open a PR (as shown above) with title ``docs(framework) Update how-to " +"guide title``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:336 -msgid "Add CI job to deploy the staging system when the ``main`` branch changes" +#: ../../source/contributor-tutorial-contribute-on-github.rst:309 +msgid "Wait for it to be approved!" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:337 -msgid "" -"Add new amazing library which will be used to improve the simulation " -"engine" +#: ../../source/contributor-tutorial-contribute-on-github.rst:310 +msgid "Congrats! 🥳 You're now officially a Flower contributor!" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:341 +#: ../../source/contributor-tutorial-contribute-on-github.rst:314 #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:548 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:946 #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:727 @@ -1927,153 +1899,92 @@ msgstr "" msgid "Next steps" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:343 +#: ../../source/contributor-tutorial-contribute-on-github.rst:316 msgid "" "Once you have made your first PR, and want to contribute more, be sure to" " check out the following :" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:345 +#: ../../source/contributor-tutorial-contribute-on-github.rst:318 msgid "" ":doc:`Good first contributions `, where you should particularly look into the " ":code:`baselines` contributions." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:349 +#: ../../source/contributor-tutorial-contribute-on-github.rst:322 #: ../../source/fed/0000-20200102-fed-template.md:60 msgid "Appendix" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:354 -msgid "Changelog entry" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:356 -msgid "" -"When opening a new PR, inside its description, there should be a " -"``Changelog entry`` header." -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:358 -msgid "" -"Above this header you should see the following comment that explains how " -"to write your changelog entry:" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:360 -msgid "" -"Inside the following 'Changelog entry' section, you should put the " -"description of your changes that will be added to the changelog alongside" -" your PR title." +#: ../../source/contributor-tutorial-contribute-on-github.rst:327 +msgid "PR title format" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:363 -msgid "" -"If the section is completely empty (without any token) or non-existent, " -"the changelog will just contain the title of the PR for the changelog " -"entry, without any description." +#: ../../source/contributor-tutorial-contribute-on-github.rst:329 +msgid "We enforce the following PR title format:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:366 +#: ../../source/contributor-tutorial-contribute-on-github.rst:335 msgid "" -"If the section contains some text other than tokens, it will use it to " -"add a description to the change." +"(or ``(:skip) `` to ignore the PR in the " +"changelog)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:368 +#: ../../source/contributor-tutorial-contribute-on-github.rst:337 msgid "" -"If the section contains one of the following tokens it will ignore any " -"other text and put the PR under the corresponding section of the " -"changelog:" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:370 -msgid " is for classifying a PR as a general improvement." +"Where ```` needs to be in ``{ci, fix, feat, docs, refactor, " +"break}``, ```` should be in ``{framework, baselines, datasets, " +"examples, or '*' when modifying multiple projects which requires the " +"':skip' flag to be used}``, and ```` starts with a capitalised " +"verb in the imperative mood." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:372 -msgid " is to not add the PR to the changelog" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:374 -msgid " is to add a general baselines change to the PR" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:376 -msgid " is to add a general examples change to the PR" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:378 -msgid " is to add a general sdk change to the PR" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:380 -msgid " is to add a general simulations change to the PR" -msgstr "" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:382 -msgid "Note that only one token should be used." 
-msgstr "" +#: ../../source/contributor-tutorial-contribute-on-github.rst:341 +#, fuzzy +msgid "Valid examples:" +msgstr "Exemplo" -#: ../../source/contributor-tutorial-contribute-on-github.rst:384 -msgid "" -"Its content must have a specific format. We will break down what each " -"possibility does:" +#: ../../source/contributor-tutorial-contribute-on-github.rst:343 +msgid "``feat(framework) Add flwr build CLI command``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:386 -msgid "" -"If the ``### Changelog entry`` section contains nothing or doesn't exist," -" the following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:344 +msgid "``refactor(examples:skip) Improve quickstart-pytorch logging``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:390 -msgid "" -"If the ``### Changelog entry`` section contains a description (and no " -"token), the following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:345 +msgid "``ci(*:skip) Enforce PR title format``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:396 -msgid "" -"If the ``### Changelog entry`` section contains ````, nothing will " -"change in the changelog." +#: ../../source/contributor-tutorial-contribute-on-github.rst:347 +msgid "Invalid examples:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:398 -msgid "" -"If the ``### Changelog entry`` section contains ````, the " -"following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:349 +msgid "``feat(framework): Add flwr build CLI command`` (extra ``:``)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:402 +#: ../../source/contributor-tutorial-contribute-on-github.rst:350 msgid "" -"If the ``### Changelog entry`` section contains ````, the " -"following text will be added to the changelog::" +"``feat(*) Add flwr build CLI command`` (missing ``skip`` flag along with " +"``*``)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:406 -msgid "" -"If the ``### Changelog entry`` section contains ````, the " -"following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:351 +msgid "``feat(skip) Add flwr build CLI command`` (missing ````)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:410 -msgid "" -"If the ``### Changelog entry`` section contains ````, the following " -"text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:352 +msgid "``feat(framework) add flwr build CLI command`` (non capitalised verb)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:414 -msgid "" -"If the ``### Changelog entry`` section contains ````, the " -"following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:353 +msgid "``feat(framework) Add flwr build CLI command.`` (dot at the end)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:418 -msgid "" -"Note that only one token must be provided, otherwise, only the first " -"action (in the order listed above), will be performed." 
+#: ../../source/contributor-tutorial-contribute-on-github.rst:354 +msgid "``Add flwr build CLI command.`` (missing ``()``)" msgstr "" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:2 @@ -2081,6 +1992,7 @@ msgid "Get started as a contributor" msgstr "" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:5 +#: ../../source/how-to-run-flower-using-docker.rst:132 msgid "Prerequisites" msgstr "" @@ -2213,51 +2125,96 @@ msgid "Run Linters and Tests" msgstr "" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:106 -msgid "Run Github Actions (CI) locally" +msgid "Add a pre-commit hook" msgstr "" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:108 msgid "" +"Developers may integrate a pre-commit hook into their workflow utilizing " +"the `pre-commit `_ library. The pre-" +"commit hook is configured to execute two primary operations: " +"``./dev/format.sh`` and ``./dev/test.sh`` scripts." +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:110 +msgid "There are multiple ways developers can use this:" +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:112 +msgid "Install the pre-commit hook to your local git directory by simply running:" +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:118 +msgid "" +"Each ``git commit`` will trigger the execution of formatting and " +"linting/test scripts." +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:119 +msgid "" +"If in a hurry, bypass the hook using ``--no-verify`` with the ``git " +"commit`` command. ::" +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:124 +msgid "" +"For developers who prefer not to install the hook permanently, it is " +"possible to execute a one-time check prior to committing changes by using" +" the following command:" +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:130 +msgid "" +"This executes the formatting and linting checks/tests on all the files " +"without modifying the default behavior of ``git commit``." +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:133 +msgid "Run Github Actions (CI) locally" +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:135 +msgid "" "Developers could run the full set of Github Actions workflows under their" " local environment by using `Act `_. " "Please refer to the installation instructions under the linked repository" " and run the next command under Flower main cloned repository folder::" msgstr "" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:115 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:142 msgid "" "The Flower default workflow would run by setting up the required Docker " "machines underneath." msgstr "" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:120 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:147 msgid "Build Release" msgstr "" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:122 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:149 msgid "" "Flower uses Poetry to build releases. 
The necessary command is wrapped in" " a simple script::" msgstr "" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:127 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:154 msgid "" "The resulting :code:`.whl` and :code:`.tar.gz` releases will be stored in" " the :code:`/dist` subdirectory." msgstr "" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:132 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:159 msgid "Build Documentation" msgstr "" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:134 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:161 msgid "" "Flower's documentation uses `Sphinx `_. " "There's no convenience script to re-build the documentation yet, but it's" " pretty easy::" msgstr "" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:140 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:167 msgid "This will generate HTML documentation in ``doc/build/html``." msgstr "" @@ -2351,7 +2308,6 @@ msgstr "" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:94 #: ../../source/example-jax-from-centralized-to-federated.rst:277 -#: ../../source/example-mxnet-walk-through.rst:356 #: ../../source/example-pytorch-from-centralized-to-federated.rst:310 #: ../../source/tutorial-quickstart-jax.rst:283 msgid "Next Steps" @@ -2494,7 +2450,6 @@ msgid "" msgstr "" #: ../../source/example-jax-from-centralized-to-federated.rst:123 -#: ../../source/example-mxnet-walk-through.rst:204 #: ../../source/example-pytorch-from-centralized-to-federated.rst:181 #: ../../source/tutorial-quickstart-jax.rst:129 msgid "" @@ -2505,7 +2460,6 @@ msgid "" msgstr "" #: ../../source/example-jax-from-centralized-to-federated.rst:133 -#: ../../source/example-mxnet-walk-through.rst:214 #: ../../source/example-pytorch-from-centralized-to-federated.rst:191 #: ../../source/tutorial-quickstart-jax.rst:139 msgid "We can already start the *server*:" @@ -2536,13 +2490,11 @@ msgid "" msgstr "" #: ../../source/example-jax-from-centralized-to-federated.rst:161 -#: ../../source/example-mxnet-walk-through.rst:242 #: ../../source/tutorial-quickstart-jax.rst:167 msgid ":code:`set_parameters (optional)`" msgstr "" #: ../../source/example-jax-from-centralized-to-federated.rst:160 -#: ../../source/example-mxnet-walk-through.rst:241 #: ../../source/example-pytorch-from-centralized-to-federated.rst:219 #: ../../source/tutorial-quickstart-jax.rst:166 msgid "" @@ -2556,7 +2508,6 @@ msgid "transform parameters to NumPy :code:`ndarray`'s" msgstr "" #: ../../source/example-jax-from-centralized-to-federated.rst:162 -#: ../../source/example-mxnet-walk-through.rst:243 #: ../../source/example-pytorch-from-centralized-to-federated.rst:220 #: ../../source/tutorial-quickstart-jax.rst:168 msgid "" @@ -2565,17 +2516,14 @@ msgid "" msgstr "" #: ../../source/example-jax-from-centralized-to-federated.rst:163 -#: ../../source/example-mxnet-walk-through.rst:244 #: ../../source/example-pytorch-from-centralized-to-federated.rst:221 #: ../../source/tutorial-quickstart-jax.rst:169 -#: ../../source/tutorial-quickstart-mxnet.rst:171 #: ../../source/tutorial-quickstart-pytorch.rst:155 -#: ../../source/tutorial-quickstart-scikitlearn.rst:108 +#: ../../source/tutorial-quickstart-scikitlearn.rst:118 msgid ":code:`get_parameters`" msgstr "" #: ../../source/example-jax-from-centralized-to-federated.rst:164 -#: ../../source/example-mxnet-walk-through.rst:245 #: 
../../source/example-pytorch-from-centralized-to-federated.rst:222 #: ../../source/tutorial-quickstart-jax.rst:170 msgid "" @@ -2584,19 +2532,15 @@ msgid "" msgstr "" #: ../../source/example-jax-from-centralized-to-federated.rst:167 -#: ../../source/example-mxnet-walk-through.rst:248 #: ../../source/example-pytorch-from-centralized-to-federated.rst:225 #: ../../source/tutorial-quickstart-jax.rst:173 -#: ../../source/tutorial-quickstart-mxnet.rst:177 #: ../../source/tutorial-quickstart-pytorch.rst:161 -#: ../../source/tutorial-quickstart-scikitlearn.rst:115 +#: ../../source/tutorial-quickstart-scikitlearn.rst:125 msgid ":code:`fit`" msgstr "" #: ../../source/example-jax-from-centralized-to-federated.rst:166 #: ../../source/example-jax-from-centralized-to-federated.rst:170 -#: ../../source/example-mxnet-walk-through.rst:247 -#: ../../source/example-mxnet-walk-through.rst:251 #: ../../source/example-pytorch-from-centralized-to-federated.rst:224 #: ../../source/example-pytorch-from-centralized-to-federated.rst:228 #: ../../source/tutorial-quickstart-jax.rst:172 @@ -2607,7 +2551,6 @@ msgid "" msgstr "" #: ../../source/example-jax-from-centralized-to-federated.rst:167 -#: ../../source/example-mxnet-walk-through.rst:248 #: ../../source/example-pytorch-from-centralized-to-federated.rst:225 #: ../../source/tutorial-quickstart-jax.rst:173 msgid "train the model on the local training set" @@ -2619,17 +2562,14 @@ msgid "get the updated local model parameters and return them to the server" msgstr "" #: ../../source/example-jax-from-centralized-to-federated.rst:172 -#: ../../source/example-mxnet-walk-through.rst:253 #: ../../source/example-pytorch-from-centralized-to-federated.rst:230 #: ../../source/tutorial-quickstart-jax.rst:178 -#: ../../source/tutorial-quickstart-mxnet.rst:180 #: ../../source/tutorial-quickstart-pytorch.rst:164 -#: ../../source/tutorial-quickstart-scikitlearn.rst:118 +#: ../../source/tutorial-quickstart-scikitlearn.rst:128 msgid ":code:`evaluate`" msgstr "" #: ../../source/example-jax-from-centralized-to-federated.rst:171 -#: ../../source/example-mxnet-walk-through.rst:252 #: ../../source/example-pytorch-from-centralized-to-federated.rst:229 #: ../../source/tutorial-quickstart-jax.rst:177 msgid "evaluate the updated model on the local test set" @@ -2666,7 +2606,6 @@ msgid "Having defined the federation process, we can run it." msgstr "" #: ../../source/example-jax-from-centralized-to-federated.rst:268 -#: ../../source/example-mxnet-walk-through.rst:347 #: ../../source/example-pytorch-from-centralized-to-federated.rst:301 #: ../../source/tutorial-quickstart-jax.rst:274 msgid "And that's it. You can now open two additional terminal windows and run" @@ -2697,332 +2636,143 @@ msgid "" "clients?" msgstr "" -#: ../../source/example-mxnet-walk-through.rst:2 -msgid "Example: MXNet - Run MXNet Federated" +#: ../../source/example-pytorch-from-centralized-to-federated.rst:2 +msgid "Example: PyTorch - From Centralized To Federated" msgstr "" -#: ../../source/example-mxnet-walk-through.rst:4 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:4 msgid "" "This tutorial will show you how to use Flower to build a federated " -"version of an existing MXNet workload. We are using MXNet to train a " -"Sequential model on the MNIST dataset. We will structure the example " -"similar to our `PyTorch - From Centralized To Federated " -"`_ walkthrough. MXNet and PyTorch are very " -"similar and a very good comparison between MXNet and PyTorch is given " -"`here `_. 
First, we build a centralized " -"training approach based on the `Handwritten Digit Recognition " -"`_" -" tutorial. Then, we build upon the centralized training code to run the " +"version of an existing machine learning workload. We are using PyTorch to" +" train a Convolutional Neural Network on the CIFAR-10 dataset. First, we " +"introduce this machine learning task with a centralized training approach" +" based on the `Deep Learning with PyTorch " +"`_ " +"tutorial. Then, we build upon the centralized training code to run the " "training in a federated fashion." msgstr "" -#: ../../source/example-mxnet-walk-through.rst:10 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:12 msgid "" -"Before we start setting up our MXNet example, we install the " -":code:`mxnet` and :code:`flwr` packages:" -msgstr "" - -#: ../../source/example-mxnet-walk-through.rst:19 -msgid "MNIST Training with MXNet" +"We begin with a brief description of the centralized CNN training code. " +"If you want a more in-depth explanation of what's going on then have a " +"look at the official `PyTorch tutorial " +"`_." msgstr "" -#: ../../source/example-mxnet-walk-through.rst:21 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:15 msgid "" -"We begin with a brief description of the centralized training code based " -"on a :code:`Sequential` model. If you want a more in-depth explanation of" -" what's going on then have a look at the official `MXNet tutorial " -"`_." +"Let's create a new file called :code:`cifar.py` with all the components " +"required for a traditional (centralized) training on CIFAR-10. First, all" +" required packages (such as :code:`torch` and :code:`torchvision`) need " +"to be imported. You can see that we do not import any package for " +"federated learning. You can keep all these imports as they are even when " +"we add the federated learning components at a later point." msgstr "" -#: ../../source/example-mxnet-walk-through.rst:24 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:32 msgid "" -"Let's create a new file called:code:`mxnet_mnist.py` with all the " -"components required for a traditional (centralized) MNIST training. " -"First, the MXNet package :code:`mxnet` needs to be imported. You can see " -"that we do not yet import the :code:`flwr` package for federated " -"learning. This will be done later." -msgstr "" - -#: ../../source/example-mxnet-walk-through.rst:42 -msgid "The :code:`load_data()` function loads the MNIST training and test sets." +"As already mentioned we will use the CIFAR-10 dataset for this machine " +"learning workload. The model architecture (a very simple Convolutional " +"Neural Network) is defined in :code:`class Net()`." msgstr "" -#: ../../source/example-mxnet-walk-through.rst:57 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:56 msgid "" -"As already mentioned, we will use the MNIST dataset for this machine " -"learning workload. The model architecture (a very simple " -":code:`Sequential` model) is defined in :code:`model()`." +"The :code:`load_data()` function loads the CIFAR-10 training and test " +"sets. The :code:`transform` normalized the data after loading." msgstr "" -#: ../../source/example-mxnet-walk-through.rst:70 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:74 msgid "" "We now need to define the training (function :code:`train()`) which loops" -" over the training set and measures the loss for each batch of training " -"examples." 
+" over the training set, measures the loss, backpropagates it, and then " +"takes one optimizer step for each batch of training examples." msgstr "" -#: ../../source/example-mxnet-walk-through.rst:123 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:76 msgid "" -"The evaluation of the model is defined in function :code:`test()`. The " -"function loops over all test samples and measures the loss and accuracy " -"of the model based on the test dataset." +"The evaluation of the model is defined in the function :code:`test()`. " +"The function loops over all test samples and measures the loss of the " +"model based on the test dataset." msgstr "" -#: ../../source/example-mxnet-walk-through.rst:158 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:136 msgid "" "Having defined the data loading, model architecture, training, and " -"evaluation we can put everything together and train our model on MNIST. " -"Note that the GPU/CPU device for the training and testing is defined " -"within the :code:`ctx` (context)." -msgstr "" - -#: ../../source/example-mxnet-walk-through.rst:184 -msgid "You can now run your (centralized) MXNet machine learning workload:" +"evaluation we can put everything together and train our CNN on CIFAR-10." msgstr "" -#: ../../source/example-mxnet-walk-through.rst:190 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:163 msgid "" -"So far this should all look fairly familiar if you've used MXNet (or even" -" PyTorch) before. Let's take the next step and use what we've built to " -"create a simple federated learning system consisting of one server and " -"two clients." +"So far, this should all look fairly familiar if you've used PyTorch " +"before. Let's take the next step and use what we've built to create a " +"simple federated learning system consisting of one server and two " +"clients." msgstr "" -#: ../../source/example-mxnet-walk-through.rst:194 -msgid "MXNet meets Flower" +#: ../../source/example-pytorch-from-centralized-to-federated.rst:169 +msgid "" +"The simple machine learning project discussed in the previous section " +"trains the model on a single dataset (CIFAR-10), we call this centralized" +" learning. This concept of centralized learning, as shown in the previous" +" section, is probably known to most of you, and many of you have used it " +"previously. Normally, if you'd want to run machine learning workloads in " +"a federated fashion, then you'd have to change most of your code and set " +"everything up from scratch. This can be a considerable effort." msgstr "" -#: ../../source/example-mxnet-walk-through.rst:196 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:173 msgid "" -"So far, it was not easily possible to use MXNet workloads for federated " -"learning because federated learning is not supported in MXNet. Since " -"Flower is fully agnostic towards the underlying machine learning " -"framework, it can be used to federated arbitrary machine learning " -"workloads. This section will show you how Flower can be used to federate " -"our centralized MXNet workload." +"However, with Flower you can evolve your pre-existing code into a " +"federated learning setup without the need for a major rewrite." msgstr "" -#: ../../source/example-mxnet-walk-through.rst:198 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:175 msgid "" -"The concept to federate an existing workload is always the same and easy " -"to understand. 
We have to start a *server* and then use the code in " -":code:`mxnet_mnist.py` for the *clients* that are connected to the " -"*server*. The *server* sends model parameters to the clients. The " +"The concept is easy to understand. We have to start a *server* and then " +"use the code in :code:`cifar.py` for the *clients* that are connected to " +"the *server*. The *server* sends model parameters to the clients. The " "*clients* run the training and update the parameters. The updated " "parameters are sent back to the *server* which averages all received " "parameter updates. This describes one round of the federated learning " "process and we repeat this for multiple rounds." msgstr "" -#: ../../source/example-mxnet-walk-through.rst:220 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:197 msgid "" "Finally, we will define our *client* logic in :code:`client.py` and build" -" upon the previously defined MXNet training in :code:`mxnet_mnist.py`. " -"Our *client* needs to import :code:`flwr`, but also :code:`mxnet` to " -"update the parameters on our MXNet model:" +" upon the previously defined centralized training in :code:`cifar.py`. " +"Our *client* needs to import :code:`flwr`, but also :code:`torch` to " +"update the parameters on our PyTorch model:" msgstr "" -#: ../../source/example-mxnet-walk-through.rst:235 +#: ../../source/example-pytorch-from-centralized-to-federated.rst:213 msgid "" "Implementing a Flower *client* basically means implementing a subclass of" " either :code:`flwr.client.Client` or :code:`flwr.client.NumPyClient`. " "Our implementation will be based on :code:`flwr.client.NumPyClient` and " -"we'll call it :code:`MNISTClient`. :code:`NumPyClient` is slightly easier" +"we'll call it :code:`CifarClient`. :code:`NumPyClient` is slightly easier" " to implement than :code:`Client` if you use a framework with good NumPy " -"interoperability (like PyTorch or MXNet) because it avoids some of the " -"boilerplate that would otherwise be necessary. :code:`MNISTClient` needs " -"to implement four methods, two methods for getting/setting model " -"parameters, one method for training the model, and one method for testing" -" the model:" +"interoperability (like PyTorch or TensorFlow/Keras) because it avoids " +"some of the boilerplate that would otherwise be necessary. " +":code:`CifarClient` needs to implement four methods, two methods for " +"getting/setting model parameters, one method for training the model, and " +"one method for testing the model:" msgstr "" -#: ../../source/example-mxnet-walk-through.rst:242 -msgid "transform MXNet :code:`NDArray`'s to NumPy :code:`ndarray`'s" +#: ../../source/example-pytorch-from-centralized-to-federated.rst:219 +msgid ":code:`set_parameters`" msgstr "" -#: ../../source/example-mxnet-walk-through.rst:249 #: ../../source/example-pytorch-from-centralized-to-federated.rst:226 msgid "get the updated local model weights and return them to the server" msgstr "" -#: ../../source/example-mxnet-walk-through.rst:253 #: ../../source/example-pytorch-from-centralized-to-federated.rst:230 msgid "return the local loss and accuracy to the server" msgstr "" -#: ../../source/example-mxnet-walk-through.rst:255 -msgid "" -"The challenging part is to transform the MXNet parameters from " -":code:`NDArray` to :code:`NumPy Arrays` to make it readable for Flower." 
-msgstr "" - -#: ../../source/example-mxnet-walk-through.rst:257 -msgid "" -"The two :code:`NumPyClient` methods :code:`fit` and :code:`evaluate` make" -" use of the functions :code:`train()` and :code:`test()` previously " -"defined in :code:`mxnet_mnist.py`. So what we really do here is we tell " -"Flower through our :code:`NumPyClient` subclass which of our already " -"defined functions to call for training and evaluation. We included type " -"annotations to give you a better understanding of the data types that get" -" passed around." -msgstr "" - -#: ../../source/example-mxnet-walk-through.rst:319 -msgid "" -"Having defined data loading, model architecture, training, and evaluation" -" we can put everything together and train our :code:`Sequential` model on" -" MNIST." -msgstr "" - -#: ../../source/example-mxnet-walk-through.rst:353 -msgid "" -"in each window (make sure that the server is still running before you do " -"so) and see your MXNet project run federated learning across two clients." -" Congratulations!" -msgstr "" - -#: ../../source/example-mxnet-walk-through.rst:358 -msgid "" -"The full source code for this example: `MXNet: From Centralized To " -"Federated (Code) `_. Our example is of course " -"somewhat over-simplified because both clients load the exact same " -"dataset, which isn't realistic. You're now prepared to explore this topic" -" further. How about using a CNN or using a different dataset? How about " -"adding more clients?" -msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:2 -msgid "Example: PyTorch - From Centralized To Federated" -msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:4 -msgid "" -"This tutorial will show you how to use Flower to build a federated " -"version of an existing machine learning workload. We are using PyTorch to" -" train a Convolutional Neural Network on the CIFAR-10 dataset. First, we " -"introduce this machine learning task with a centralized training approach" -" based on the `Deep Learning with PyTorch " -"`_ " -"tutorial. Then, we build upon the centralized training code to run the " -"training in a federated fashion." -msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:12 -msgid "" -"We begin with a brief description of the centralized CNN training code. " -"If you want a more in-depth explanation of what's going on then have a " -"look at the official `PyTorch tutorial " -"`_." -msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:15 -msgid "" -"Let's create a new file called :code:`cifar.py` with all the components " -"required for a traditional (centralized) training on CIFAR-10. First, all" -" required packages (such as :code:`torch` and :code:`torchvision`) need " -"to be imported. You can see that we do not import any package for " -"federated learning. You can keep all these imports as they are even when " -"we add the federated learning components at a later point." -msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:32 -msgid "" -"As already mentioned we will use the CIFAR-10 dataset for this machine " -"learning workload. The model architecture (a very simple Convolutional " -"Neural Network) is defined in :code:`class Net()`." -msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:56 -msgid "" -"The :code:`load_data()` function loads the CIFAR-10 training and test " -"sets. The :code:`transform` normalized the data after loading." 
-msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:74 -msgid "" -"We now need to define the training (function :code:`train()`) which loops" -" over the training set, measures the loss, backpropagates it, and then " -"takes one optimizer step for each batch of training examples." -msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:76 -msgid "" -"The evaluation of the model is defined in the function :code:`test()`. " -"The function loops over all test samples and measures the loss of the " -"model based on the test dataset." -msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:136 -msgid "" -"Having defined the data loading, model architecture, training, and " -"evaluation we can put everything together and train our CNN on CIFAR-10." -msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:163 -msgid "" -"So far, this should all look fairly familiar if you've used PyTorch " -"before. Let's take the next step and use what we've built to create a " -"simple federated learning system consisting of one server and two " -"clients." -msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:169 -msgid "" -"The simple machine learning project discussed in the previous section " -"trains the model on a single dataset (CIFAR-10), we call this centralized" -" learning. This concept of centralized learning, as shown in the previous" -" section, is probably known to most of you, and many of you have used it " -"previously. Normally, if you'd want to run machine learning workloads in " -"a federated fashion, then you'd have to change most of your code and set " -"everything up from scratch. This can be a considerable effort." -msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:173 -msgid "" -"However, with Flower you can evolve your pre-existing code into a " -"federated learning setup without the need for a major rewrite." -msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:175 -msgid "" -"The concept is easy to understand. We have to start a *server* and then " -"use the code in :code:`cifar.py` for the *clients* that are connected to " -"the *server*. The *server* sends model parameters to the clients. The " -"*clients* run the training and update the parameters. The updated " -"parameters are sent back to the *server* which averages all received " -"parameter updates. This describes one round of the federated learning " -"process and we repeat this for multiple rounds." -msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:197 -msgid "" -"Finally, we will define our *client* logic in :code:`client.py` and build" -" upon the previously defined centralized training in :code:`cifar.py`. " -"Our *client* needs to import :code:`flwr`, but also :code:`torch` to " -"update the parameters on our PyTorch model:" -msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:213 -msgid "" -"Implementing a Flower *client* basically means implementing a subclass of" -" either :code:`flwr.client.Client` or :code:`flwr.client.NumPyClient`. " -"Our implementation will be based on :code:`flwr.client.NumPyClient` and " -"we'll call it :code:`CifarClient`. 
:code:`NumPyClient` is slightly easier" -" to implement than :code:`Client` if you use a framework with good NumPy " -"interoperability (like PyTorch or TensorFlow/Keras) because it avoids " -"some of the boilerplate that would otherwise be necessary. " -":code:`CifarClient` needs to implement four methods, two methods for " -"getting/setting model parameters, one method for training the model, and " -"one method for testing the model:" -msgstr "" - -#: ../../source/example-pytorch-from-centralized-to-federated.rst:219 -msgid ":code:`set_parameters`" -msgstr "" - #: ../../source/example-pytorch-from-centralized-to-federated.rst:232 msgid "" "The two :code:`NumPyClient` methods :code:`fit` and :code:`evaluate` make" @@ -3965,6 +3715,162 @@ msgid "" "provided in these dictionaries:" msgstr "" +#: ../../source/how-to-authenticate-supernodes.rst:2 +msgid "Authenticate SuperNodes" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:4 +msgid "" +"Flower has built-in support for authenticated SuperNodes that you can use" +" to verify the identities of each SuperNode connecting to a SuperLink. " +"Flower node authentication works similar to how GitHub SSH authentication" +" works:" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:7 +msgid "SuperLink (server) stores a list of known (client) node public keys" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:8 +msgid "" +"Using ECDH, both SuperNode and SuperLink independently derive a shared " +"secret" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:9 +msgid "" +"Shared secret is used to compute the HMAC value of the message sent from " +"SuperNode to SuperLink as a token" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:10 +msgid "SuperLink verifies the token" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:12 +msgid "" +"We recommend you to check out the complete `code example " +"`_ demonstrating federated learning with Flower in an " +"authenticated setting." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:15 +msgid "" +"This guide covers a preview feature that might change in future versions " +"of Flower." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:18 +msgid "" +"For increased security, node authentication can only be used when " +"encrypted connections (SSL/TLS) are enabled." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:21 +msgid "Enable node authentication in :code:`SuperLink`" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:23 +msgid "" +"To enable node authentication, first you need to configure SSL/TLS " +"connections to secure the SuperLink<>SuperNode communication. You can " +"find the complete guide `here `_. After configuring secure connections, you" +" can enable client authentication in a long-running Flower " +":code:`SuperLink`. Use the following terminal command to start a Flower " +":code:`SuperNode` that has both secure connections and node " +"authentication enabled:" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:36 +msgid "Let's break down the authentication flags:" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:38 +msgid "" +"The first flag :code:`--auth-list-public-keys` expects a path to a CSV " +"file storing all known node public keys. You need to store all known node" +" public keys that are allowed to participate in a federation in one CSV " +"file (:code:`.csv`)." 
+msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:40 +msgid "" +"A valid CSV file storing known node public keys should list the keys in " +"OpenSSH format, separated by commas and without any comments. For an " +"example, refer to our code sample, which contains a CSV file with two " +"known node public keys." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:42 +msgid "" +"The second and third flags :code:`--auth-superlink-private-key` and :code" +":`--auth-superlink-public-key` expect paths to the server's private and " +"public keys. For development purposes, you can generate a private and " +"public key pair using :code:`ssh-keygen -t ecdsa -b 384`." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:45 +msgid "" +"In Flower 1.9, there is no support for dynamically removing, editing, or " +"adding known node public keys to the SuperLink. To change the set of " +"known nodes, you need to shut the server down, edit the CSV file, and " +"start the server again. Support for dynamically changing the set of known" +" nodes is on the roadmap to be released in Flower 1.10 (ETA: June)." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:51 +msgid "Enable node authentication in :code:`SuperNode`" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:53 +msgid "" +"Similar to the long-running Flower server (:code:`SuperLink`), you can " +"easily enable node authentication in the long-running Flower client " +"(:code:`SuperNode`). Use the following terminal command to start an " +"authenticated :code:`SuperNode`:" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:64 +msgid "" +"The :code:`--auth-supernode-private-key` flag expects a path to the " +"node's private key file and the :code:`--auth-supernode-public-key` flag " +"expects a path to the node's public key file. For development purposes, " +"you can generate a private and public key pair using :code:`ssh-keygen -t" +" ecdsa -b 384`." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:68 +msgid "Security notice" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:70 +msgid "" +"The system's security relies on the credentials of the SuperLink and each" +" SuperNode. Therefore, it is imperative to safeguard and safely store the" +" credentials to avoid security risks such as Public Key Infrastructure " +"(PKI) impersonation attacks. The node authentication mechanism also " +"involves human interaction, so please ensure that all of the " +"communication is done in a secure manner, using trusted communication " +"methods." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:75 +#: ../../source/how-to-enable-ssl-connections.rst:65 +#: ../../source/how-to-use-built-in-mods.rst:85 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:287 +msgid "Conclusion" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:77 +msgid "" +"You should now have learned how to start a long-running Flower server " +"(:code:`SuperLink`) and client (:code:`SuperNode`) with node " +"authentication enabled. You should also know the significance of the " +"private key and store it safely to minimize security risks." 
+msgstr "" + #: ../../source/how-to-configure-clients.rst:2 msgid "Configure clients" msgstr "" @@ -4173,8 +4079,9 @@ msgstr "" #: ../../source/how-to-enable-ssl-connections.rst:4 msgid "" -"This guide describes how to a SSL-enabled secure Flower server can be " -"started and how a Flower client can establish a secure connections to it." +"This guide describes how to a SSL-enabled secure Flower server " +"(:code:`SuperLink`) can be started and how a Flower client " +"(:code:`SuperNode`) can establish a secure connections to it." msgstr "" #: ../../source/how-to-enable-ssl-connections.rst:7 @@ -4186,10 +4093,10 @@ msgstr "" #: ../../source/how-to-enable-ssl-connections.rst:10 msgid "" -"The code example comes with a README.md file which will explain how to " -"start it. Although it is already SSL-enabled, it might be less " -"descriptive on how. Stick to this guide for a deeper introduction to the " -"topic." +"The code example comes with a :code:`README.md` file which explains how " +"to start it. Although it is already SSL-enabled, it might be less " +"descriptive on how it does so. Stick to this guide for a deeper " +"introduction to the topic." msgstr "" #: ../../source/how-to-enable-ssl-connections.rst:16 @@ -4202,100 +4109,81 @@ msgid "" "server and client. For the purpose of this guide we are going to generate" " self-signed certificates. As this can become quite complex we are going " "to ask you to run the script in :code:`examples/advanced-" -"tensorflow/certificates/generate.sh`" -msgstr "" - -#: ../../source/how-to-enable-ssl-connections.rst:23 -msgid "with the following command sequence:" +"tensorflow/certificates/generate.sh` with the following command sequence:" msgstr "" -#: ../../source/how-to-enable-ssl-connections.rst:30 +#: ../../source/how-to-enable-ssl-connections.rst:29 msgid "" "This will generate the certificates in :code:`examples/advanced-" "tensorflow/.cache/certificates`." msgstr "" -#: ../../source/how-to-enable-ssl-connections.rst:32 +#: ../../source/how-to-enable-ssl-connections.rst:31 msgid "" -"The approach how the SSL certificates are generated in this example can " -"serve as an inspiration and starting point but should not be taken as " -"complete for production environments. Please refer to other sources " -"regarding the issue of correctly generating certificates for production " -"environments." +"The approach for generating SSL certificates in the context of this " +"example can serve as an inspiration and starting point, but it should not" +" be used as a reference for production environments. Please refer to " +"other sources regarding the issue of correctly generating certificates " +"for production environments. For non-critical prototyping or research " +"projects, it might be sufficient to use the self-signed certificates " +"generated using the scripts mentioned in this guide." msgstr "" -#: ../../source/how-to-enable-ssl-connections.rst:36 -msgid "" -"In case you are a researcher you might be just fine using the self-signed" -" certificates generated using the scripts which are part of this guide." +#: ../../source/how-to-enable-ssl-connections.rst:39 +msgid "Server (SuperLink)" msgstr "" #: ../../source/how-to-enable-ssl-connections.rst:41 -#: ../../source/ref-api/flwr.server.Server.rst:2 -msgid "Server" -msgstr "" - -#: ../../source/how-to-enable-ssl-connections.rst:43 msgid "" -"We are now going to show how to write a sever which uses the previously " -"generated scripts." 
+"Use the following terminal command to start a sever (SuperLink) that uses" +" the previously generated certificates:" msgstr "" -#: ../../source/how-to-enable-ssl-connections.rst:61 +#: ../../source/how-to-enable-ssl-connections.rst:47 msgid "" "When providing certificates, the server expects a tuple of three " -"certificates. :code:`Path` can be used to easily read the contents of " -"those files into byte strings, which is the data type " -":code:`start_server` expects." +"certificates paths: CA certificate, server certificate and server private" +" key." msgstr "" -#: ../../source/how-to-enable-ssl-connections.rst:65 -#: ../../source/how-to-upgrade-to-flower-1.0.rst:37 -#: ../../source/ref-api/flwr.client.Client.rst:2 -msgid "Client" +#: ../../source/how-to-enable-ssl-connections.rst:51 +msgid "Client (SuperNode)" msgstr "" -#: ../../source/how-to-enable-ssl-connections.rst:67 +#: ../../source/how-to-enable-ssl-connections.rst:53 msgid "" -"We are now going to show how to write a client which uses the previously " -"generated scripts:" +"Use the following terminal command to start a client (SuperNode) that " +"uses the previously generated certificates:" msgstr "" -#: ../../source/how-to-enable-ssl-connections.rst:84 +#: ../../source/how-to-enable-ssl-connections.rst:61 msgid "" -"When setting :code:`root_certificates`, the client expects the PEM-" -"encoded root certificates as a byte string. We are again using " -":code:`Path` to simplify reading those as byte strings." +"When setting :code:`root_certificates`, the client expects a file path to" +" PEM-encoded root certificates." msgstr "" -#: ../../source/how-to-enable-ssl-connections.rst:89 -#: ../../source/how-to-use-built-in-mods.rst:85 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:287 -msgid "Conclusion" -msgstr "" - -#: ../../source/how-to-enable-ssl-connections.rst:91 +#: ../../source/how-to-enable-ssl-connections.rst:67 msgid "" "You should now have learned how to generate self-signed certificates " -"using the given script, start a SSL-enabled server, and have a client " +"using the given script, start an SSL-enabled server and have a client " "establish a secure connection to it." msgstr "" -#: ../../source/how-to-enable-ssl-connections.rst:96 +#: ../../source/how-to-enable-ssl-connections.rst:72 msgid "Additional resources" msgstr "" -#: ../../source/how-to-enable-ssl-connections.rst:98 +#: ../../source/how-to-enable-ssl-connections.rst:74 msgid "" "These additional sources might be relevant if you would like to dive " "deeper into the topic of certificates:" msgstr "" -#: ../../source/how-to-enable-ssl-connections.rst:100 +#: ../../source/how-to-enable-ssl-connections.rst:76 msgid "`Let's Encrypt `_" msgstr "" -#: ../../source/how-to-enable-ssl-connections.rst:101 +#: ../../source/how-to-enable-ssl-connections.rst:77 msgid "`certbot `_" msgstr "" @@ -4567,6 +4455,7 @@ msgid "Install stable release" msgstr "" #: ../../source/how-to-install-flower.rst:15 +#: ../../source/how-to-upgrade-to-flower-next.rst:46 msgid "Using pip" msgstr "" @@ -4904,7 +4793,7 @@ msgstr "" msgid "" "The simplest way to get started with Flower is by using the pre-made " "Docker images, which you can find on `Docker Hub " -"`_." +"`__." msgstr "" #: ../../source/how-to-run-flower-using-docker.rst:7 @@ -4927,149 +4816,466 @@ msgid "" "official Docker website." 
msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:25 -msgid "Flower server" +#: ../../source/how-to-run-flower-using-docker.rst:26 +msgid "" +"To ensure optimal performance and compatibility, the SuperLink, SuperNode" +" and ServerApp image must have the same version when running together. " +"This guarantees seamless integration and avoids potential conflicts or " +"issues that may arise from using different versions." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:31 +msgid "Flower SuperLink" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:28 +#: ../../source/how-to-run-flower-using-docker.rst:34 msgid "Quickstart" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:30 +#: ../../source/how-to-run-flower-using-docker.rst:36 msgid "If you're looking to try out Flower, you can use the following command:" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:37 +#: ../../source/how-to-run-flower-using-docker.rst:42 msgid "" -"The command will pull the Docker image with the tag " -"``1.7.0-py3.11-ubuntu22.04`` from Docker Hub. The tag contains the " -"information which Flower, Python and Ubuntu is used. In this case, it " -"uses Flower 1.7.0, Python 3.11 and Ubuntu 22.04. The ``--rm`` flag tells " -"Docker to remove the container after it exits." +"The command pulls the Docker image with the tag ``1.8.0`` from Docker " +"Hub. The tag specifies the Flower version. In this case, Flower 1.8.0. " +"The ``--rm`` flag tells Docker to remove the container after it exits." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:44 +#: ../../source/how-to-run-flower-using-docker.rst:48 msgid "" -"By default, the Flower server keeps state in-memory. When using the " +"By default, the Flower SuperLink keeps state in-memory. When using the " "Docker flag ``--rm``, the state is not persisted between container " "starts. We will show below how to save the state in a file on your host " "system." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:48 +#: ../../source/how-to-run-flower-using-docker.rst:52 msgid "" "The ``-p :`` flag tells Docker to map the ports " "``9091``/``9092`` of the host to ``9091``/``9092`` of the container, " "allowing you to access the Driver API on ``http://localhost:9091`` and " "the Fleet API on ``http://localhost:9092``. Lastly, any flag that comes " -"after the tag is passed to the Flower server. Here, we are passing the " -"flag ``--insecure``." +"after the tag is passed to the Flower SuperLink. Here, we are passing the" +" flag ``--insecure``." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:55 +#: ../../source/how-to-run-flower-using-docker.rst:59 +#: ../../source/how-to-run-flower-using-docker.rst:238 +#: ../../source/how-to-run-flower-using-docker.rst:354 msgid "" "The ``--insecure`` flag enables insecure communication (using HTTP, not " "HTTPS) and should only be used for testing purposes. We strongly " "recommend enabling `SSL `_ when " +"flower-using-docker.html#enabling-ssl-for-secure-connections>`__ when " "deploying to a production environment." 
msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:60 +#: ../../source/how-to-run-flower-using-docker.rst:64 msgid "" -"You can use ``--help`` to view all available flags that the server " +"You can use ``--help`` to view all available flags that the SuperLink " "supports:" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:67 +#: ../../source/how-to-run-flower-using-docker.rst:71 msgid "Mounting a volume to store the state on the host system" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:69 +#: ../../source/how-to-run-flower-using-docker.rst:73 msgid "" -"If you want to persist the state of the server on your host system, all " -"you need to do is specify a path where you want to save the file on your " -"host system and a name for the database file. In the example below, we " -"tell Docker via the flag ``-v`` to mount the user's home directory " -"(``~/`` on your host) into the ``/app/`` directory of the container. " -"Furthermore, we use the flag ``--database`` to specify the name of the " -"database file." +"If you want to persist the state of the SuperLink on your host system, " +"all you need to do is specify a path where you want to save the file on " +"your host system and a name for the database file. In the example below, " +"we tell Docker via the flag ``--volume`` to mount the user's home " +"directory (``~/`` on your host) into the ``/app/`` directory of the " +"container. Furthermore, we use the flag ``--database`` to specify the " +"name of the database file." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:82 +#: ../../source/how-to-run-flower-using-docker.rst:86 msgid "" -"As soon as the server starts, the file ``state.db`` is created in the " +"As soon as the SuperLink starts, the file ``state.db`` is created in the " "user's home directory on your host system. If the file already exists, " -"the server tries to restore the state from the file. To start the server " -"with an empty database, simply remove the ``state.db`` file." +"the SuperLink tries to restore the state from the file. To start the " +"SuperLink with an empty database, simply remove the ``state.db`` file." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:87 +#: ../../source/how-to-run-flower-using-docker.rst:91 +#: ../../source/how-to-run-flower-using-docker.rst:260 +#: ../../source/how-to-run-flower-using-docker.rst:375 msgid "Enabling SSL for secure connections" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:89 +#: ../../source/how-to-run-flower-using-docker.rst:93 msgid "" -"To enable SSL, you will need a CA certificate, a server certificate and a" -" server private key." +"To enable SSL, you will need a PEM-encoded root certificate, a PEM-" +"encoded private key and a PEM-encoded certificate chain." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:92 +#: ../../source/how-to-run-flower-using-docker.rst:97 msgid "" "For testing purposes, you can generate your own self-signed certificates." " The `Enable SSL connections `_ page contains a section that " -"will guide you through the process." +"enable-ssl-connections.html#certificates>`__ page contains a section that" +" will guide you through the process." 
msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:96 +#: ../../source/how-to-run-flower-using-docker.rst:101 msgid "" "Assuming all files we need are in the local ``certificates`` directory, " -"we can use the flag ``-v`` to mount the local directory into the " -"``/app/`` directory of the container. This allows the server to access " -"the files within the container. Finally, we pass the names of the " -"certificates to the server with the ``--certificates`` flag." +"we can use the flag ``--volume`` to mount the local directory into the " +"``/app/`` directory of the container. This allows the SuperLink to access" +" the files within the container. Finally, we pass the names of the " +"certificates to the SuperLink with the ``--certificates`` flag." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:108 -msgid "Using a different Flower or Python version" +#: ../../source/how-to-run-flower-using-docker.rst:113 +msgid "Flower SuperNode" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:110 +#: ../../source/how-to-run-flower-using-docker.rst:115 msgid "" -"If you want to use a different version of Flower or Python, you can do so" -" by changing the tag. All versions we provide are available on `Docker " -"Hub `_." +"The SuperNode Docker image comes with a pre-installed version of Flower " +"and serves as a base for building your own SuperNode image." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:114 -msgid "Pinning a Docker image to a specific version" +#: ../../source/how-to-run-flower-using-docker.rst:120 +msgid "" +"The SuperNode Docker image currently works only with the 1.9.0-nightly " +"release. A stable version will be available when Flower 1.9.0 (stable) " +"gets released (ETA: May). A SuperNode nightly image must be paired with " +"the corresponding SuperLink and ServerApp nightly images released on the " +"same day. To ensure the versions are in sync, using the concrete tag, " +"e.g., ``1.9.0.dev20240501`` instead of ``nightly`` is recommended." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:116 +#: ../../source/how-to-run-flower-using-docker.rst:126 msgid "" -"It may happen that we update the images behind the tags. Such updates " -"usually include security updates of system dependencies that should not " -"change the functionality of Flower. However, if you want to ensure that " +"We will use the ``quickstart-pytorch`` example, which you can find in the" +" Flower repository, to illustrate how you can dockerize your ClientApp." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:134 +#, fuzzy +msgid "" +"Before we can start, we need to meet a few prerequisites in our local " +"development environment. You can skip the first part if you want to run " +"your ClientApp instead of the ``quickstart-pytorch`` example." +msgstr "" +"Antes de começarmos, precisamos encontrar alguns pré-requisitos em nosso " +"ambiente de desenvolvimento local." + +#: ../../source/how-to-run-flower-using-docker.rst:138 +#, fuzzy +msgid "Clone the Flower repository." +msgstr "Clone o repositório do flower." 
+ +#: ../../source/how-to-run-flower-using-docker.rst:152 +msgid "Creating a SuperNode Dockerfile" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:154 +#: ../../source/how-to-run-flower-using-docker.rst:289 +msgid "Let's assume the following project layout:" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:163 +msgid "" +"First, we need to create a ``requirements.txt`` file in the directory " +"where the ``ClientApp`` code is located. In the file, we list all the " +"dependencies that the ClientApp requires." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:175 +msgid "" +"Note that `flwr `__ is already installed " +"in the ``flwr/supernode`` base image, so you only need to include other " +"package dependencies in your ``requirements.txt``, such as ``torch``, " +"``tensorflow``, etc." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:179 +msgid "" +"Next, we create a Dockerfile. If you use the ``quickstart-pytorch`` " +"example, create a new file called ``Dockerfile.supernode`` in ``examples" +"/quickstart-pytorch``." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:182 +msgid "" +"The ``Dockerfile.supernode`` contains the instructions that assemble the " +"SuperNode image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:196 +msgid "" +"In the first two lines, we instruct Docker to use the SuperNode image " +"tagged ``nightly`` as a base image and set our working directory to " +"``/app``. The following instructions will now be executed in the ``/app``" +" directory. Next, we install the ClientApp dependencies by copying the " +"``requirements.txt`` file into the image and run ``pip install``. In the " +"last two lines, we copy the ``client.py`` module into the image and set " +"the entry point to ``flower-client-app`` with the argument " +"``client:app``. The argument is the object reference of the ClientApp " +"(``:``) that will be run inside the ClientApp." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:205 +#, fuzzy +msgid "Building the SuperNode Docker image" +msgstr "Construindo a imagem do servidor" + +#: ../../source/how-to-run-flower-using-docker.rst:207 +msgid "" +"Next, we build the SuperNode Docker image by running the following " +"command in the directory where Dockerfile and ClientApp code are located." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:214 +msgid "" +"We gave the image the name ``flwr_supernode``, and the tag ``0.0.1``. " +"Remember that the here chosen values only serve as an example. You can " +"change them to your needs." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:219 +#, fuzzy +msgid "Running the SuperNode Docker image" +msgstr "Construindo a imagem do servidor" + +#: ../../source/how-to-run-flower-using-docker.rst:221 +msgid "Now that we have built the SuperNode image, we can finally run it." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:229 +#: ../../source/how-to-run-flower-using-docker.rst:345 +msgid "Let's break down each part of this command:" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:231 +#: ../../source/how-to-run-flower-using-docker.rst:347 +msgid "``docker run``: This is the command to run a new Docker container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:232 +#: ../../source/how-to-run-flower-using-docker.rst:348 +msgid "" +"``--rm``: This option specifies that the container should be " +"automatically removed when it stops." 
+msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:233 +msgid "``flwr_supernode:0.0.1``: The name the tag of the Docker image to use." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:234 +#: ../../source/how-to-run-flower-using-docker.rst:350 +msgid "``--insecure``: This option enables insecure communication." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst +msgid "" +"``--server 192.168.1.100:9092``: This option specifies the address of the" +" SuperLinks Fleet" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst +msgid "API to connect to. Remember to update it with your SuperLink IP." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:248 +msgid "" +"To test running Flower locally, you can create a `bridge network " +"`__, use the ``--network`` argument and pass the " +"name of the Docker network to run your SuperNodes." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:252 +msgid "" +"Any argument that comes after the tag is passed to the Flower SuperNode " +"binary. To see all available flags that the SuperNode supports, run:" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:262 +msgid "" +"To enable SSL, we will need to mount a PEM-encoded root certificate into " +"your SuperNode container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:264 +msgid "" +"Assuming the certificate already exists locally, we can use the flag " +"``--volume`` to mount the local certificate into the container's " +"``/app/`` directory. This allows the SuperNode to access the certificate " +"within the container. Use the ``--certificates`` flag when starting the " +"container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:275 +msgid "Flower ServerApp" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:277 +msgid "" +"The procedure for building and running a ServerApp image is almost " +"identical to the SuperNode image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:279 +msgid "" +"Similar to the SuperNode image, the ServerApp Docker image comes with a " +"pre-installed version of Flower and serves as a base for building your " +"own ServerApp image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:282 +msgid "" +"We will use the same ``quickstart-pytorch`` example as we do in the " +"Flower SuperNode section. If you have not already done so, please follow " +"the `SuperNode Prerequisites`_ before proceeding." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:287 +msgid "Creating a ServerApp Dockerfile" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:298 +msgid "" +"First, we need to create a Dockerfile in the directory where the " +"``ServerApp`` code is located. If you use the ``quickstart-pytorch`` " +"example, create a new file called ``Dockerfile.serverapp`` in ``examples" +"/quickstart-pytorch``." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:302 +msgid "" +"The ``Dockerfile.serverapp`` contains the instructions that assemble the " +"ServerApp image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:313 +msgid "" +"In the first two lines, we instruct Docker to use the ServerApp image " +"tagged ``1.8.0`` as a base image and set our working directory to " +"``/app``. The following instructions will now be executed in the ``/app``" +" directory. 
In the last two lines, we copy the ``server.py`` module into " +"the image and set the entry point to ``flower-server-app`` with the " +"argument ``server:app``. The argument is the object reference of the " +"ServerApp (``:``) that will be run inside the " +"ServerApp container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:321 +#, fuzzy +msgid "Building the ServerApp Docker image" +msgstr "Construindo a imagem do servidor" + +#: ../../source/how-to-run-flower-using-docker.rst:323 +msgid "" +"Next, we build the ServerApp Docker image by running the following " +"command in the directory where Dockerfile and ServerApp code are located." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:330 +msgid "" +"We gave the image the name ``flwr_serverapp``, and the tag ``0.0.1``. " +"Remember that the here chosen values only serve as an example. You can " +"change them to your needs." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:335 +#, fuzzy +msgid "Running the ServerApp Docker image" +msgstr "Construindo a imagem do servidor" + +#: ../../source/how-to-run-flower-using-docker.rst:337 +msgid "Now that we have built the ServerApp image, we can finally run it." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:349 +msgid "``flwr_serverapp:0.0.1``: The name the tag of the Docker image to use." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst +msgid "" +"``--server 192.168.1.100:9091``: This option specifies the address of the" +" SuperLinks Driver" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:363 +msgid "" +"To test running Flower locally, you can create a `bridge network " +"`__, use the ``--network`` argument and pass the " +"name of the Docker network to run your ServerApps." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:367 +msgid "" +"Any argument that comes after the tag is passed to the Flower ServerApp " +"binary. To see all available flags that the ServerApp supports, run:" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:377 +msgid "" +"To enable SSL, we will need to mount a PEM-encoded root certificate into " +"your ServerApp container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:379 +msgid "" +"Assuming the certificate already exists locally, we can use the flag " +"``--volume`` to mount the local certificate into the container's " +"``/app/`` directory. This allows the ServerApp to access the certificate " +"within the container. Use the ``--certificates`` flag when starting the " +"container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:390 +msgid "Advanced Docker options" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:393 +msgid "Using a different Flower version" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:395 +msgid "" +"If you want to use a different version of Flower, for example Flower " +"nightly, you can do so by changing the tag. All available versions are on" +" `Docker Hub `__." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:400 +msgid "Pinning a Docker image to a specific version" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:402 +msgid "" +"It may happen that we update the images behind the tags. Such updates " +"usually include security updates of system dependencies that should not " +"change the functionality of Flower. 
However, if you want to ensure that " "you always use the same image, you can specify the hash of the image " "instead of the tag." msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:121 +#: ../../source/how-to-run-flower-using-docker.rst:407 msgid "" "The following command returns the current image hash referenced by the " -"``server:1.7.0-py3.11-ubuntu22.04`` tag:" +"``superlink:1.8.0`` tag:" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:128 -msgid "Next, we can pin the hash when running a new server container:" +#: ../../source/how-to-run-flower-using-docker.rst:414 +msgid "Next, we can pin the hash when running a new SuperLink container:" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:137 +#: ../../source/how-to-run-flower-using-docker.rst:423 msgid "Setting environment variables" msgstr "" -#: ../../source/how-to-run-flower-using-docker.rst:139 +#: ../../source/how-to-run-flower-using-docker.rst:425 msgid "" "To set a variable inside a Docker container, you can use the ``-e " "=`` flag." @@ -5542,6 +5748,7 @@ msgid "" msgstr "" #: ../../source/how-to-upgrade-to-flower-1.0.rst:8 +#: ../../source/how-to-upgrade-to-flower-next.rst:43 msgid "Install update" msgstr "" @@ -5585,6 +5792,7 @@ msgid "" msgstr "" #: ../../source/how-to-upgrade-to-flower-1.0.rst:24 +#: ../../source/how-to-upgrade-to-flower-next.rst:100 msgid "Required changes" msgstr "" @@ -5615,6 +5823,11 @@ msgid "" "client=FlowerClient())``" msgstr "" +#: ../../source/how-to-upgrade-to-flower-1.0.rst:37 +#: ../../source/ref-api/flwr.client.Client.rst:2 +msgid "Client" +msgstr "" + #: ../../source/how-to-upgrade-to-flower-1.0.rst:39 msgid "" "Subclasses of ``NumPyClient``: change ``def get_parameters(self):``` to " @@ -5783,6 +5996,7 @@ msgid "" msgstr "" #: ../../source/how-to-upgrade-to-flower-1.0.rst:89 +#: ../../source/how-to-upgrade-to-flower-next.rst:317 msgid "Further help" msgstr "" @@ -5795,6 +6009,188 @@ msgid "" "`_ and use the channel ``#questions``." msgstr "" +#: ../../source/how-to-upgrade-to-flower-next.rst:2 +msgid "Upgrade to Flower Next" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:4 +msgid "" +"Welcome to the migration guide for updating Flower to Flower Next! " +"Whether you're a seasoned user or just getting started, this guide will " +"help you smoothly transition your existing setup to take advantage of the" +" latest features and improvements in Flower Next, starting from version " +"1.8." +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:9 +msgid "" +"This guide shows how to reuse pre-``1.8`` Flower code with minimum code " +"changes by using the *compatibility layer* in Flower Next. In another " +"guide, we will show how to run Flower Next end-to-end with pure Flower " +"Next APIs." +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:13 +msgid "Let's dive in!" 
+msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:48 +msgid "" +"Here's how to update an existing installation of Flower to Flower Next " +"with ``pip``:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:54 +msgid "or if you need Flower Next with simulation:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:61 +msgid "" +"Ensure you set the following version constraint in your " +"``requirements.txt``" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:71 +msgid "or ``pyproject.toml``:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:82 +msgid "Using Poetry" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:84 +msgid "" +"Update the ``flwr`` dependency in ``pyproject.toml`` and then reinstall " +"(don't forget to delete ``poetry.lock`` via ``rm poetry.lock`` before " +"running ``poetry install``)." +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:86 +msgid "" +"Ensure you set the following version constraint in your " +"``pyproject.toml``:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:102 +msgid "" +"In Flower Next, the *infrastructure* and *application layers* have been " +"decoupled. Instead of starting a client in code via ``start_client()``, " +"you create a |clientapp_link|_ and start it via the command line. Instead" +" of starting a server in code via ``start_server()``, you create a " +"|serverapp_link|_ and start it via the command line. The long-running " +"components of server and client are called SuperLink and SuperNode. The " +"following non-breaking changes that require manual updates and allow you " +"to run your project both in the traditional way and in the Flower Next " +"way:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:109 +msgid "|clientapp_link|_" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:110 +msgid "" +"Wrap your existing client with |clientapp_link|_ instead of launching it " +"via |startclient_link|_. Here's an example:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:132 +msgid "|serverapp_link|_" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:133 +msgid "" +"Wrap your existing strategy with |serverapp_link|_ instead of starting " +"the server via |startserver_link|_. Here's an example:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:154 +msgid "Deployment" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:155 +msgid "" +"Run the ``SuperLink`` using |flowernext_superlink_link|_ before running, " +"in sequence, |flowernext_clientapp_link|_ (2x) and " +"|flowernext_serverapp_link|_. There is no need to execute `client.py` and" +" `server.py` as Python scripts." +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:158 +msgid "" +"Here's an example to start the server without HTTPS (only for " +"prototyping):" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:174 +msgid "" +"Here's another example to start with HTTPS. Use the ``--certificates`` " +"command line argument to pass paths to (CA certificate, server " +"certificate, and server private key)." +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:201 +msgid "Simulation in CLI" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:202 +msgid "" +"Wrap your existing client and strategy with |clientapp_link|_ and " +"|serverapp_link|_, respectively. There is no need to use |startsim_link|_" +" anymore. 
Here's an example:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:232 +msgid "" +"Run |flower_simulation_link|_ in CLI and point to the ``server_app`` / " +"``client_app`` object in the code instead of executing the Python script." +" Here's an example (assuming the ``server_app`` and ``client_app`` " +"objects are in a ``sim.py`` module):" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:249 +msgid "" +"Set default resources for each |clientapp_link|_ using the ``--backend-" +"config`` command line argument instead of setting the " +"``client_resources`` argument in |startsim_link|_. Here's an example:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:275 +msgid "Simulation in a Notebook" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:276 +msgid "" +"Run |runsim_link|_ in your notebook instead of |startsim_link|_. Here's " +"an example:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:319 +msgid "" +"Some official `Flower code examples `_ " +"are already updated to Flower Next so they can serve as a reference for " +"using the Flower Next API. If there are further questions, `join the " +"Flower Slack `_ and use the channel " +"``#questions``. You can also `participate in Flower Discuss " +"`_ where you can find us answering questions," +" or share and learn from others about migrating to Flower Next." +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:325 +msgid "Important" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:328 +msgid "" +"As we continuously enhance Flower Next at a rapid pace, we'll be " +"periodically updating this guide. Please feel free to share any feedback " +"with us!" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:334 +msgid "Happy migrating! 🚀" +msgstr "" + #: ../../source/how-to-use-built-in-mods.rst:2 msgid "Use Built-in Mods" msgstr "" @@ -6170,15 +6566,15 @@ msgstr "" msgid "Quickstart tutorials" msgstr "" -#: ../../source/index.rst:75 ../../source/index.rst:79 +#: ../../source/index.rst:74 ../../source/index.rst:78 msgid "How-to guides" msgstr "" -#: ../../source/index.rst:98 +#: ../../source/index.rst:99 msgid "Legacy example guides" msgstr "" -#: ../../source/index.rst:109 ../../source/index.rst:113 +#: ../../source/index.rst:108 ../../source/index.rst:112 msgid "Explanations" msgstr "" @@ -6186,23 +6582,23 @@ msgstr "" msgid "API reference" msgstr "" -#: ../../source/index.rst:138 +#: ../../source/index.rst:137 msgid "Reference docs" msgstr "" -#: ../../source/index.rst:154 +#: ../../source/index.rst:153 msgid "Contributor tutorials" msgstr "" -#: ../../source/index.rst:161 +#: ../../source/index.rst:160 msgid "Contributor how-to guides" msgstr "" -#: ../../source/index.rst:174 +#: ../../source/index.rst:173 msgid "Contributor explanations" msgstr "" -#: ../../source/index.rst:180 +#: ../../source/index.rst:179 msgid "Contributor references" msgstr "" @@ -6259,60 +6655,60 @@ msgid "" "place to start." 
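Tying back to the *Simulation in CLI* entries above: the same ``server_app`` / ``client_app`` objects (assumed here to live in a ``sim.py`` module, as in the guide) are handed to the simulation command line instead of |startsim_link|_. A rough sketch follows; the flag names and the exact ``--backend-config`` JSON shape should be confirmed via ``flower-simulation --help``.

.. code-block:: bash

   # Run a Flower simulation from the CLI, pointing at the ServerApp and
   # ClientApp objects defined in sim.py (object references are examples).
   flower-simulation \
       --server-app sim:server_app \
       --client-app sim:client_app \
       --num-supernodes 2 \
       --backend-config '{"client_resources": {"num_cpus": 2, "num_gpus": 0.0}}'

In a notebook, the |runsim_link|_ helper described above takes the same two app objects directly, so no separate CLI invocation is needed there.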
msgstr "" -#: ../../source/index.rst:62 +#: ../../source/index.rst:61 msgid "" "QUICKSTART TUTORIALS: :doc:`PyTorch ` | " ":doc:`TensorFlow ` | :doc:`🤗 Transformers" " ` | :doc:`JAX ` | :doc:`Pandas ` | :doc:`fastai " "` | :doc:`PyTorch Lightning ` | :doc:`MXNet `" -" | :doc:`scikit-learn ` | :doc:`XGBoost " -"` | :doc:`Android ` | :doc:`iOS `" +"quickstart-pytorch-lightning>` | :doc:`scikit-learn ` | :doc:`XGBoost ` | " +":doc:`Android ` | :doc:`iOS `" msgstr "" -#: ../../source/index.rst:64 +#: ../../source/index.rst:63 msgid "We also made video tutorials for PyTorch:" msgstr "" -#: ../../source/index.rst:69 +#: ../../source/index.rst:68 msgid "And TensorFlow:" msgstr "" -#: ../../source/index.rst:77 +#: ../../source/index.rst:76 msgid "" "Problem-oriented how-to guides show step-by-step how to achieve a " "specific goal." msgstr "" -#: ../../source/index.rst:111 +#: ../../source/index.rst:110 msgid "" "Understanding-oriented concept guides explain and discuss key topics and " "underlying ideas behind Flower and collaborative AI." msgstr "" -#: ../../source/index.rst:121 +#: ../../source/index.rst:120 msgid "References" msgstr "" -#: ../../source/index.rst:123 +#: ../../source/index.rst:122 msgid "Information-oriented API reference and other reference material." msgstr "" -#: ../../source/index.rst:132::1 +#: ../../source/index.rst:131::1 msgid ":py:obj:`flwr `\\" msgstr "" -#: ../../source/index.rst:132::1 flwr:1 of +#: ../../source/index.rst:131::1 flwr:1 of msgid "Flower main package." msgstr "" -#: ../../source/index.rst:149 +#: ../../source/index.rst:148 msgid "Contributor docs" msgstr "" -#: ../../source/index.rst:151 +#: ../../source/index.rst:150 msgid "" "The Flower community welcomes contributions. The following docs are " "intended to help along the way." @@ -6323,22 +6719,18 @@ msgid "Flower CLI reference" msgstr "" #: ../../source/ref-api-cli.rst:7 -msgid "flower-superlink" +msgid "flower-simulation" msgstr "" #: ../../source/ref-api-cli.rst:17 -msgid "flower-driver-api" +msgid "flower-superlink" msgstr "" #: ../../source/ref-api-cli.rst:27 -msgid "flower-fleet-api" -msgstr "" - -#: ../../source/ref-api-cli.rst:37 msgid "flower-client-app" msgstr "" -#: ../../source/ref-api-cli.rst:47 +#: ../../source/ref-api-cli.rst:37 msgid "flower-server-app" msgstr "" @@ -6346,7 +6738,7 @@ msgstr "" msgid "flwr" msgstr "" -#: ../../source/ref-api/flwr.rst:25 ../../source/ref-api/flwr.server.rst:52 +#: ../../source/ref-api/flwr.rst:25 ../../source/ref-api/flwr.server.rst:51 msgid "Modules" msgstr "" @@ -6371,7 +6763,7 @@ msgid ":py:obj:`flwr.server `\\" msgstr "" #: ../../source/ref-api/flwr.rst:35::1 -#: ../../source/ref-api/flwr.server.rst:41::1 flwr.server:1 +#: ../../source/ref-api/flwr.server.rst:40::1 flwr.server:1 #: flwr.server.server.Server:1 of msgid "Flower server." msgstr "" @@ -6395,77 +6787,87 @@ msgstr "" msgid "Functions" msgstr "" -#: ../../source/ref-api/flwr.client.rst:24::1 +#: ../../source/ref-api/flwr.client.rst:25::1 msgid ":py:obj:`run_client_app `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.client.rst:24::1 -#: flwr.client.app.run_client_app:1 of +#: ../../source/ref-api/flwr.client.rst:25::1 +#: flwr.client.supernode.app.run_client_app:1 of msgid "Run Flower client app." msgstr "" -#: ../../source/ref-api/flwr.client.rst:24::1 +#: ../../source/ref-api/flwr.client.rst:25::1 +msgid ":py:obj:`run_supernode `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.client.rst:25::1 +#: flwr.client.supernode.app.run_supernode:1 of +msgid "Run Flower SuperNode." 
+msgstr "" + +#: ../../source/ref-api/flwr.client.rst:25::1 msgid "" ":py:obj:`start_client `\\ \\(\\*\\, " "server\\_address\\[\\, client\\_fn\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.client.rst:24::1 +#: ../../source/ref-api/flwr.client.rst:25::1 #: flwr.client.app.start_client:1 of msgid "Start a Flower client node which connects to a Flower server." msgstr "" -#: ../../source/ref-api/flwr.client.rst:24::1 +#: ../../source/ref-api/flwr.client.rst:25::1 msgid "" ":py:obj:`start_numpy_client `\\ \\(\\*\\," " server\\_address\\, client\\)" msgstr "" -#: ../../source/ref-api/flwr.client.rst:24::1 +#: ../../source/ref-api/flwr.client.rst:25::1 #: flwr.client.app.start_numpy_client:1 of msgid "Start a Flower NumPyClient which connects to a gRPC server." msgstr "" -#: ../../source/ref-api/flwr.client.rst:26 +#: ../../source/ref-api/flwr.client.rst:27 #: ../../source/ref-api/flwr.common.rst:32 -#: ../../source/ref-api/flwr.server.rst:29 +#: ../../source/ref-api/flwr.server.rst:28 #: ../../source/ref-api/flwr.server.strategy.rst:17 #: ../../source/ref-api/flwr.server.workflow.rst:17 msgid "Classes" msgstr "" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 msgid ":py:obj:`Client `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 #: flwr.client.client.Client:1 of msgid "Abstract base class for Flower clients." msgstr "" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 msgid "" ":py:obj:`ClientApp `\\ \\(\\[client\\_fn\\, " "mods\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 #: flwr.client.client_app.ClientApp:1 of msgid "Flower ClientApp." msgstr "" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 msgid ":py:obj:`NumPyClient `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 #: flwr.client.numpy_client.NumPyClient:1 of msgid "Abstract base class for Flower clients using NumPy." 
msgstr "" #: flwr.client.client.Client:1 flwr.client.numpy_client.NumPyClient:1 #: flwr.server.client_manager.ClientManager:1 -#: flwr.server.strategy.strategy.Strategy:1 of +#: flwr.server.driver.driver.Driver:1 flwr.server.strategy.strategy.Strategy:1 +#: of msgid "Bases: :py:class:`~abc.ABC`" msgstr "" @@ -6480,6 +6882,7 @@ msgstr "" #: ../../source/ref-api/flwr.common.Error.rst:15 #: ../../source/ref-api/flwr.common.EvaluateIns.rst:15 #: ../../source/ref-api/flwr.common.EvaluateRes.rst:15 +#: ../../source/ref-api/flwr.common.EventType.rst:15 #: ../../source/ref-api/flwr.common.FitIns.rst:15 #: ../../source/ref-api/flwr.common.FitRes.rst:15 #: ../../source/ref-api/flwr.common.GetParametersIns.rst:15 @@ -6616,7 +7019,7 @@ msgstr "" #: ../../source/ref-api/flwr.common.Error.rst:25 #: ../../source/ref-api/flwr.common.EvaluateIns.rst:25 #: ../../source/ref-api/flwr.common.EvaluateRes.rst:25 -#: ../../source/ref-api/flwr.common.EventType.rst:19 +#: ../../source/ref-api/flwr.common.EventType.rst:165 #: ../../source/ref-api/flwr.common.FitIns.rst:25 #: ../../source/ref-api/flwr.common.FitRes.rst:25 #: ../../source/ref-api/flwr.common.GetParametersIns.rst:25 @@ -6659,7 +7062,6 @@ msgstr "" #: flwr.server.client_manager.SimpleClientManager.register #: flwr.server.client_manager.SimpleClientManager.unregister #: flwr.server.client_manager.SimpleClientManager.wait_for -#: flwr.server.compat.app.start_driver flwr.server.driver.driver.Driver #: flwr.server.driver.driver.Driver.create_message #: flwr.server.driver.driver.Driver.pull_messages #: flwr.server.driver.driver.Driver.push_messages @@ -6711,7 +7113,6 @@ msgstr "" #: flwr.server.client_manager.SimpleClientManager.num_available #: flwr.server.client_manager.SimpleClientManager.register #: flwr.server.client_manager.SimpleClientManager.wait_for -#: flwr.server.compat.app.start_driver #: flwr.server.driver.driver.Driver.create_message #: flwr.server.driver.driver.Driver.pull_messages #: flwr.server.driver.driver.Driver.push_messages @@ -6745,7 +7146,6 @@ msgstr "" #: flwr.server.client_manager.SimpleClientManager.num_available #: flwr.server.client_manager.SimpleClientManager.register #: flwr.server.client_manager.SimpleClientManager.wait_for -#: flwr.server.compat.app.start_driver #: flwr.server.driver.driver.Driver.create_message #: flwr.server.driver.driver.Driver.pull_messages #: flwr.server.driver.driver.Driver.push_messages @@ -6810,9 +7210,9 @@ msgstr "" #: flwr.common.typing.GetParametersRes:1 flwr.common.typing.GetPropertiesIns:1 #: flwr.common.typing.GetPropertiesRes:1 flwr.common.typing.Parameters:1 #: flwr.common.typing.ReconnectIns:1 flwr.common.typing.ServerMessage:1 -#: flwr.common.typing.Status:1 flwr.server.driver.driver.Driver:1 -#: flwr.server.history.History:1 flwr.server.server.Server:1 -#: flwr.server.server_app.ServerApp:1 flwr.server.server_config.ServerConfig:1 +#: flwr.common.typing.Status:1 flwr.server.history.History:1 +#: flwr.server.server.Server:1 flwr.server.server_app.ServerApp:1 +#: flwr.server.server_config.ServerConfig:1 #: flwr.server.workflow.default_workflows.DefaultWorkflow:1 #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:1 #: of @@ -6824,8 +7224,7 @@ msgstr "" #: flwr.client.client_app.ClientApp.evaluate:4 #: flwr.client.client_app.ClientApp.query:4 #: flwr.client.client_app.ClientApp.train:4 flwr.server.app.start_server:41 -#: flwr.server.compat.app.start_driver:32 flwr.server.server_app.ServerApp:4 -#: flwr.server.server_app.ServerApp.main:4 +#: 
flwr.server.server_app.ServerApp:4 flwr.server.server_app.ServerApp.main:4 #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:29 #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:22 #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:21 @@ -7045,6 +7444,10 @@ msgstr "" msgid "run\\_client\\_app" msgstr "" +#: ../../source/ref-api/flwr.client.run_supernode.rst:2 +msgid "run\\_supernode" +msgstr "" + #: ../../source/ref-api/flwr.client.start_client.rst:2 msgid "start\\_client" msgstr "" @@ -7076,8 +7479,7 @@ msgid "" "increased limit and block larger messages." msgstr "" -#: flwr.client.app.start_client:19 flwr.client.app.start_numpy_client:22 -#: flwr.server.compat.app.start_driver:21 of +#: flwr.client.app.start_client:19 flwr.client.app.start_numpy_client:22 of msgid "" "The PEM-encoded root certificates as a byte string or a path string. If " "provided, a secure connection using the certificates will be established " @@ -7643,12 +8045,11 @@ msgstr "" #: flwr.common.record.configsrecord.ConfigsRecord:1 of msgid "" "Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " -"[:py:class:`str`, :py:obj:`~typing.Union`\\ [:py:class:`int`, " -":py:class:`float`, :py:class:`str`, :py:class:`bytes`, :py:class:`bool`, " -":py:class:`~typing.List`\\ [:py:class:`int`], :py:class:`~typing.List`\\ " -"[:py:class:`float`], :py:class:`~typing.List`\\ [:py:class:`str`], " -":py:class:`~typing.List`\\ [:py:class:`bytes`], " -":py:class:`~typing.List`\\ [:py:class:`bool`]]]" +"[:py:class:`str`, :py:class:`int` | :py:class:`float` | :py:class:`str` |" +" :py:class:`bytes` | :py:class:`bool` | :py:class:`~typing.List`\\ " +"[:py:class:`int`] | :py:class:`~typing.List`\\ [:py:class:`float`] | " +":py:class:`~typing.List`\\ [:py:class:`str`] | :py:class:`~typing.List`\\" +" [:py:class:`bytes`] | :py:class:`~typing.List`\\ [:py:class:`bool`]]" msgstr "" #: flwr.common.record.typeddict.TypedDict.clear:1::1 of @@ -7810,11931 +8211,13894 @@ msgstr "" msgid "Bases: :py:class:`str`, :py:class:`~enum.Enum`" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`PING `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`encode `\\ \\(\\[encoding\\, " +"errors\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`START_CLIENT_ENTER `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.encode:1 of +msgid "Encode the string using the codec registered for encoding." msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`START_CLIENT_LEAVE `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`replace `\\ \\(old\\, new\\[\\, " +"count\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`START_SERVER_ENTER `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.replace:1 of +msgid "Return a copy with all occurrences of substring old replaced by new." 
msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`START_SERVER_LEAVE `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`split `\\ \\(\\[sep\\, " +"maxsplit\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.rsplit:1 flwr.common.EventType.split:1 of msgid "" -":py:obj:`RUN_DRIVER_API_ENTER " -"`\\" +"Return a list of the substrings in the string, using sep as the separator" +" string." msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 msgid "" -":py:obj:`RUN_DRIVER_API_LEAVE " -"`\\" +":py:obj:`rsplit `\\ \\(\\[sep\\, " +"maxsplit\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid "" -":py:obj:`RUN_FLEET_API_ENTER " -"`\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`join `\\ \\(iterable\\, \\/\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid "" -":py:obj:`RUN_FLEET_API_LEAVE " -"`\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.join:1 of +msgid "Concatenate any number of strings." msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid "" -":py:obj:`RUN_SUPERLINK_ENTER " -"`\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`capitalize `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid "" -":py:obj:`RUN_SUPERLINK_LEAVE " -"`\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.capitalize:1 of +msgid "Return a capitalized version of the string." msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid "" -":py:obj:`START_SIMULATION_ENTER " -"`\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`casefold `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid "" -":py:obj:`START_SIMULATION_LEAVE " -"`\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.casefold:1 of +msgid "Return a version of the string suitable for caseless comparisons." msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`DRIVER_CONNECT `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`title `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`DRIVER_DISCONNECT `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.title:1 of +msgid "Return a version of the string where each word is titlecased." msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`START_DRIVER_ENTER `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`center `\\ \\(width\\[\\, " +"fillchar\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid ":py:obj:`START_DRIVER_LEAVE `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.center:1 of +msgid "Return a centered string of length width." 
msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 msgid "" -":py:obj:`RUN_CLIENT_APP_ENTER " -"`\\" +":py:obj:`count `\\ \\(sub\\[\\, start\\[\\, " +"end\\]\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 msgid "" -":py:obj:`RUN_CLIENT_APP_LEAVE " -"`\\" +"Return the number of non-overlapping occurrences of substring sub in " +"string S[start:end]." msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 msgid "" -":py:obj:`RUN_SERVER_APP_ENTER " -"`\\" +":py:obj:`expandtabs `\\ " +"\\(\\[tabsize\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -msgid "" -":py:obj:`RUN_SERVER_APP_LEAVE " -"`\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.expandtabs:1 of +msgid "Return a copy where all tab characters are expanded using spaces." msgstr "" -#: ../../source/ref-api/flwr.common.FitIns.rst:2 -msgid "FitIns" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`find `\\ \\(sub\\[\\, start\\[\\, " +"end\\]\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.FitIns.rst:29::1 -msgid ":py:obj:`parameters `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +"Return the lowest index in S where substring sub is found, such that sub " +"is contained within S[start:end]." msgstr "" -#: ../../source/ref-api/flwr.common.FitIns.rst:29::1 -msgid ":py:obj:`config `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`partition `\\ \\(sep\\, \\/\\)" msgstr "" -#: ../../source/ref-api/flwr.common.FitRes.rst:2 -msgid "FitRes" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.partition:1 flwr.common.EventType.rpartition:1 of +msgid "Partition the string into three parts using the given separator." msgstr "" -#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 -msgid ":py:obj:`status `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`index `\\ \\(sub\\[\\, start\\[\\, " +"end\\]\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 -msgid ":py:obj:`parameters `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`ljust `\\ \\(width\\[\\, " +"fillchar\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 -msgid ":py:obj:`num_examples `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.ljust:1 of +msgid "Return a left-justified string of length width." msgstr "" -#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 -msgid ":py:obj:`metrics `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`lower `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.common.GetParametersIns.rst:2 -msgid "GetParametersIns" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.lower:1 of +msgid "Return a copy of the string converted to lowercase." 
msgstr "" -#: ../../source/ref-api/flwr.common.GetParametersIns.rst:28::1 -msgid ":py:obj:`config `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`lstrip `\\ \\(\\[chars\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.GetParametersRes.rst:2 -msgid "GetParametersRes" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.lstrip:1 of +msgid "Return a copy of the string with leading whitespace removed." msgstr "" -#: ../../source/ref-api/flwr.common.GetParametersRes.rst:29::1 -msgid ":py:obj:`status `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`rfind `\\ \\(sub\\[\\, start\\[\\, " +"end\\]\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.GetParametersRes.rst:29::1 -msgid ":py:obj:`parameters `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +"Return the highest index in S where substring sub is found, such that sub" +" is contained within S[start:end]." msgstr "" -#: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:2 -msgid "GetPropertiesIns" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`rindex `\\ \\(sub\\[\\, " +"start\\[\\, end\\]\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:28::1 -msgid ":py:obj:`config `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`rjust `\\ \\(width\\[\\, " +"fillchar\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:2 -msgid "GetPropertiesRes" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.rjust:1 of +msgid "Return a right-justified string of length width." msgstr "" -#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:29::1 -msgid ":py:obj:`status `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`rstrip `\\ \\(\\[chars\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:29::1 -msgid ":py:obj:`properties `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.rstrip:1 of +msgid "Return a copy of the string with trailing whitespace removed." msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:2 -msgid "Message" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`rpartition `\\ \\(sep\\, \\/\\)" msgstr "" -#: flwr.common.Message.content:1::1 flwr.common.Message.metadata:1 -#: flwr.common.message.Message:3 of -msgid "A dataclass including information about the message to be executed." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`splitlines `\\ " +"\\(\\[keepends\\]\\)" msgstr "" -#: flwr.common.message.Message:5 of -msgid "" -"Holds records either sent by another entity (e.g. sent by the server-side" -" logic to a client, or vice-versa) or that will be sent to it." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.splitlines:1 of +msgid "Return a list of the lines in the string, breaking at line boundaries." msgstr "" -#: flwr.common.message.Message:8 of -msgid "" -"A dataclass that captures information about an error that took place when" -" processing another message." 
+#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`strip `\\ \\(\\[chars\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -msgid "" -":py:obj:`create_error_reply `\\ " -"\\(error\\, ttl\\)" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.strip:1 of +msgid "Return a copy of the string with leading and trailing whitespace removed." msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -#: flwr.common.message.Message.create_error_reply:1 of -msgid "Construct a reply message indicating an error happened." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`swapcase `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.swapcase:1 of msgid "" -":py:obj:`create_reply `\\ \\(content\\," -" ttl\\)" +"Convert uppercase characters to lowercase and lowercase characters to " +"uppercase." msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -#: flwr.common.message.Message.create_reply:1 of -msgid "Create a reply to this message with specified content and TTL." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`translate `\\ \\(table\\, \\/\\)" msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -msgid ":py:obj:`has_content `\\ \\(\\)" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.translate:1 of +msgid "Replace each character in the string using the given translation table." msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -#: flwr.common.message.Message.has_content:1 of -msgid "Return True if message has content, else False." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`upper `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -msgid ":py:obj:`has_error `\\ \\(\\)" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.upper:1 of +msgid "Return a copy of the string converted to uppercase." msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -#: flwr.common.message.Message.has_error:1 of -msgid "Return True if message has an error, else False." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`startswith `\\ \\(prefix\\[\\," +" start\\[\\, end\\]\\]\\)" msgstr "" -#: flwr.common.Message.content:1::1 of -msgid ":py:obj:`content `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "Return True if S starts with the specified prefix, False otherwise." msgstr "" -#: flwr.common.Message.content:1 flwr.common.Message.content:1::1 -#: of -msgid "The content of this message." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`endswith `\\ \\(suffix\\[\\, " +"start\\[\\, end\\]\\]\\)" msgstr "" -#: flwr.common.Message.content:1::1 of -msgid ":py:obj:`error `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "Return True if S ends with the specified suffix, False otherwise." msgstr "" -#: flwr.common.Message.content:1::1 flwr.common.Message.error:1 of -msgid "Error captured by this message." 
+#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`removeprefix `\\ " +"\\(prefix\\, \\/\\)" msgstr "" -#: flwr.common.Message.content:1::1 of -msgid ":py:obj:`metadata `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.removeprefix:1 of +msgid "Return a str with the given prefix string removed if present." msgstr "" -#: flwr.common.message.Message.create_error_reply:3 of -msgid "The error that was encountered." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`removesuffix `\\ " +"\\(suffix\\, \\/\\)" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.ttl:1 flwr.common.message.Message.create_error_reply:5 -#: flwr.common.message.Message.create_reply:9 flwr.common.message.Metadata:16 -#: of -msgid "Time-to-live for this message." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.removesuffix:1 of +msgid "Return a str with the given suffix string removed if present." msgstr "" -#: flwr.common.message.Message.create_reply:3 of -msgid "" -"The method generates a new `Message` as a reply to this message. It " -"inherits 'run_id', 'src_node_id', 'dst_node_id', and 'message_type' from " -"this message and sets 'reply_to_message' to the ID of this message." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`isascii `\\ \\(\\)" msgstr "" -#: flwr.common.message.Message.create_reply:7 of -msgid "The content for the reply message." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isascii:1 of +msgid "Return True if all characters in the string are ASCII, False otherwise." msgstr "" -#: flwr.common.message.Message.create_reply:12 of -msgid "A new `Message` instance representing the reply." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`islower `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.common.MessageType.rst:2 -msgid "MessageType" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.islower:1 of +msgid "Return True if the string is a lowercase string, False otherwise." msgstr "" -#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 -msgid ":py:obj:`EVALUATE `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`isupper `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 -msgid ":py:obj:`QUERY `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isupper:1 of +msgid "Return True if the string is an uppercase string, False otherwise." msgstr "" -#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 -msgid ":py:obj:`TRAIN `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`istitle `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:2 -msgid "MessageTypeLegacy" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.istitle:1 of +msgid "Return True if the string is a title-cased string, False otherwise." 
msgstr "" -#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 -msgid ":py:obj:`GET_PARAMETERS `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`isspace `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 -msgid ":py:obj:`GET_PROPERTIES `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isspace:1 of +msgid "Return True if the string is a whitespace string, False otherwise." msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.run_id:1 flwr.common.message.Metadata:3 of -msgid "An identifier for the current run." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`isdecimal `\\ \\(\\)" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.message_id:1 flwr.common.message.Metadata:5 of -msgid "An identifier for the current message." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isdecimal:1 of +msgid "Return True if the string is a decimal string, False otherwise." msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.src_node_id:1 flwr.common.message.Metadata:7 of -msgid "An identifier for the node sending this message." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`isdigit `\\ \\(\\)" msgstr "" -#: flwr.common.Metadata.dst_node_id:1 -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.message.Metadata:9 of -msgid "An identifier for the node receiving this message." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isdigit:1 of +msgid "Return True if the string is a digit string, False otherwise." msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.reply_to_message:1 flwr.common.message.Metadata:11 of -msgid "An identifier for the message this message replies to." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`isnumeric `\\ \\(\\)" msgstr "" -#: flwr.common.message.Metadata:13 of -msgid "" -"An identifier for grouping messages. In some settings, this is used as " -"the FL round." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isnumeric:1 of +msgid "Return True if the string is a numeric string, False otherwise." msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.message_type:1 flwr.common.message.Metadata:18 of -msgid "A string that encodes the action to be executed on the receiving end." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`isalpha `\\ \\(\\)" msgstr "" -#: flwr.common.message.Metadata:21 of -msgid "" -"An identifier that can be used when loading a particular data partition " -"for a ClientApp. Making use of this identifier is more relevant when " -"conducting simulations." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isalpha:1 of +msgid "Return True if the string is an alphabetic string, False otherwise." msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`dst_node_id `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`isalnum `\\ \\(\\)" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`group_id `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isalnum:1 of +msgid "Return True if the string is an alpha-numeric string, False otherwise." 
msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.group_id:1 of -msgid "An identifier for grouping messages." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`isidentifier `\\ \\(\\)" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`message_id `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isidentifier:1 of +msgid "Return True if the string is a valid Python identifier, False otherwise." msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`message_type `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`isprintable `\\ \\(\\)" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`partition_id `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isprintable:1 of +msgid "Return True if the string is printable, False otherwise." msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.partition_id:1 of -msgid "An identifier telling which data partition a ClientApp should use." +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`zfill `\\ \\(width\\, \\/\\)" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`reply_to_message `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.zfill:1 of +msgid "" +"Pad a numeric string with zeros on the left, to fill a field of the given" +" width." msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`run_id `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +":py:obj:`format `\\ \\(\\*args\\, " +"\\*\\*kwargs\\)" msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`src_node_id `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "Return a formatted version of S, using substitutions from args and kwargs." msgstr "" -#: flwr.common.Metadata.dst_node_id:1::1 of -msgid ":py:obj:`ttl `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`format_map `\\ \\(mapping\\)" msgstr "" -#: ../../source/ref-api/flwr.common.MetricsRecord.rst:2 -msgid "MetricsRecord" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "Return a formatted version of S, using substitutions from mapping." msgstr "" -#: flwr.common.record.metricsrecord.MetricsRecord:1 of -msgid "" -"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " -"[:py:class:`str`, :py:obj:`~typing.Union`\\ [:py:class:`int`, " -":py:class:`float`, :py:class:`~typing.List`\\ [:py:class:`int`], " -":py:class:`~typing.List`\\ [:py:class:`float`]]]" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid ":py:obj:`maketrans `\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`clear `\\ \\(\\)" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.maketrans:1 of +msgid "Return a translation table usable for str.translate()." 
msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`count_bytes `\\ \\(\\)" +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`PING `\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`START_CLIENT_ENTER `\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`items `\\ \\(\\)" +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`START_CLIENT_LEAVE `\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`keys `\\ \\(\\)" +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`START_SERVER_ENTER `\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`START_SERVER_LEAVE `\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +#: flwr.common.EventType.capitalize:1::1 of msgid "" -":py:obj:`update `\\ \\(\\[E\\, " -"\\]\\*\\*F\\)" +":py:obj:`RUN_DRIVER_API_ENTER " +"`\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`values `\\ \\(\\)" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_DRIVER_API_LEAVE " +"`\\" msgstr "" -#: ../../source/ref-api/flwr.common.NDArray.rst:2 -msgid "NDArray" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_FLEET_API_ENTER " +"`\\" msgstr "" -#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 -msgid ":py:obj:`tensors `\\" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_FLEET_API_LEAVE " +"`\\" msgstr "" -#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 -msgid ":py:obj:`tensor_type `\\" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_SUPERLINK_ENTER " +"`\\" msgstr "" -#: ../../source/ref-api/flwr.common.ParametersRecord.rst:2 -msgid "ParametersRecord" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_SUPERLINK_LEAVE " +"`\\" msgstr "" -#: flwr.common.record.parametersrecord.ParametersRecord:1 of +#: flwr.common.EventType.capitalize:1::1 of msgid "" -"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " -"[:py:class:`str`, :py:class:`~flwr.common.record.parametersrecord.Array`]" +":py:obj:`START_SIMULATION_ENTER " +"`\\" msgstr "" -#: flwr.common.record.parametersrecord.ParametersRecord:3 of +#: flwr.common.EventType.capitalize:1::1 of msgid "" -"A dataclass storing named Arrays in order. This means that it holds " -"entries as an OrderedDict[str, Array]. ParametersRecord objects can be " -"viewed as an equivalent to PyTorch's state_dict, but holding serialised " -"tensors instead." 
+":py:obj:`START_SIMULATION_LEAVE " +"`\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`clear `\\ \\(\\)" +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`DRIVER_CONNECT `\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`count_bytes `\\ \\(\\)" +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`DRIVER_DISCONNECT `\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`START_DRIVER_ENTER `\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`items `\\ \\(\\)" +#: flwr.common.EventType.capitalize:1::1 of +msgid ":py:obj:`START_DRIVER_LEAVE `\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`keys `\\ \\(\\)" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_CLIENT_APP_ENTER " +"`\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_CLIENT_APP_LEAVE " +"`\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +#: flwr.common.EventType.capitalize:1::1 of msgid "" -":py:obj:`update `\\ \\(\\[E\\, " -"\\]\\*\\*F\\)" +":py:obj:`RUN_SERVER_APP_ENTER " +"`\\" msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -msgid ":py:obj:`values `\\ \\(\\)" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_SERVER_APP_LEAVE " +"`\\" msgstr "" -#: flwr.common.record.parametersrecord.ParametersRecord.count_bytes:3 of +#: flwr.common.EventType.capitalize:1::1 of msgid "" -"Note that a small amount of Bytes might also be included in this counting" -" that correspond to metadata of the serialized object (e.g. of NumPy " -"array) needed for deseralization." +":py:obj:`RUN_SUPERNODE_ENTER " +"`\\" msgstr "" -#: ../../source/ref-api/flwr.common.ReconnectIns.rst:2 -msgid "ReconnectIns" +#: flwr.common.EventType.capitalize:1::1 of +msgid "" +":py:obj:`RUN_SUPERNODE_LEAVE " +"`\\" msgstr "" -#: ../../source/ref-api/flwr.common.ReconnectIns.rst:28::1 -msgid ":py:obj:`seconds `\\" +#: flwr.common.EventType.capitalize:3 of +msgid "" +"More specifically, make the first character have upper case and the rest " +"lower case." msgstr "" -#: ../../source/ref-api/flwr.common.RecordSet.rst:2 -msgid "RecordSet" +#: flwr.common.EventType.center:3 flwr.common.EventType.ljust:3 +#: flwr.common.EventType.rjust:3 of +msgid "Padding is done using the specified fill character (default is a space)." msgstr "" -#: flwr.common.RecordSet.configs_records:1::1 of -msgid ":py:obj:`configs_records `\\" +#: flwr.common.EventType.count:1 of +msgid "" +"Return the number of non-overlapping occurrences of substring sub in " +"string S[start:end]. Optional arguments start and end are interpreted as" +" in slice notation." msgstr "" -#: flwr.common.RecordSet.configs_records:1 -#: flwr.common.RecordSet.configs_records:1::1 of -msgid "Dictionary holding ConfigsRecord instances." +#: flwr.common.EventType.encode:3 of +msgid "encoding" msgstr "" -#: flwr.common.RecordSet.configs_records:1::1 of -msgid ":py:obj:`metrics_records `\\" +#: flwr.common.EventType.encode:4 of +msgid "The encoding in which to encode the string." 
msgstr "" -#: flwr.common.RecordSet.configs_records:1::1 -#: flwr.common.RecordSet.metrics_records:1 of -msgid "Dictionary holding MetricsRecord instances." +#: flwr.common.EventType.encode:9 of +msgid "errors" msgstr "" -#: flwr.common.RecordSet.configs_records:1::1 of -msgid ":py:obj:`parameters_records `\\" +#: flwr.common.EventType.encode:6 of +msgid "" +"The error handling scheme to use for encoding errors. The default is " +"'strict' meaning that encoding errors raise a UnicodeEncodeError. Other " +"possible values are 'ignore', 'replace' and 'xmlcharrefreplace' as well " +"as any other name registered with codecs.register_error that can handle " +"UnicodeEncodeErrors." msgstr "" -#: flwr.common.RecordSet.configs_records:1::1 -#: flwr.common.RecordSet.parameters_records:1 of -msgid "Dictionary holding ParametersRecord instances." +#: flwr.common.EventType.endswith:1 of +msgid "" +"Return True if S ends with the specified suffix, False otherwise. With " +"optional start, test S beginning at that position. With optional end, " +"stop comparing S at that position. suffix can also be a tuple of strings " +"to try." msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:2 -msgid "ServerMessage" +#: flwr.common.EventType.expandtabs:3 of +msgid "If tabsize is not given, a tab size of 8 characters is assumed." msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 -msgid ":py:obj:`evaluate_ins `\\" +#: flwr.common.EventType.find:1 flwr.common.EventType.index:1 of +msgid "" +"Return the lowest index in S where substring sub is found, such that sub " +"is contained within S[start:end]. Optional arguments start and end are " +"interpreted as in slice notation." msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 -msgid ":py:obj:`fit_ins `\\" +#: flwr.common.EventType.find:5 flwr.common.EventType.rfind:5 of +msgid "Return -1 on failure." msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +#: flwr.common.EventType.format:1 of msgid "" -":py:obj:`get_parameters_ins " -"`\\" +"Return a formatted version of S, using substitutions from args and " +"kwargs. The substitutions are identified by braces ('{' and '}')." msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +#: flwr.common.EventType.format_map:1 of msgid "" -":py:obj:`get_properties_ins " -"`\\" +"Return a formatted version of S, using substitutions from mapping. The " +"substitutions are identified by braces ('{' and '}')." msgstr "" -#: ../../source/ref-api/flwr.common.Status.rst:2 -msgid "Status" +#: flwr.common.EventType.index:5 flwr.common.EventType.rindex:5 of +msgid "Raises ValueError when the substring is not found." msgstr "" -#: ../../source/ref-api/flwr.common.Status.rst:29::1 -msgid ":py:obj:`code `\\" +#: flwr.common.EventType.isalnum:3 of +msgid "" +"A string is alpha-numeric if all characters in the string are alpha-" +"numeric and there is at least one character in the string." msgstr "" -#: ../../source/ref-api/flwr.common.Status.rst:29::1 -msgid ":py:obj:`message `\\" +#: flwr.common.EventType.isalpha:3 of +msgid "" +"A string is alphabetic if all characters in the string are alphabetic and" +" there is at least one character in the string." msgstr "" -#: ../../source/ref-api/flwr.common.array_from_numpy.rst:2 -msgid "array\\_from\\_numpy" +#: flwr.common.EventType.isascii:3 of +msgid "" +"ASCII characters have code points in the range U+0000-U+007F. Empty " +"string is ASCII too." 
msgstr "" -#: ../../source/ref-api/flwr.common.bytes_to_ndarray.rst:2 -msgid "bytes\\_to\\_ndarray" +#: flwr.common.EventType.isdecimal:3 of +msgid "" +"A string is a decimal string if all characters in the string are decimal " +"and there is at least one character in the string." msgstr "" -#: ../../source/ref-api/flwr.common.configure.rst:2 -msgid "configure" +#: flwr.common.EventType.isdigit:3 of +msgid "" +"A string is a digit string if all characters in the string are digits and" +" there is at least one character in the string." msgstr "" -#: ../../source/ref-api/flwr.common.event.rst:2 -msgid "event" +#: flwr.common.EventType.isidentifier:3 of +msgid "" +"Call keyword.iskeyword(s) to test whether string s is a reserved " +"identifier, such as \"def\" or \"class\"." msgstr "" -#: ../../source/ref-api/flwr.common.log.rst:2 -msgid "log" +#: flwr.common.EventType.islower:3 of +msgid "" +"A string is lowercase if all cased characters in the string are lowercase" +" and there is at least one cased character in the string." msgstr "" -#: logging.Logger.log:3 of +#: flwr.common.EventType.isnumeric:3 of msgid "" -"To pass exception information, use the keyword argument exc_info with a " -"true value, e.g." +"A string is numeric if all characters in the string are numeric and there" +" is at least one character in the string." msgstr "" -#: logging.Logger.log:6 of -#, python-format -msgid "logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)" +#: flwr.common.EventType.isprintable:3 of +msgid "" +"A string is printable if all of its characters are considered printable " +"in repr() or if it is empty." msgstr "" -#: ../../source/ref-api/flwr.common.ndarray_to_bytes.rst:2 -msgid "ndarray\\_to\\_bytes" +#: flwr.common.EventType.isspace:3 of +msgid "" +"A string is whitespace if all characters in the string are whitespace and" +" there is at least one character in the string." msgstr "" -#: ../../source/ref-api/flwr.common.ndarrays_to_parameters.rst:2 -msgid "ndarrays\\_to\\_parameters" +#: flwr.common.EventType.istitle:3 of +msgid "" +"In a title-cased string, upper- and title-case characters may only follow" +" uncased characters and lowercase characters only cased ones." msgstr "" -#: ../../source/ref-api/flwr.common.now.rst:2 -msgid "now" +#: flwr.common.EventType.isupper:3 of +msgid "" +"A string is uppercase if all cased characters in the string are uppercase" +" and there is at least one cased character in the string." msgstr "" -#: ../../source/ref-api/flwr.common.parameters_to_ndarrays.rst:2 -msgid "parameters\\_to\\_ndarrays" +#: flwr.common.EventType.join:3 of +msgid "" +"The string whose method is called is inserted in between each given " +"string. The result is returned as a new string." msgstr "" -#: ../../source/ref-api/flwr.server.rst:2 -msgid "server" +#: flwr.common.EventType.join:6 of +msgid "Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs'" msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -msgid ":py:obj:`run_driver_api `\\ \\(\\)" +#: flwr.common.EventType.lstrip:3 flwr.common.EventType.rstrip:3 +#: flwr.common.EventType.strip:3 of +msgid "If chars is given and not None, remove characters in chars instead." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.app.run_driver_api:1 of -msgid "Run Flower server (Driver API)." +#: flwr.common.EventType.maketrans:3 of +msgid "" +"If there is only one argument, it must be a dictionary mapping Unicode " +"ordinals (integers) or characters to Unicode ordinals, strings or None. 
" +"Character keys will be then converted to ordinals. If there are two " +"arguments, they must be strings of equal length, and in the resulting " +"dictionary, each character in x will be mapped to the character at the " +"same position in y. If there is a third argument, it must be a string, " +"whose characters will be mapped to None in the result." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -msgid ":py:obj:`run_fleet_api `\\ \\(\\)" +#: flwr.common.EventType.partition:3 of +msgid "" +"This will search for the separator in the string. If the separator is " +"found, returns a 3-tuple containing the part before the separator, the " +"separator itself, and the part after it." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.app.run_fleet_api:1 of -msgid "Run Flower server (Fleet API)." +#: flwr.common.EventType.partition:7 of +msgid "" +"If the separator is not found, returns a 3-tuple containing the original " +"string and two empty strings." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -msgid ":py:obj:`run_server_app `\\ \\(\\)" +#: flwr.common.EventType.removeprefix:3 of +msgid "" +"If the string starts with the prefix string, return string[len(prefix):]." +" Otherwise, return a copy of the original string." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.run_serverapp.run_server_app:1 of -msgid "Run Flower server app." +#: flwr.common.EventType.removesuffix:3 of +msgid "" +"If the string ends with the suffix string and that suffix is not empty, " +"return string[:-len(suffix)]. Otherwise, return a copy of the original " +"string." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -msgid ":py:obj:`run_superlink `\\ \\(\\)" +#: flwr.common.EventType.replace:5 of +msgid "count" msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.app.run_superlink:1 of -msgid "Run Flower server (Driver API and Fleet API)." +#: flwr.common.EventType.replace:4 of +msgid "" +"Maximum number of occurrences to replace. -1 (the default value) means " +"replace all occurrences." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 +#: flwr.common.EventType.replace:7 of msgid "" -":py:obj:`start_driver `\\ \\(\\*\\[\\, " -"server\\_address\\, server\\, ...\\]\\)" +"If the optional argument count is given, only the first count occurrences" +" are replaced." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.compat.app.start_driver:1 of -msgid "Start a Flower Driver API server." +#: flwr.common.EventType.rfind:1 flwr.common.EventType.rindex:1 of +msgid "" +"Return the highest index in S where substring sub is found, such that sub" +" is contained within S[start:end]. Optional arguments start and end are " +"interpreted as in slice notation." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 +#: flwr.common.EventType.rpartition:3 of msgid "" -":py:obj:`start_server `\\ \\(\\*\\[\\, " -"server\\_address\\, server\\, ...\\]\\)" +"This will search for the separator in the string, starting at the end. If" +" the separator is found, returns a 3-tuple containing the part before the" +" separator, the separator itself, and the part after it." msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.app.start_server:1 of -msgid "Start a Flower server using the gRPC transport layer." +#: flwr.common.EventType.rpartition:7 of +msgid "" +"If the separator is not found, returns a 3-tuple containing two empty " +"strings and the original string." 
msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -msgid ":py:obj:`ClientManager `\\ \\(\\)" +#: flwr.common.EventType.rsplit:7 flwr.common.EventType.split:7 of +msgid "sep" msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.client_manager.ClientManager:1 of -msgid "Abstract base class for managing Flower clients." +#: flwr.common.EventType.rsplit:4 flwr.common.EventType.split:4 of +msgid "The separator used to split the string." msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.common.EventType.rsplit:6 flwr.common.EventType.split:6 of msgid "" -":py:obj:`Driver `\\ " -"\\(\\[driver\\_service\\_address\\, ...\\]\\)" +"When set to None (the default value), will split on any whitespace " +"character (including \\\\n \\\\r \\\\t \\\\f and spaces) and will discard" +" empty strings from the result." msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.driver.driver.Driver:1 of -msgid "`Driver` class provides an interface to the Driver API." +#: flwr.common.EventType.rsplit:11 flwr.common.EventType.split:11 of +msgid "maxsplit" msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -msgid ":py:obj:`History `\\ \\(\\)" +#: flwr.common.EventType.rsplit:10 flwr.common.EventType.split:10 of +msgid "" +"Maximum number of splits (starting from the left). -1 (the default value)" +" means no limit." msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.history.History:1 of -msgid "History class for training and/or evaluation metrics collection." +#: flwr.common.EventType.rsplit:13 of +msgid "Splitting starts at the end of the string and works to the front." msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.common.EventType.split:13 of msgid "" -":py:obj:`LegacyContext `\\ \\(state\\[\\, " -"config\\, strategy\\, ...\\]\\)" +"Note, str.split() is mainly useful for data that has been intentionally " +"delimited. With natural text that includes punctuation, consider using " +"the regular expression module." msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.compat.legacy_context.LegacyContext:1 of -msgid "Legacy Context." +#: flwr.common.EventType.splitlines:3 of +msgid "" +"Line breaks are not included in the resulting list unless keepends is " +"given and true." msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.common.EventType.startswith:1 of msgid "" -":py:obj:`Server `\\ \\(\\*\\, client\\_manager\\[\\, " -"strategy\\]\\)" +"Return True if S starts with the specified prefix, False otherwise. With " +"optional start, test S beginning at that position. With optional end, " +"stop comparing S at that position. prefix can also be a tuple of strings " +"to try." msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.common.EventType.title:3 of msgid "" -":py:obj:`ServerApp `\\ \\(\\[server\\, config\\, " -"strategy\\, ...\\]\\)" +"More specifically, words start with uppercased characters and all " +"remaining cased characters have lower case." msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.server_app.ServerApp:1 of -msgid "Flower ServerApp." +#: flwr.common.EventType.translate:5 of +msgid "table" msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.common.EventType.translate:4 of msgid "" -":py:obj:`ServerConfig `\\ \\(\\[num\\_rounds\\," -" round\\_timeout\\]\\)" +"Translation table, which must be a mapping of Unicode ordinals to Unicode" +" ordinals, strings, or None." 
msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.server_config.ServerConfig:1 of -msgid "Flower server config." +#: flwr.common.EventType.translate:7 of +msgid "" +"The table must implement lookup/indexing via __getitem__, for instance a " +"dictionary or list. If this operation raises LookupError, the character " +"is left untouched. Characters mapped to None are deleted." msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -msgid ":py:obj:`SimpleClientManager `\\ \\(\\)" +#: flwr.common.EventType.zfill:3 of +msgid "The string is never truncated." msgstr "" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.client_manager.SimpleClientManager:1 of -msgid "Provides a pool of available clients." +#: ../../source/ref-api/flwr.common.FitIns.rst:2 +msgid "FitIns" msgstr "" -#: ../../source/ref-api/flwr.server.rst:60::1 -msgid ":py:obj:`flwr.server.strategy `\\" +#: ../../source/ref-api/flwr.common.FitIns.rst:29::1 +msgid ":py:obj:`parameters `\\" msgstr "" -#: ../../source/ref-api/flwr.server.rst:60::1 -#: flwr.server.strategy:1 of -msgid "Contains the strategy abstraction and different implementations." +#: ../../source/ref-api/flwr.common.FitIns.rst:29::1 +msgid ":py:obj:`config `\\" msgstr "" -#: ../../source/ref-api/flwr.server.rst:60::1 -msgid ":py:obj:`flwr.server.workflow `\\" +#: ../../source/ref-api/flwr.common.FitRes.rst:2 +msgid "FitRes" msgstr "" -#: ../../source/ref-api/flwr.server.rst:60::1 -#: flwr.server.workflow:1 of -msgid "Workflows." +#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 +msgid ":py:obj:`status `\\" msgstr "" -#: ../../source/ref-api/flwr.server.ClientManager.rst:2 -msgid "ClientManager" +#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 +msgid ":py:obj:`parameters `\\" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of -msgid ":py:obj:`all `\\ \\(\\)" +#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 +msgid ":py:obj:`num_examples `\\" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1 -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.all:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "Return all available clients." +#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 +msgid ":py:obj:`metrics `\\" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of -msgid ":py:obj:`num_available `\\ \\(\\)" +#: ../../source/ref-api/flwr.common.GetParametersIns.rst:2 +msgid "GetParametersIns" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.num_available:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.num_available:1 of -msgid "Return the number of available clients." +#: ../../source/ref-api/flwr.common.GetParametersIns.rst:28::1 +msgid ":py:obj:`config `\\" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of -msgid ":py:obj:`register `\\ \\(client\\)" +#: ../../source/ref-api/flwr.common.GetParametersRes.rst:2 +msgid "GetParametersRes" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.register:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.register:1 of -msgid "Register Flower ClientProxy instance." 
+#: ../../source/ref-api/flwr.common.GetParametersRes.rst:29::1 +msgid ":py:obj:`status `\\" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of -msgid "" -":py:obj:`sample `\\ " -"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" +#: ../../source/ref-api/flwr.common.GetParametersRes.rst:29::1 +msgid ":py:obj:`parameters `\\" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.sample:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.sample:1 of -msgid "Sample a number of Flower ClientProxy instances." +#: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:2 +msgid "GetPropertiesIns" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of -msgid ":py:obj:`unregister `\\ \\(client\\)" +#: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:28::1 +msgid ":py:obj:`config `\\" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.unregister:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.unregister:1 of -msgid "Unregister Flower ClientProxy instance." +#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:2 +msgid "GetPropertiesRes" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of -msgid "" -":py:obj:`wait_for `\\ " -"\\(num\\_clients\\, timeout\\)" +#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:29::1 +msgid ":py:obj:`status `\\" msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.wait_for:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.wait_for:1 of -msgid "Wait until at least `num_clients` are available." +#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:29::1 +msgid ":py:obj:`properties `\\" msgstr "" -#: flwr.server.client_manager.ClientManager.num_available:3 -#: flwr.server.client_manager.SimpleClientManager.num_available:3 of -msgid "**num_available** -- The number of currently available clients." +#: ../../source/ref-api/flwr.common.Message.rst:2 +msgid "Message" msgstr "" -#: flwr.server.client_manager.ClientManager.register:6 -#: flwr.server.client_manager.SimpleClientManager.register:6 of -msgid "" -"**success** -- Indicating if registration was successful. False if " -"ClientProxy is already registered or can not be registered for any " -"reason." +#: flwr.common.Message.content:1::1 flwr.common.Message.metadata:1 +#: flwr.common.message.Message:3 of +msgid "A dataclass including information about the message to be executed." msgstr "" -#: flwr.server.client_manager.ClientManager.unregister:3 -#: flwr.server.client_manager.SimpleClientManager.unregister:3 of -msgid "This method is idempotent." +#: flwr.common.message.Message:5 of +msgid "" +"Holds records either sent by another entity (e.g. sent by the server-side" +" logic to a client, or vice-versa) or that will be sent to it." msgstr "" -#: ../../source/ref-api/flwr.server.Driver.rst:2 -msgid "Driver" +#: flwr.common.message.Message:8 of +msgid "" +"A dataclass that captures information about an error that took place when" +" processing another message." msgstr "" -#: flwr.server.driver.driver.Driver:3 of +#: ../../source/ref-api/flwr.common.Message.rst:35::1 msgid "" -"The IPv4 or IPv6 address of the Driver API server. Defaults to " -"`\"[::]:9091\"`." 
+":py:obj:`create_error_reply `\\ " +"\\(error\\[\\, ttl\\]\\)" msgstr "" -#: flwr.server.app.start_server:28 flwr.server.driver.driver.Driver:6 of -msgid "" -"Tuple containing root certificate, server certificate, and private key to" -" start a secure SSL-enabled server. The tuple is expected to have three " -"bytes elements in the following order: * CA certificate. * " -"server certificate. * server private key." +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.create_error_reply:1 of +msgid "Construct a reply message indicating an error happened." msgstr "" -#: flwr.server.app.start_server:28 flwr.server.driver.driver.Driver:6 of +#: ../../source/ref-api/flwr.common.Message.rst:35::1 msgid "" -"Tuple containing root certificate, server certificate, and private key to" -" start a secure SSL-enabled server. The tuple is expected to have three " -"bytes elements in the following order:" +":py:obj:`create_reply `\\ " +"\\(content\\[\\, ttl\\]\\)" msgstr "" -#: flwr.server.app.start_server:32 flwr.server.driver.driver.Driver:10 of -msgid "CA certificate." +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.create_reply:1 of +msgid "Create a reply to this message with specified content and TTL." msgstr "" -#: flwr.server.app.start_server:33 flwr.server.driver.driver.Driver:11 of -msgid "server certificate." +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +msgid ":py:obj:`has_content `\\ \\(\\)" msgstr "" -#: flwr.server.app.start_server:34 flwr.server.driver.driver.Driver:12 of -msgid "server private key." +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.has_content:1 of +msgid "Return True if message has content, else False." msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 of -msgid ":py:obj:`close `\\ \\(\\)" +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +msgid ":py:obj:`has_error `\\ \\(\\)" msgstr "" -#: flwr.server.driver.driver.Driver.close:1 -#: flwr.server.driver.driver.Driver.close:1::1 of -msgid "Disconnect from the SuperLink if connected." +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.has_error:1 of +msgid "Return True if message has an error, else False." msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 of -msgid "" -":py:obj:`create_message `\\ " -"\\(content\\, message\\_type\\, ...\\)" +#: flwr.common.Message.content:1::1 of +msgid ":py:obj:`content `\\" msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 -#: flwr.server.driver.driver.Driver.create_message:1 of -msgid "Create a new message with specified parameters." +#: flwr.common.Message.content:1 flwr.common.Message.content:1::1 +#: of +msgid "The content of this message." msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 of -msgid ":py:obj:`get_node_ids `\\ \\(\\)" +#: flwr.common.Message.content:1::1 of +msgid ":py:obj:`error `\\" msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 -#: flwr.server.driver.driver.Driver.get_node_ids:1 of -msgid "Get node IDs." +#: flwr.common.Message.content:1::1 flwr.common.Message.error:1 of +msgid "Error captured by this message." msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 of -msgid "" -":py:obj:`pull_messages `\\ " -"\\(message\\_ids\\)" +#: flwr.common.Message.content:1::1 of +msgid ":py:obj:`metadata `\\" msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 -#: flwr.server.driver.driver.Driver.pull_messages:1 of -msgid "Pull messages based on message IDs." 
+#: flwr.common.message.Message.create_error_reply:3 of +msgid "The error that was encountered." msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 of +#: flwr.common.message.Message.create_error_reply:5 +#: flwr.common.message.Message.create_reply:9 of msgid "" -":py:obj:`push_messages `\\ " -"\\(messages\\)" -msgstr "" - -#: flwr.server.driver.driver.Driver.close:1::1 -#: flwr.server.driver.driver.Driver.push_messages:1 of -msgid "Push messages to specified node IDs." +"Time-to-live for this message in seconds. If unset, it will be set based " +"on the remaining time for the received message before it expires. This " +"follows the equation: ttl = msg.meta.ttl - (reply.meta.created_at - " +"msg.meta.created_at)" msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 of +#: flwr.common.message.Message.create_error_reply:5 +#: flwr.common.message.Message.create_reply:9 of msgid "" -":py:obj:`send_and_receive `\\ " -"\\(messages\\, \\*\\[\\, timeout\\]\\)" +"Time-to-live for this message in seconds. If unset, it will be set based " +"on the remaining time for the received message before it expires. This " +"follows the equation:" msgstr "" -#: flwr.server.driver.driver.Driver.close:1::1 -#: flwr.server.driver.driver.Driver.send_and_receive:1 of -msgid "Push messages to specified node IDs and pull the reply messages." +#: flwr.common.message.Message.create_error_reply:9 +#: flwr.common.message.Message.create_reply:13 of +msgid "ttl = msg.meta.ttl - (reply.meta.created_at - msg.meta.created_at)" msgstr "" -#: flwr.server.driver.driver.Driver.create_message:3 of +#: flwr.common.message.Message.create_reply:3 of msgid "" -"This method constructs a new `Message` with given content and metadata. " -"The `run_id` and `src_node_id` will be set automatically." +"The method generates a new `Message` as a reply to this message. It " +"inherits 'run_id', 'src_node_id', 'dst_node_id', and 'message_type' from " +"this message and sets 'reply_to_message' to the ID of this message." msgstr "" -#: flwr.server.driver.driver.Driver.create_message:6 of -msgid "" -"The content for the new message. This holds records that are to be sent " -"to the destination node." +#: flwr.common.message.Message.create_reply:7 of +msgid "The content for the reply message." msgstr "" -#: flwr.server.driver.driver.Driver.create_message:9 of -msgid "" -"The type of the message, defining the action to be executed on the " -"receiving end." +#: flwr.common.message.Message.create_reply:16 of +msgid "A new `Message` instance representing the reply." msgstr "" -#: flwr.server.driver.driver.Driver.create_message:12 of -msgid "The ID of the destination node to which the message is being sent." +#: ../../source/ref-api/flwr.common.MessageType.rst:2 +msgid "MessageType" msgstr "" -#: flwr.server.driver.driver.Driver.create_message:14 of -msgid "" -"The ID of the group to which this message is associated. In some " -"settings, this is used as the FL round." +#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 +msgid ":py:obj:`EVALUATE `\\" msgstr "" -#: flwr.server.driver.driver.Driver.create_message:17 of -msgid "" -"Time-to-live for the round trip of this message, i.e., the time from " -"sending this message to receiving a reply. It specifies the duration for " -"which the message and its potential reply are considered valid." 
+#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 +msgid ":py:obj:`QUERY `\\" msgstr "" -#: flwr.server.driver.driver.Driver.create_message:22 of -msgid "" -"**message** -- A new `Message` instance with the specified content and " -"metadata." +#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 +msgid ":py:obj:`TRAIN `\\" msgstr "" -#: flwr.server.driver.driver.Driver.pull_messages:3 of -msgid "" -"This method is used to collect messages from the SuperLink that " -"correspond to a set of given message IDs." +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:2 +msgid "MessageTypeLegacy" msgstr "" -#: flwr.server.driver.driver.Driver.pull_messages:6 of -msgid "An iterable of message IDs for which reply messages are to be retrieved." +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 +msgid ":py:obj:`GET_PARAMETERS `\\" msgstr "" -#: flwr.server.driver.driver.Driver.pull_messages:9 of -msgid "**messages** -- An iterable of messages received." +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 +msgid ":py:obj:`GET_PROPERTIES `\\" msgstr "" -#: flwr.server.driver.driver.Driver.push_messages:3 of -msgid "" -"This method takes an iterable of messages and sends each message to the " -"node specified in `dst_node_id`." +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.run_id:1 flwr.common.message.Metadata:3 of +msgid "An identifier for the current run." msgstr "" -#: flwr.server.driver.driver.Driver.push_messages:6 -#: flwr.server.driver.driver.Driver.send_and_receive:7 of -msgid "An iterable of messages to be sent." +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.message_id:1 flwr.common.message.Metadata:5 of +msgid "An identifier for the current message." msgstr "" -#: flwr.server.driver.driver.Driver.push_messages:9 of -msgid "" -"**message_ids** -- An iterable of IDs for the messages that were sent, " -"which can be used to pull replies." +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.src_node_id:1 flwr.common.message.Metadata:7 of +msgid "An identifier for the node sending this message." msgstr "" -#: flwr.server.driver.driver.Driver.send_and_receive:3 of -msgid "" -"This method sends a list of messages to their destination node IDs and " -"then waits for the replies. It continues to pull replies until either all" -" replies are received or the specified timeout duration is exceeded." +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.dst_node_id:1 flwr.common.message.Metadata:9 of +msgid "An identifier for the node receiving this message." msgstr "" -#: flwr.server.driver.driver.Driver.send_and_receive:9 of +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.reply_to_message:1 flwr.common.message.Metadata:11 of +msgid "An identifier for the message this message replies to." +msgstr "" + +#: flwr.common.message.Metadata:13 of msgid "" -"The timeout duration in seconds. If specified, the method will wait for " -"replies for this duration. If `None`, there is no time limit and the " -"method will wait until replies for all messages are received." +"An identifier for grouping messages. In some settings, this is used as " +"the FL round." msgstr "" -#: flwr.server.driver.driver.Driver.send_and_receive:14 of -msgid "**replies** -- An iterable of reply messages received from the SuperLink." +#: flwr.common.message.Metadata:16 of +msgid "Time-to-live for this message in seconds." 
msgstr "" -#: flwr.server.driver.driver.Driver.send_and_receive:18 -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:53 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:60 -#: of -msgid "Notes" +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.message_type:1 flwr.common.message.Metadata:18 of +msgid "A string that encodes the action to be executed on the receiving end." msgstr "" -#: flwr.server.driver.driver.Driver.send_and_receive:19 of +#: flwr.common.message.Metadata:21 of msgid "" -"This method uses `push_messages` to send the messages and `pull_messages`" -" to collect the replies. If `timeout` is set, the method may not return " -"replies for all sent messages. A message remains valid until its TTL, " -"which is not affected by `timeout`." +"An identifier that can be used when loading a particular data partition " +"for a ClientApp. Making use of this identifier is more relevant when " +"conducting simulations." msgstr "" -#: ../../source/ref-api/flwr.server.History.rst:2 -msgid "History" +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`created_at `\\" msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 of -msgid "" -":py:obj:`add_loss_centralized " -"`\\ \\(server\\_round\\, " -"loss\\)" +#: flwr.common.Metadata.created_at:1 +#: flwr.common.Metadata.created_at:1::1 of +msgid "Unix timestamp when the message was created." msgstr "" -#: flwr.server.history.History.add_loss_centralized:1 -#: flwr.server.history.History.add_loss_centralized:1::1 of -msgid "Add one loss entry (from centralized evaluation)." +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`dst_node_id `\\" msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 of -msgid "" -":py:obj:`add_loss_distributed " -"`\\ \\(server\\_round\\, " -"loss\\)" +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`group_id `\\" msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 -#: flwr.server.history.History.add_loss_distributed:1 of -msgid "Add one loss entry (from distributed evaluation)." +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.group_id:1 of +msgid "An identifier for grouping messages." msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 of -msgid "" -":py:obj:`add_metrics_centralized " -"`\\ \\(server\\_round\\, " -"metrics\\)" +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`message_id `\\" msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 -#: flwr.server.history.History.add_metrics_centralized:1 of -msgid "Add metrics entries (from centralized evaluation)." +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`message_type `\\" msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 of -msgid "" -":py:obj:`add_metrics_distributed " -"`\\ \\(server\\_round\\, " -"metrics\\)" +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`partition_id `\\" msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 -#: flwr.server.history.History.add_metrics_distributed:1 of -msgid "Add metrics entries (from distributed evaluation)." +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.partition_id:1 of +msgid "An identifier telling which data partition a ClientApp should use." 
msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 of -msgid "" -":py:obj:`add_metrics_distributed_fit " -"`\\ \\(server\\_round\\," -" ...\\)" +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`reply_to_message `\\" msgstr "" -#: flwr.server.history.History.add_loss_centralized:1::1 -#: flwr.server.history.History.add_metrics_distributed_fit:1 of -msgid "Add metrics entries (from distributed fit)." +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`run_id `\\" msgstr "" -#: ../../source/ref-api/flwr.server.LegacyContext.rst:2 -msgid "LegacyContext" +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`src_node_id `\\" msgstr "" -#: flwr.server.compat.legacy_context.LegacyContext:1 of -msgid "Bases: :py:class:`~flwr.common.context.Context`" +#: flwr.common.Metadata.created_at:1::1 of +msgid ":py:obj:`ttl `\\" msgstr "" -#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 -msgid ":py:obj:`config `\\" +#: flwr.common.Metadata.created_at:1::1 flwr.common.Metadata.ttl:1 +#: of +msgid "Time-to-live for this message." msgstr "" -#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 -msgid ":py:obj:`strategy `\\" +#: ../../source/ref-api/flwr.common.MetricsRecord.rst:2 +msgid "MetricsRecord" msgstr "" -#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 -msgid ":py:obj:`client_manager `\\" +#: flwr.common.record.metricsrecord.MetricsRecord:1 of +msgid "" +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:class:`int` | :py:class:`float` | " +":py:class:`~typing.List`\\ [:py:class:`int`] | :py:class:`~typing.List`\\" +" [:py:class:`float`]]" msgstr "" -#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 -msgid ":py:obj:`history `\\" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`clear `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 -msgid ":py:obj:`state `\\" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`count_bytes `\\ \\(\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 of -msgid ":py:obj:`client_manager `\\ \\(\\)" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1 -#: flwr.server.server.Server.client_manager:1::1 of -msgid "Return ClientManager." +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`items `\\ \\(\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 of -msgid "" -":py:obj:`disconnect_all_clients " -"`\\ \\(timeout\\)" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`keys `\\ \\(\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.disconnect_all_clients:1 of -msgid "Send shutdown signal to all clients." +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 of +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of msgid "" -":py:obj:`evaluate_round `\\ " -"\\(server\\_round\\, timeout\\)" +":py:obj:`update `\\ \\(\\[E\\, " +"\\]\\*\\*F\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.evaluate_round:1 of -msgid "Validate current global model on a number of clients." 
+#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`values `\\ \\(\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 of -msgid ":py:obj:`fit `\\ \\(num\\_rounds\\, timeout\\)" +#: ../../source/ref-api/flwr.common.NDArray.rst:2 +msgid "NDArray" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.fit:1 of -msgid "Run federated averaging for a number of rounds." +#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 +msgid ":py:obj:`tensors `\\" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 of -msgid "" -":py:obj:`fit_round `\\ \\(server\\_round\\," -" timeout\\)" +#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 +msgid ":py:obj:`tensor_type `\\" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.fit_round:1 of -msgid "Perform a single round of federated averaging." +#: ../../source/ref-api/flwr.common.ParametersRecord.rst:2 +msgid "ParametersRecord" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 of +#: flwr.common.record.parametersrecord.ParametersRecord:1 of msgid "" -":py:obj:`set_max_workers `\\ " -"\\(max\\_workers\\)" +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:class:`~flwr.common.record.parametersrecord.Array`]" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.set_max_workers:1 of -msgid "Set the max_workers used by ThreadPoolExecutor." +#: flwr.common.record.parametersrecord.ParametersRecord:3 of +msgid "" +"A dataclass storing named Arrays in order. This means that it holds " +"entries as an OrderedDict[str, Array]. ParametersRecord objects can be " +"viewed as an equivalent to PyTorch's state_dict, but holding serialised " +"tensors instead." msgstr "" -#: flwr.server.server.Server.client_manager:1::1 of -msgid ":py:obj:`set_strategy `\\ \\(strategy\\)" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`clear `\\ \\(\\)" msgstr "" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.set_strategy:1 of -msgid "Replace server strategy." +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`count_bytes `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.ServerApp.rst:2 -msgid "ServerApp" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" msgstr "" -#: flwr.server.server_app.ServerApp:5 of -msgid "Use the `ServerApp` with an existing `Strategy`:" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`items `\\ \\(\\)" msgstr "" -#: flwr.server.server_app.ServerApp:15 of -msgid "Use the `ServerApp` with a custom main function:" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`keys `\\ \\(\\)" msgstr "" -#: flwr.server.server_app.ServerApp.main:1::1 of -msgid ":py:obj:`main `\\ \\(\\)" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" msgstr "" -#: flwr.server.server_app.ServerApp.main:1 -#: flwr.server.server_app.ServerApp.main:1::1 of -msgid "Return a decorator that registers the main fn with the server app." 
+#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid "" +":py:obj:`update `\\ \\(\\[E\\, " +"\\]\\*\\*F\\)" msgstr "" -#: ../../source/ref-api/flwr.server.ServerConfig.rst:2 -msgid "ServerConfig" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`values `\\ \\(\\)" msgstr "" -#: flwr.server.server_config.ServerConfig:3 of +#: flwr.common.record.parametersrecord.ParametersRecord.count_bytes:3 of msgid "" -"All attributes have default values which allows users to configure just " -"the ones they care about." +"Note that a small amount of Bytes might also be included in this counting" +" that correspond to metadata of the serialized object (e.g. of NumPy " +"array) needed for deseralization." msgstr "" -#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 -msgid ":py:obj:`num_rounds `\\" +#: ../../source/ref-api/flwr.common.ReconnectIns.rst:2 +msgid "ReconnectIns" msgstr "" -#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 -msgid ":py:obj:`round_timeout `\\" +#: ../../source/ref-api/flwr.common.ReconnectIns.rst:28::1 +msgid ":py:obj:`seconds `\\" msgstr "" -#: ../../source/ref-api/flwr.server.SimpleClientManager.rst:2 -msgid "SimpleClientManager" +#: ../../source/ref-api/flwr.common.RecordSet.rst:2 +msgid "RecordSet" msgstr "" -#: flwr.server.client_manager.SimpleClientManager:1 of -msgid "Bases: :py:class:`~flwr.server.client_manager.ClientManager`" +#: flwr.common.RecordSet.configs_records:1::1 of +msgid ":py:obj:`configs_records `\\" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid ":py:obj:`all `\\ \\(\\)" +#: flwr.common.RecordSet.configs_records:1 +#: flwr.common.RecordSet.configs_records:1::1 of +msgid "Dictionary holding ConfigsRecord instances." msgstr "" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "" -":py:obj:`num_available `\\" -" \\(\\)" +#: flwr.common.RecordSet.configs_records:1::1 of +msgid ":py:obj:`metrics_records `\\" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "" -":py:obj:`register `\\ " -"\\(client\\)" +#: flwr.common.RecordSet.configs_records:1::1 +#: flwr.common.RecordSet.metrics_records:1 of +msgid "Dictionary holding MetricsRecord instances." msgstr "" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "" -":py:obj:`sample `\\ " -"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" +#: flwr.common.RecordSet.configs_records:1::1 of +msgid ":py:obj:`parameters_records `\\" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "" -":py:obj:`unregister `\\ " -"\\(client\\)" +#: flwr.common.RecordSet.configs_records:1::1 +#: flwr.common.RecordSet.parameters_records:1 of +msgid "Dictionary holding ParametersRecord instances." msgstr "" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "" -":py:obj:`wait_for `\\ " -"\\(num\\_clients\\[\\, timeout\\]\\)" +#: ../../source/ref-api/flwr.common.ServerMessage.rst:2 +msgid "ServerMessage" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.wait_for:3 of -msgid "" -"Blocks until the requested number of clients is available or until a " -"timeout is reached. Current timeout default: 1 day." +#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +msgid ":py:obj:`evaluate_ins `\\" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.wait_for:6 of -msgid "The number of clients to wait for." 
+#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +msgid ":py:obj:`fit_ins `\\" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.wait_for:8 of -msgid "The time in seconds to wait for, defaults to 86400 (24h)." +#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +msgid "" +":py:obj:`get_parameters_ins " +"`\\" msgstr "" -#: flwr.server.client_manager.SimpleClientManager.wait_for:11 of -msgid "**success**" +#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +msgid "" +":py:obj:`get_properties_ins " +"`\\" msgstr "" -#: ../../source/ref-api/flwr.server.run_driver_api.rst:2 -msgid "run\\_driver\\_api" +#: ../../source/ref-api/flwr.common.Status.rst:2 +msgid "Status" msgstr "" -#: ../../source/ref-api/flwr.server.run_fleet_api.rst:2 -msgid "run\\_fleet\\_api" +#: ../../source/ref-api/flwr.common.Status.rst:29::1 +msgid ":py:obj:`code `\\" msgstr "" -#: ../../source/ref-api/flwr.server.run_server_app.rst:2 -msgid "run\\_server\\_app" +#: ../../source/ref-api/flwr.common.Status.rst:29::1 +msgid ":py:obj:`message `\\" msgstr "" -#: ../../source/ref-api/flwr.server.run_superlink.rst:2 -msgid "run\\_superlink" +#: ../../source/ref-api/flwr.common.array_from_numpy.rst:2 +msgid "array\\_from\\_numpy" msgstr "" -#: ../../source/ref-api/flwr.server.start_driver.rst:2 -msgid "start\\_driver" +#: ../../source/ref-api/flwr.common.bytes_to_ndarray.rst:2 +msgid "bytes\\_to\\_ndarray" msgstr "" -#: flwr.server.compat.app.start_driver:3 of -msgid "" -"The IPv4 or IPv6 address of the Driver API server. Defaults to " -"`\"[::]:8080\"`." +#: ../../source/ref-api/flwr.common.configure.rst:2 +msgid "configure" msgstr "" -#: flwr.server.compat.app.start_driver:6 of -msgid "" -"A server implementation, either `flwr.server.Server` or a subclass " -"thereof. If no instance is provided, then `start_driver` will create one." +#: ../../source/ref-api/flwr.common.event.rst:2 +msgid "event" msgstr "" -#: flwr.server.app.start_server:9 flwr.server.compat.app.start_driver:10 -#: flwr.simulation.app.start_simulation:28 of -msgid "" -"Currently supported values are `num_rounds` (int, default: 1) and " -"`round_timeout` in seconds (float, default: None)." +#: ../../source/ref-api/flwr.common.log.rst:2 +msgid "log" msgstr "" -#: flwr.server.app.start_server:12 flwr.server.compat.app.start_driver:13 of +#: logging.Logger.log:3 of msgid "" -"An implementation of the abstract base class " -"`flwr.server.strategy.Strategy`. If no strategy is provided, then " -"`start_server` will use `flwr.server.strategy.FedAvg`." +"To pass exception information, use the keyword argument exc_info with a " +"true value, e.g." msgstr "" -#: flwr.server.compat.app.start_driver:17 of -msgid "" -"An implementation of the class `flwr.server.ClientManager`. If no " -"implementation is provided, then `start_driver` will use " -"`flwr.server.SimpleClientManager`." +#: logging.Logger.log:6 of +#, python-format +msgid "logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)" msgstr "" -#: flwr.server.compat.app.start_driver:25 of -msgid "The Driver object to use." +#: ../../source/ref-api/flwr.common.ndarray_to_bytes.rst:2 +msgid "ndarray\\_to\\_bytes" msgstr "" -#: flwr.server.app.start_server:37 flwr.server.compat.app.start_driver:28 of -msgid "**hist** -- Object containing training and evaluation metrics." 
+#: ../../source/ref-api/flwr.common.ndarrays_to_parameters.rst:2 +msgid "ndarrays\\_to\\_parameters" msgstr "" -#: flwr.server.compat.app.start_driver:33 of -msgid "Starting a driver that connects to an insecure server:" +#: ../../source/ref-api/flwr.common.now.rst:2 +msgid "now" msgstr "" -#: flwr.server.compat.app.start_driver:37 of -msgid "Starting a driver that connects to an SSL-enabled server:" +#: ../../source/ref-api/flwr.common.parameters_to_ndarrays.rst:2 +msgid "parameters\\_to\\_ndarrays" msgstr "" -#: ../../source/ref-api/flwr.server.start_server.rst:2 -msgid "start\\_server" +#: ../../source/ref-api/flwr.server.rst:2 +msgid "server" msgstr "" -#: flwr.server.app.start_server:3 of -msgid "The IPv4 or IPv6 address of the server. Defaults to `\"[::]:8080\"`." +#: ../../source/ref-api/flwr.server.rst:26::1 +msgid ":py:obj:`run_driver_api `\\ \\(\\)" msgstr "" -#: flwr.server.app.start_server:5 of -msgid "" -"A server implementation, either `flwr.server.Server` or a subclass " -"thereof. If no instance is provided, then `start_server` will create one." -msgstr "" - -#: flwr.server.app.start_server:16 of -msgid "" -"An implementation of the abstract base class `flwr.server.ClientManager`." -" If no implementation is provided, then `start_server` will use " -"`flwr.server.client_manager.SimpleClientManager`." +#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.server.app.run_driver_api:1 of +msgid "Run Flower server (Driver API)." msgstr "" -#: flwr.server.app.start_server:21 of -msgid "" -"The maximum length of gRPC messages that can be exchanged with the Flower" -" clients. The default should be sufficient for most models. Users who " -"train very large models might need to increase this value. Note that the " -"Flower clients need to be started with the same value (see " -"`flwr.client.start_client`), otherwise clients will not know about the " -"increased limit and block larger messages." +#: ../../source/ref-api/flwr.server.rst:26::1 +msgid ":py:obj:`run_fleet_api `\\ \\(\\)" msgstr "" -#: flwr.server.app.start_server:42 of -msgid "Starting an insecure server:" +#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.server.app.run_fleet_api:1 of +msgid "Run Flower server (Fleet API)." msgstr "" -#: flwr.server.app.start_server:46 of -msgid "Starting an SSL-enabled server:" +#: ../../source/ref-api/flwr.server.rst:26::1 +msgid ":py:obj:`run_server_app `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:2 -msgid "strategy" +#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.server.run_serverapp.run_server_app:1 of +msgid "Run Flower server app." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`Bulyan `\\ \\(\\*\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\)" +#: ../../source/ref-api/flwr.server.rst:26::1 +msgid ":py:obj:`run_superlink `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.bulyan.Bulyan:1 of -msgid "Bulyan strategy." +#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.server.app.run_superlink:1 of +msgid "Run Flower SuperLink (Driver API and Fleet API)." 
msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: ../../source/ref-api/flwr.server.rst:26::1 msgid "" -":py:obj:`DPFedAvgAdaptive `\\ " -"\\(strategy\\, num\\_sampled\\_clients\\)" +":py:obj:`start_server `\\ \\(\\*\\[\\, " +"server\\_address\\, server\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of -msgid "Wrapper for configuring a Strategy for DP with Adaptive Clipping." +#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.server.app.start_server:1 of +msgid "Start a Flower server using the gRPC transport layer." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`DPFedAvgFixed `\\ " -"\\(strategy\\, num\\_sampled\\_clients\\, ...\\)" +#: ../../source/ref-api/flwr.server.rst:40::1 +msgid ":py:obj:`ClientManager `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 of -msgid "Wrapper for configuring a Strategy for DP with Fixed Clipping." +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.client_manager.ClientManager:1 of +msgid "Abstract base class for managing Flower clients." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`DifferentialPrivacyClientSideAdaptiveClipping " -"`\\ " -"\\(...\\)" +#: ../../source/ref-api/flwr.server.rst:40::1 +msgid ":py:obj:`Driver `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:1 -#: of -msgid "Strategy wrapper for central DP with client-side adaptive clipping." +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.driver.driver.Driver:1 of +msgid "Abstract base Driver class for the Driver API." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`DifferentialPrivacyServerSideAdaptiveClipping " -"`\\ " -"\\(...\\)" +#: ../../source/ref-api/flwr.server.rst:40::1 +msgid ":py:obj:`History `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:1 -#: of -msgid "Strategy wrapper for central DP with server-side adaptive clipping." +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.history.History:1 of +msgid "History class for training and/or evaluation metrics collection." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: ../../source/ref-api/flwr.server.rst:40::1 msgid "" -":py:obj:`DifferentialPrivacyClientSideFixedClipping " -"`\\ " -"\\(...\\)" +":py:obj:`LegacyContext `\\ \\(state\\[\\, " +"config\\, strategy\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:1 -#: of -msgid "Strategy wrapper for central DP with client-side fixed clipping." +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.compat.legacy_context.LegacyContext:1 of +msgid "Legacy Context." 
msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: ../../source/ref-api/flwr.server.rst:40::1 msgid "" -":py:obj:`DifferentialPrivacyServerSideFixedClipping " -"`\\ " -"\\(...\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:1 -#: of -msgid "Strategy wrapper for central DP with server-side fixed clipping." +":py:obj:`Server `\\ \\(\\*\\, client\\_manager\\[\\, " +"strategy\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: ../../source/ref-api/flwr.server.rst:40::1 msgid "" -":py:obj:`FedAdagrad `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +":py:obj:`ServerApp `\\ \\(\\[server\\, config\\, " +"strategy\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedadagrad.FedAdagrad:1 of -msgid "FedAdagrad strategy - Adaptive Federated Optimization using Adagrad." +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.server_app.ServerApp:1 of +msgid "Flower ServerApp." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: ../../source/ref-api/flwr.server.rst:40::1 msgid "" -":py:obj:`FedAdam `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +":py:obj:`ServerConfig `\\ \\(\\[num\\_rounds\\," +" round\\_timeout\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedadam.FedAdam:1 of -msgid "FedAdam - Adaptive Federated Optimization using Adam." +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.server_config.ServerConfig:1 of +msgid "Flower server config." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`FedAvg `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" +#: ../../source/ref-api/flwr.server.rst:40::1 +msgid ":py:obj:`SimpleClientManager `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedavg.FedAvg:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:1 of -msgid "Federated Averaging strategy." +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.client_manager.SimpleClientManager:1 of +msgid "Provides a pool of available clients." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`FedAvgAndroid `\\ " -"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" +#: ../../source/ref-api/flwr.server.rst:59::1 +msgid ":py:obj:`flwr.server.strategy `\\" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`FedAvgM `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +#: ../../source/ref-api/flwr.server.rst:59::1 +#: flwr.server.strategy:1 of +msgid "Contains the strategy abstraction and different implementations." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedavgm.FedAvgM:1 of -msgid "Federated Averaging with Momentum strategy." +#: ../../source/ref-api/flwr.server.rst:59::1 +msgid ":py:obj:`flwr.server.workflow `\\" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`FedMedian `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +#: ../../source/ref-api/flwr.server.rst:59::1 +#: flwr.server.workflow:1 of +msgid "Workflows." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedmedian.FedMedian:1 of -msgid "Configurable FedMedian strategy implementation." 
+#: ../../source/ref-api/flwr.server.ClientManager.rst:2 +msgid "ClientManager" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`FedOpt `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" +#: flwr.server.client_manager.ClientManager.all:1::1 of +msgid ":py:obj:`all `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedopt.FedOpt:1 of -msgid "Federated Optim strategy." +#: flwr.server.client_manager.ClientManager.all:1 +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.all:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid "Return all available clients." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`FedProx `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +#: flwr.server.client_manager.ClientManager.all:1::1 of +msgid ":py:obj:`num_available `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedprox.FedProx:1 of -msgid "Federated Optimization strategy." +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.num_available:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.num_available:1 of +msgid "Return the number of available clients." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`FedTrimmedAvg `\\ " -"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" +#: flwr.server.client_manager.ClientManager.all:1::1 of +msgid ":py:obj:`register `\\ \\(client\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:1 of -msgid "Federated Averaging with Trimmed Mean [Dong Yin, et al., 2021]." +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.register:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.register:1 of +msgid "Register Flower ClientProxy instance." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.client_manager.ClientManager.all:1::1 of msgid "" -":py:obj:`FedXgbBagging `\\ " -"\\(\\[evaluate\\_function\\]\\)" +":py:obj:`sample `\\ " +"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging:1 of -msgid "Configurable FedXgbBagging strategy implementation." +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.sample:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.sample:1 of +msgid "Sample a number of Flower ClientProxy instances." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`FedXgbCyclic `\\ " -"\\(\\*\\*kwargs\\)" +#: flwr.server.client_manager.ClientManager.all:1::1 of +msgid ":py:obj:`unregister `\\ \\(client\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic:1 of -msgid "Configurable FedXgbCyclic strategy implementation." 
+#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.unregister:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.unregister:1 of +msgid "Unregister Flower ClientProxy instance." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.client_manager.ClientManager.all:1::1 of msgid "" -":py:obj:`FedXgbNnAvg `\\ \\(\\*args\\, " -"\\*\\*kwargs\\)" +":py:obj:`wait_for `\\ " +"\\(num\\_clients\\, timeout\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:1 of -msgid "Configurable FedXgbNnAvg strategy implementation." +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.wait_for:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.wait_for:1 of +msgid "Wait until at least `num_clients` are available." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`FedYogi `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +#: flwr.server.client_manager.ClientManager.num_available:3 +#: flwr.server.client_manager.SimpleClientManager.num_available:3 of +msgid "**num_available** -- The number of currently available clients." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedyogi.FedYogi:1 of -msgid "FedYogi [Reddi et al., 2020] strategy." +#: flwr.server.client_manager.ClientManager.register:6 +#: flwr.server.client_manager.SimpleClientManager.register:6 of +msgid "" +"**success** -- Indicating if registration was successful. False if " +"ClientProxy is already registered or can not be registered for any " +"reason." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`FaultTolerantFedAvg " -"`\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +#: flwr.server.client_manager.ClientManager.unregister:3 +#: flwr.server.client_manager.SimpleClientManager.unregister:3 of +msgid "This method is idempotent." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg:1 of -msgid "Configurable fault-tolerant FedAvg strategy implementation." +#: ../../source/ref-api/flwr.server.Driver.rst:2 +msgid "Driver" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:1::1 of msgid "" -":py:obj:`Krum `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" +":py:obj:`create_message `\\ " +"\\(content\\, message\\_type\\, ...\\[\\, ttl\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.krum.Krum:1 of -msgid "Krum [Blanchard et al., 2017] strategy." +#: flwr.server.driver.driver.Driver.create_message:1 +#: flwr.server.driver.driver.Driver.create_message:1::1 of +msgid "Create a new message with specified parameters." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid "" -":py:obj:`QFedAvg `\\ \\(\\*\\[\\, " -"q\\_param\\, qffl\\_learning\\_rate\\, ...\\]\\)" +#: flwr.server.driver.driver.Driver.create_message:1::1 of +msgid ":py:obj:`get_node_ids `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.qfedavg.QFedAvg:1 of -msgid "Configurable QFedAvg strategy implementation." 
+#: flwr.server.driver.driver.Driver.create_message:1::1 +#: flwr.server.driver.driver.Driver.get_node_ids:1 of +msgid "Get node IDs." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -msgid ":py:obj:`Strategy `\\ \\(\\)" +#: flwr.server.driver.driver.Driver.create_message:1::1 of +msgid "" +":py:obj:`pull_messages `\\ " +"\\(message\\_ids\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.strategy.Strategy:1 of -msgid "Abstract base class for server strategy implementations." +#: flwr.server.driver.driver.Driver.create_message:1::1 +#: flwr.server.driver.driver.Driver.pull_messages:1 of +msgid "Pull messages based on message IDs." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.Bulyan.rst:2 -msgid "Bulyan" +#: flwr.server.driver.driver.Driver.create_message:1::1 of +msgid "" +":py:obj:`push_messages `\\ " +"\\(messages\\)" msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:1 -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg:1 -#: flwr.server.strategy.fedavgm.FedAvgM:1 -#: flwr.server.strategy.fedmedian.FedMedian:1 -#: flwr.server.strategy.fedopt.FedOpt:1 flwr.server.strategy.fedprox.FedProx:1 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging:1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic:1 -#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:1 -#: flwr.server.strategy.krum.Krum:1 flwr.server.strategy.qfedavg.QFedAvg:1 of -msgid "Bases: :py:class:`~flwr.server.strategy.fedavg.FedAvg`" +#: flwr.server.driver.driver.Driver.create_message:1::1 +#: flwr.server.driver.driver.Driver.push_messages:1 of +msgid "Push messages to specified node IDs." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:3 of -msgid "Implementation based on https://arxiv.org/abs/1802.07927." +#: flwr.server.driver.driver.Driver.create_message:1::1 of +msgid "" +":py:obj:`send_and_receive `\\ " +"\\(messages\\, \\*\\[\\, timeout\\]\\)" msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:5 -#: flwr.server.strategy.fedadagrad.FedAdagrad:5 -#: flwr.server.strategy.fedadam.FedAdam:5 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:5 -#: flwr.server.strategy.fedavgm.FedAvgM:5 flwr.server.strategy.fedopt.FedOpt:5 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:5 -#: flwr.server.strategy.fedyogi.FedYogi:5 flwr.server.strategy.krum.Krum:5 of -msgid "Fraction of clients used during training. Defaults to 1.0." +#: flwr.server.driver.driver.Driver.create_message:1::1 +#: flwr.server.driver.driver.Driver.send_and_receive:1 of +msgid "Push messages to specified node IDs and pull the reply messages." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:7 -#: flwr.server.strategy.fedadagrad.FedAdagrad:7 -#: flwr.server.strategy.fedadam.FedAdam:7 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:7 -#: flwr.server.strategy.fedavgm.FedAvgM:7 flwr.server.strategy.fedopt.FedOpt:7 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:7 -#: flwr.server.strategy.fedyogi.FedYogi:7 flwr.server.strategy.krum.Krum:7 of -msgid "Fraction of clients used during validation. Defaults to 1.0." +#: flwr.server.driver.driver.Driver.create_message:3 of +msgid "" +"This method constructs a new `Message` with given content and metadata. " +"The `run_id` and `src_node_id` will be set automatically." 
msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:9 -#: flwr.server.strategy.fedadagrad.FedAdagrad:9 -#: flwr.server.strategy.fedadam.FedAdam:9 flwr.server.strategy.fedavg.FedAvg:13 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:9 -#: flwr.server.strategy.fedavgm.FedAvgM:9 flwr.server.strategy.fedopt.FedOpt:9 -#: flwr.server.strategy.fedprox.FedProx:45 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:9 -#: flwr.server.strategy.fedyogi.FedYogi:9 flwr.server.strategy.krum.Krum:9 of -msgid "Minimum number of clients used during training. Defaults to 2." +#: flwr.server.driver.driver.Driver.create_message:6 of +msgid "" +"The content for the new message. This holds records that are to be sent " +"to the destination node." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:11 -#: flwr.server.strategy.fedadagrad.FedAdagrad:11 -#: flwr.server.strategy.fedadam.FedAdam:11 -#: flwr.server.strategy.fedavg.FedAvg:15 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:11 -#: flwr.server.strategy.fedavgm.FedAvgM:11 -#: flwr.server.strategy.fedopt.FedOpt:11 -#: flwr.server.strategy.fedprox.FedProx:47 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:11 -#: flwr.server.strategy.fedyogi.FedYogi:11 flwr.server.strategy.krum.Krum:11 of -msgid "Minimum number of clients used during validation. Defaults to 2." +#: flwr.server.driver.driver.Driver.create_message:9 of +msgid "" +"The type of the message, defining the action to be executed on the " +"receiving end." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:13 -#: flwr.server.strategy.fedadagrad.FedAdagrad:13 -#: flwr.server.strategy.fedadam.FedAdam:13 -#: flwr.server.strategy.fedavg.FedAvg:17 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:13 -#: flwr.server.strategy.fedavgm.FedAvgM:13 -#: flwr.server.strategy.fedopt.FedOpt:13 -#: flwr.server.strategy.fedprox.FedProx:49 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:13 -#: flwr.server.strategy.fedyogi.FedYogi:13 flwr.server.strategy.krum.Krum:13 of -msgid "Minimum number of total clients in the system. Defaults to 2." +#: flwr.server.driver.driver.Driver.create_message:12 of +msgid "The ID of the destination node to which the message is being sent." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:15 flwr.server.strategy.krum.Krum:15 of -msgid "Number of malicious clients in the system. Defaults to 0." +#: flwr.server.driver.driver.Driver.create_message:14 of +msgid "" +"The ID of the group to which this message is associated. In some " +"settings, this is used as the FL round." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:17 -#: flwr.server.strategy.fedadagrad.FedAdagrad:15 -#: flwr.server.strategy.fedadam.FedAdam:15 -#: flwr.server.strategy.fedavg.FedAvg:19 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:15 -#: flwr.server.strategy.fedavgm.FedAvgM:15 -#: flwr.server.strategy.fedopt.FedOpt:15 -#: flwr.server.strategy.fedprox.FedProx:51 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:15 -#: flwr.server.strategy.fedyogi.FedYogi:17 -#: flwr.server.strategy.fedyogi.FedYogi:18 -#: flwr.server.strategy.fedyogi.FedYogi:19 flwr.server.strategy.krum.Krum:20 of -msgid "Optional function used for validation. Defaults to None." +#: flwr.server.driver.driver.Driver.create_message:17 of +msgid "" +"Time-to-live for the round trip of this message, i.e., the time from " +"sending this message to receiving a reply. It specifies in seconds the " +"duration for which the message and its potential reply are considered " +"valid. 
If unset, the default TTL (i.e., `common.DEFAULT_TTL`) will be " +"used." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:19 -#: flwr.server.strategy.fedadagrad.FedAdagrad:17 -#: flwr.server.strategy.fedadam.FedAdam:17 -#: flwr.server.strategy.fedavg.FedAvg:21 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:17 -#: flwr.server.strategy.fedavgm.FedAvgM:17 -#: flwr.server.strategy.fedopt.FedOpt:17 -#: flwr.server.strategy.fedprox.FedProx:53 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:17 -#: flwr.server.strategy.fedyogi.FedYogi:20 flwr.server.strategy.krum.Krum:22 of -msgid "Function used to configure training. Defaults to None." +#: flwr.server.driver.driver.Driver.create_message:23 of +msgid "" +"**message** -- A new `Message` instance with the specified content and " +"metadata." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:21 -#: flwr.server.strategy.fedadagrad.FedAdagrad:19 -#: flwr.server.strategy.fedadam.FedAdam:19 -#: flwr.server.strategy.fedavg.FedAvg:23 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:19 -#: flwr.server.strategy.fedavgm.FedAvgM:19 -#: flwr.server.strategy.fedopt.FedOpt:19 -#: flwr.server.strategy.fedprox.FedProx:55 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:19 -#: flwr.server.strategy.fedyogi.FedYogi:22 flwr.server.strategy.krum.Krum:24 of -msgid "Function used to configure validation. Defaults to None." +#: flwr.server.driver.driver.Driver.pull_messages:3 of +msgid "" +"This method is used to collect messages from the SuperLink that " +"correspond to a set of given message IDs." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:23 -#: flwr.server.strategy.fedadagrad.FedAdagrad:25 -#: flwr.server.strategy.fedadam.FedAdam:21 -#: flwr.server.strategy.fedavg.FedAvg:25 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:21 -#: flwr.server.strategy.fedavgm.FedAvgM:21 -#: flwr.server.strategy.fedopt.FedOpt:21 -#: flwr.server.strategy.fedprox.FedProx:57 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:21 -#: flwr.server.strategy.fedyogi.FedYogi:24 flwr.server.strategy.krum.Krum:26 of -msgid "Whether or not accept rounds containing failures. Defaults to True." +#: flwr.server.driver.driver.Driver.pull_messages:6 of +msgid "An iterable of message IDs for which reply messages are to be retrieved." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:25 -#: flwr.server.strategy.fedadagrad.FedAdagrad:27 -#: flwr.server.strategy.fedadam.FedAdam:23 -#: flwr.server.strategy.fedavg.FedAvg:27 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:24 -#: flwr.server.strategy.fedavgm.FedAvgM:23 -#: flwr.server.strategy.fedopt.FedOpt:23 -#: flwr.server.strategy.fedprox.FedProx:59 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:23 -#: flwr.server.strategy.fedyogi.FedYogi:26 flwr.server.strategy.krum.Krum:28 of -msgid "Initial global model parameters." +#: flwr.server.driver.driver.Driver.pull_messages:9 of +msgid "**messages** -- An iterable of messages received." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:27 of +#: flwr.server.driver.driver.Driver.push_messages:3 of msgid "" -"Byzantine resilient aggregation rule that is used as the first step of " -"the Bulyan (e.g., Krum)" +"This method takes an iterable of messages and sends each message to the " +"node specified in `dst_node_id`." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan:29 of -msgid "arguments to the first_aggregation rule" +#: flwr.server.driver.driver.Driver.push_messages:6 +#: flwr.server.driver.driver.Driver.send_and_receive:7 of +msgid "An iterable of messages to be sent." 
msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.driver.driver.Driver.push_messages:9 of msgid "" -":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\, " -"results\\, ...\\)" +"**message_ids** -- An iterable of IDs for the messages that were sent, " +"which can be used to pull replies." msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1 -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of -msgid "Aggregate evaluation losses using weighted average." +#: flwr.server.driver.driver.Driver.send_and_receive:3 of +msgid "" +"This method sends a list of messages to their destination node IDs and " +"then waits for the replies. It continues to pull replies until either all" +" replies are received or the specified timeout duration is exceeded." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.driver.driver.Driver.send_and_receive:9 of msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +"The timeout duration in seconds. If specified, the method will wait for " +"replies for this duration. If `None`, there is no time limit and the " +"method will wait until replies for all messages are received." msgstr "" -#: flwr.server.strategy.bulyan.Bulyan.aggregate_fit:1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "Aggregate fit results using Bulyan." +#: flwr.server.driver.driver.Driver.send_and_receive:14 of +msgid "**replies** -- An iterable of reply messages received from the SuperLink." 
msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\, " -"parameters\\, ...\\)" +#: flwr.server.driver.driver.Driver.send_and_receive:18 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:53 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:60 +#: of +msgid "Notes" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.configure_evaluate:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.configure_evaluate:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.configure_evaluate:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.configure_evaluate:1 -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.configure_evaluate:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.configure_evaluate:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.configure_evaluate:1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.configure_evaluate:1 -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: flwr.server.strategy.strategy.Strategy.configure_evaluate:1 of -msgid "Configure the next round of evaluation." +#: flwr.server.driver.driver.Driver.send_and_receive:19 of +msgid "" +"This method uses `push_messages` to send the messages and `pull_messages`" +" to collect the replies. If `timeout` is set, the method may not return " +"replies for all sent messages. A message remains valid until its TTL, " +"which is not affected by `timeout`." 
msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: ../../source/ref-api/flwr.server.History.rst:2 +msgid "History" +msgstr "" + +#: flwr.server.history.History.add_loss_centralized:1::1 of msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`add_loss_centralized " +"`\\ \\(server\\_round\\, " +"loss\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.configure_fit:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.configure_fit:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.configure_fit:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.configure_fit:1 -#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive.configure_fit:1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.configure_fit:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.configure_fit:1 -#: flwr.server.strategy.fedprox.FedProx.configure_fit:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.configure_fit:1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.configure_fit:1 -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: flwr.server.strategy.strategy.Strategy.configure_fit:1 of -msgid "Configure the next round of training." +#: flwr.server.history.History.add_loss_centralized:1 +#: flwr.server.history.History.add_loss_centralized:1::1 of +msgid "Add one loss entry (from centralized evaluation)." 
msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.history.History.add_loss_centralized:1::1 of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`add_loss_distributed " +"`\\ \\(server\\_round\\, " +"loss\\)" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.evaluate:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.evaluate:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.evaluate:1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg.evaluate:1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of -msgid "Evaluate model parameters using an evaluation function." +#: flwr.server.history.History.add_loss_centralized:1::1 +#: flwr.server.history.History.add_loss_distributed:1 of +msgid "Add one loss entry (from distributed evaluation)." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.history.History.add_loss_centralized:1::1 of msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +":py:obj:`add_metrics_centralized " +"`\\ \\(server\\_round\\, " +"metrics\\)" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.initialize_parameters:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.initialize_parameters:1 -#: flwr.server.strategy.fedavgm.FedAvgM.initialize_parameters:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of -msgid "Initialize global model parameters." +#: flwr.server.history.History.add_loss_centralized:1::1 +#: flwr.server.history.History.add_metrics_centralized:1 of +msgid "Add metrics entries (from centralized evaluation)." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.history.History.add_loss_centralized:1::1 of msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`add_metrics_distributed " +"`\\ \\(server\\_round\\, " +"metrics\\)" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.num_evaluation_clients:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.num_evaluation_clients:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.num_evaluation_clients:1 of -msgid "Use a fraction of available clients for evaluation." 
+#: flwr.server.history.History.add_loss_centralized:1::1 +#: flwr.server.history.History.add_metrics_distributed:1 of +msgid "Add metrics entries (from distributed evaluation)." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.history.History.add_loss_centralized:1::1 of msgid "" -":py:obj:`num_fit_clients `\\" -" \\(num\\_available\\_clients\\)" +":py:obj:`add_metrics_distributed_fit " +"`\\ \\(server\\_round\\," +" ...\\)" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.num_fit_clients:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.num_fit_clients:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.num_fit_clients:1 of -msgid "Return the sample size and the required number of available clients." +#: flwr.server.history.History.add_loss_centralized:1::1 +#: flwr.server.history.History.add_metrics_distributed_fit:1 of +msgid "Add metrics entries (from distributed fit)." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.DPFedAvgAdaptive.rst:2 -msgid "DPFedAvgAdaptive" +#: ../../source/ref-api/flwr.server.LegacyContext.rst:2 +msgid "LegacyContext" msgstr "" -#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of -msgid "Bases: :py:class:`~flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed`" +#: flwr.server.compat.legacy_context.LegacyContext:1 of +msgid "Bases: :py:class:`~flwr.common.context.Context`" msgstr "" -#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:3 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:3 of -msgid "This class is deprecated and will be removed in a future release." +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +msgid ":py:obj:`config `\\" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +msgid ":py:obj:`strategy `\\" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "Aggregate evaluation losses using the given strategy." 
+#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +msgid ":py:obj:`client_manager `\\" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_fit " -"`\\ " -"\\(server\\_round\\, results\\, failures\\)" +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +msgid ":py:obj:`history `\\" msgstr "" -#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive.aggregate_fit:1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "Aggregate training results as in DPFedAvgFixed and update clip norms." +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +msgid ":py:obj:`state `\\" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: ../../source/ref-api/flwr.server.Server.rst:2 +msgid "Server" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:1 of -msgid "Configure the next round of evaluation using the specified strategy." +#: flwr.server.server.Server.client_manager:1::1 of +msgid ":py:obj:`client_manager `\\ \\(\\)" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_fit " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: flwr.server.server.Server.client_manager:1 +#: flwr.server.server.Server.client_manager:1::1 of +msgid "Return ClientManager." msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of +#: flwr.server.server.Server.client_manager:1::1 of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`disconnect_all_clients " +"`\\ \\(timeout\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.evaluate:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.evaluate:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.evaluate:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.evaluate:1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.evaluate:1 of -msgid "Evaluate model parameters using an evaluation function from the strategy." +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.disconnect_all_clients:1 of +msgid "Send shutdown signal to all clients." 
msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of +#: flwr.server.server.Server.client_manager:1::1 of msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +":py:obj:`evaluate_round `\\ " +"\\(server\\_round\\, timeout\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.initialize_parameters:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.initialize_parameters:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.initialize_parameters:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.initialize_parameters:1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.initialize_parameters:1 of -msgid "Initialize global model parameters using given strategy." +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.evaluate_round:1 of +msgid "Validate current global model on a number of clients." msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:3 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:6 -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:3 -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:3 -#: flwr.server.strategy.strategy.Strategy.configure_evaluate:3 -#: flwr.server.strategy.strategy.Strategy.configure_fit:3 -#: flwr.server.strategy.strategy.Strategy.evaluate:6 of -msgid "The current round of federated learning." +#: flwr.server.server.Server.client_manager:1::1 of +msgid ":py:obj:`fit `\\ \\(num\\_rounds\\, timeout\\)" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:7 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:10 -#: flwr.server.strategy.strategy.Strategy.configure_evaluate:7 -#: flwr.server.strategy.strategy.Strategy.configure_fit:7 -#: flwr.server.strategy.strategy.Strategy.initialize_parameters:3 of -msgid "The client manager which holds all currently connected clients." +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.fit:1 of +msgid "Run federated averaging for a number of rounds." msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:10 -#: flwr.server.strategy.strategy.Strategy.configure_evaluate:10 of +#: flwr.server.server.Server.client_manager:1::1 of msgid "" -"**evaluate_configuration** -- A list of tuples. Each tuple in the list " -"identifies a `ClientProxy` and the `EvaluateIns` for this particular " -"`ClientProxy`. If a particular `ClientProxy` is not included in this " -"list, it means that this `ClientProxy` will not participate in the next " -"round of federated evaluation." 
-msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.DPFedAvgFixed.rst:2 -msgid "DPFedAvgFixed" +":py:obj:`fit_round `\\ \\(server\\_round\\," +" timeout\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 -#: flwr.server.strategy.fedavg.FedAvg:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:1 of -msgid "Bases: :py:class:`~flwr.server.strategy.strategy.Strategy`" +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.fit_round:1 of +msgid "Perform a single round of federated averaging." msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of +#: flwr.server.server.Server.client_manager:1::1 of msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +":py:obj:`set_max_workers `\\ " +"\\(max\\_workers\\)" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_fit " -"`\\ " -"\\(server\\_round\\, results\\, failures\\)" +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.set_max_workers:1 of +msgid "Set the max_workers used by ThreadPoolExecutor." msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_fit:1 of -msgid "Aggregate training results using unweighted aggregation." +#: flwr.server.server.Server.client_manager:1::1 of +msgid ":py:obj:`set_strategy `\\ \\(strategy\\)" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.set_strategy:1 of +msgid "Replace server strategy." msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_fit " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: ../../source/ref-api/flwr.server.ServerApp.rst:2 +msgid "ServerApp" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:1 of -msgid "" -"Configure the next round of training incorporating Differential Privacy " -"(DP)." 
+#: flwr.server.server_app.ServerApp:5 of +msgid "Use the `ServerApp` with an existing `Strategy`:" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +#: flwr.server.server_app.ServerApp:15 of +msgid "Use the `ServerApp` with a custom main function:" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +#: flwr.server.server_app.ServerApp.main:1::1 of +msgid ":py:obj:`main `\\ \\(\\)" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:3 of -msgid "" -"Configuration of the next training round includes information related to " -"DP, such as clip norm and noise stddev." +#: flwr.server.server_app.ServerApp.main:1 +#: flwr.server.server_app.ServerApp.main:1::1 of +msgid "Return a decorator that registers the main fn with the server app." msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:13 -#: flwr.server.strategy.strategy.Strategy.configure_fit:10 of -msgid "" -"**fit_configuration** -- A list of tuples. Each tuple in the list " -"identifies a `ClientProxy` and the `FitIns` for this particular " -"`ClientProxy`. If a particular `ClientProxy` is not included in this " -"list, it means that this `ClientProxy` will not participate in the next " -"round of federated learning." +#: ../../source/ref-api/flwr.server.ServerConfig.rst:2 +msgid "ServerConfig" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideAdaptiveClipping.rst:2 -msgid "DifferentialPrivacyClientSideAdaptiveClipping" +#: flwr.server.server_config.ServerConfig:3 of +msgid "" +"All attributes have default values which allows users to configure just " +"the ones they care about." msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:3 -#: of -msgid "Use `adaptiveclipping_mod` modifier at the client side." +#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 +msgid ":py:obj:`num_rounds `\\" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:5 -#: of -msgid "" -"In comparison to `DifferentialPrivacyServerSideAdaptiveClipping`, which " -"performs clipping on the server-side, " -"`DifferentialPrivacyClientSideAdaptiveClipping` expects clipping to " -"happen on the client-side, usually by using the built-in " -"`adaptiveclipping_mod`." +#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 +msgid ":py:obj:`round_timeout `\\" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:10 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:3 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:10 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:3 -#: of -msgid "The strategy to which DP functionalities will be added by this wrapper." +#: ../../source/ref-api/flwr.server.SimpleClientManager.rst:2 +msgid "SimpleClientManager" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:12 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:5 -#: of -msgid "The noise multiplier for the Gaussian mechanism for model updates." 
+#: flwr.server.client_manager.SimpleClientManager:1 of +msgid "Bases: :py:class:`~flwr.server.client_manager.ClientManager`" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:14 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:7 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:17 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:10 -#: of -msgid "The number of clients that are sampled on each round." +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid ":py:obj:`all `\\ \\(\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:16 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:9 -#: of +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of msgid "" -"The initial value of clipping norm. Defaults to 0.1. Andrew et al. " -"recommends to set to 0.1." +":py:obj:`num_available `\\" +" \\(\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:19 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:12 -#: of -msgid "The desired quantile of updates which should be clipped. Defaults to 0.5." +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid "" +":py:obj:`register `\\ " +"\\(client\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:21 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:14 -#: of +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of msgid "" -"The learning rate for the clipping norm adaptation. Defaults to 0.2. " -"Andrew et al. recommends to set to 0.2." +":py:obj:`sample `\\ " +"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:24 -#: of +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of msgid "" -"The stddev of the noise added to the count of updates currently below the" -" estimate. Andrew et al. recommends to set to `expected_num_records/20`" +":py:obj:`unregister `\\ " +"\\(client\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:30 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:23 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:22 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:15 -#: of -msgid "Create a strategy:" +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid "" +":py:obj:`wait_for `\\ " +"\\(num\\_clients\\[\\, timeout\\]\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:34 -#: of +#: flwr.server.client_manager.SimpleClientManager.wait_for:3 of msgid "" -"Wrap the strategy with the " -"`DifferentialPrivacyClientSideAdaptiveClipping` wrapper:" +"Blocks until the requested number of clients is available or until a " +"timeout is reached. Current timeout default: 1 day." 
msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:40 -#: of -msgid "On the client, add the `adaptiveclipping_mod` to the client-side mods:" +#: flwr.server.client_manager.SimpleClientManager.wait_for:6 of +msgid "The number of clients to wait for." msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\" -" \\(server\\_round\\, results\\, ...\\)" +#: flwr.server.client_manager.SimpleClientManager.wait_for:8 of +msgid "The time in seconds to wait for, defaults to 86400 (24h)." msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_fit " -"`\\" -" \\(server\\_round\\, results\\, failures\\)" +#: flwr.server.client_manager.SimpleClientManager.wait_for:11 of +msgid "**success**" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_fit:1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_fit:1 -#: of -msgid "Aggregate training results and update clip norms." +#: ../../source/ref-api/flwr.server.run_driver_api.rst:2 +msgid "run\\_driver\\_api" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_evaluate " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" -msgstr "" - -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_fit " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" +#: ../../source/ref-api/flwr.server.run_fleet_api.rst:2 +msgid "run\\_fleet\\_api" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`evaluate " -"`\\" -" \\(server\\_round\\, parameters\\)" +#: ../../source/ref-api/flwr.server.run_server_app.rst:2 +msgid "run\\_server\\_app" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`initialize_parameters " -"`\\" -" \\(client\\_manager\\)" +#: ../../source/ref-api/flwr.server.run_superlink.rst:2 +msgid "run\\_superlink" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideFixedClipping.rst:2 -msgid "DifferentialPrivacyClientSideFixedClipping" +#: ../../source/ref-api/flwr.server.start_server.rst:2 +msgid "start\\_server" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:3 -#: of -msgid "Use `fixedclipping_mod` modifier at the client side." +#: flwr.server.app.start_server:3 of +msgid "The IPv4 or IPv6 address of the server. Defaults to `\"[::]:8080\"`." 
msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:5 -#: of +#: flwr.server.app.start_server:5 of msgid "" -"In comparison to `DifferentialPrivacyServerSideFixedClipping`, which " -"performs clipping on the server-side, " -"`DifferentialPrivacyClientSideFixedClipping` expects clipping to happen " -"on the client-side, usually by using the built-in `fixedclipping_mod`." +"A server implementation, either `flwr.server.Server` or a subclass " +"thereof. If no instance is provided, then `start_server` will create one." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:12 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:5 -#: of +#: flwr.server.app.start_server:9 flwr.simulation.app.start_simulation:28 of msgid "" -"The noise multiplier for the Gaussian mechanism for model updates. A " -"value of 1.0 or higher is recommended for strong privacy." +"Currently supported values are `num_rounds` (int, default: 1) and " +"`round_timeout` in seconds (float, default: None)." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:15 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:8 -#: of -msgid "The value of the clipping norm." +#: flwr.server.app.start_server:12 of +msgid "" +"An implementation of the abstract base class " +"`flwr.server.strategy.Strategy`. If no strategy is provided, then " +"`start_server` will use `flwr.server.strategy.FedAvg`." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:26 -#: of +#: flwr.server.app.start_server:16 of msgid "" -"Wrap the strategy with the `DifferentialPrivacyClientSideFixedClipping` " -"wrapper:" +"An implementation of the abstract base class `flwr.server.ClientManager`." +" If no implementation is provided, then `start_server` will use " +"`flwr.server.client_manager.SimpleClientManager`." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:32 -#: of -msgid "On the client, add the `fixedclipping_mod` to the client-side mods:" +#: flwr.server.app.start_server:21 of +msgid "" +"The maximum length of gRPC messages that can be exchanged with the Flower" +" clients. The default should be sufficient for most models. Users who " +"train very large models might need to increase this value. Note that the " +"Flower clients need to be started with the same value (see " +"`flwr.client.start_client`), otherwise clients will not know about the " +"increased limit and block larger messages." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: of +#: flwr.server.app.start_server:28 of msgid "" -":py:obj:`aggregate_evaluate " -"`\\" -" \\(server\\_round\\, results\\, ...\\)" +"Tuple containing root certificate, server certificate, and private key to" +" start a secure SSL-enabled server. The tuple is expected to have three " +"bytes elements in the following order: * CA certificate. * " +"server certificate. * server private key." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: of +#: flwr.server.app.start_server:28 of msgid "" -":py:obj:`aggregate_fit " -"`\\" -" \\(server\\_round\\, results\\, failures\\)" +"Tuple containing root certificate, server certificate, and private key to" +" start a secure SSL-enabled server. 
The tuple is expected to have three " +"bytes elements in the following order:" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_fit:1 -#: of -msgid "Add noise to the aggregated parameters." +#: flwr.server.app.start_server:32 of +msgid "CA certificate." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_evaluate " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" +#: flwr.server.app.start_server:33 of +msgid "server certificate." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_fit " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" +#: flwr.server.app.start_server:34 of +msgid "server private key." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`evaluate " -"`\\" -" \\(server\\_round\\, parameters\\)" +#: flwr.server.app.start_server:37 of +msgid "**hist** -- Object containing training and evaluation metrics." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`initialize_parameters " -"`\\" -" \\(client\\_manager\\)" +#: flwr.server.app.start_server:42 of +msgid "Starting an insecure server:" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideAdaptiveClipping.rst:2 -msgid "DifferentialPrivacyServerSideAdaptiveClipping" +#: flwr.server.app.start_server:46 of +msgid "Starting an SSL-enabled server:" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:17 -#: of -msgid "" -"The standard deviation of the noise added to the count of updates below " -"the estimate. Andrew et al. recommends to set to " -"`expected_num_records/20`" +#: ../../source/ref-api/flwr.server.strategy.rst:2 +msgid "strategy" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:27 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -"Wrap the strategy with the DifferentialPrivacyServerSideAdaptiveClipping " -"wrapper" +":py:obj:`Bulyan `\\ \\(\\*\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\" -" \\(server\\_round\\, results\\, ...\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.bulyan.Bulyan:1 of +msgid "Bulyan strategy." 
msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`aggregate_fit " -"`\\" -" \\(server\\_round\\, results\\, failures\\)" +":py:obj:`DPFedAvgAdaptive `\\ " +"\\(strategy\\, num\\_sampled\\_clients\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_evaluate " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of +msgid "Wrapper for configuring a Strategy for DP with Adaptive Clipping." msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`configure_fit " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`DPFedAvgFixed `\\ " +"\\(strategy\\, num\\_sampled\\_clients\\, ...\\)" msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`evaluate " -"`\\" -" \\(server\\_round\\, parameters\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 of +msgid "Wrapper for configuring a Strategy for DP with Fixed Clipping." msgstr "" -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`initialize_parameters " -"`\\" -" \\(client\\_manager\\)" +":py:obj:`DifferentialPrivacyClientSideAdaptiveClipping " +"`\\ " +"\\(...\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideFixedClipping.rst:2 -msgid "DifferentialPrivacyServerSideFixedClipping" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:1 +#: of +msgid "Strategy wrapper for central DP with client-side adaptive clipping." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:19 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -"Wrap the strategy with the DifferentialPrivacyServerSideFixedClipping " -"wrapper" +":py:obj:`DifferentialPrivacyServerSideAdaptiveClipping " +"`\\ " +"\\(...\\)" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:1 #: of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\" -" \\(server\\_round\\, results\\, ...\\)" +msgid "Strategy wrapper for central DP with server-side adaptive clipping." 
msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`aggregate_fit " -"`\\" -" \\(server\\_round\\, results\\, failures\\)" +":py:obj:`DifferentialPrivacyClientSideFixedClipping " +"`\\ " +"\\(...\\)" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_fit:1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:1 #: of -msgid "Compute the updates, clip, and pass them for aggregation." +msgid "Strategy wrapper for central DP with client-side fixed clipping." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`configure_evaluate " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`DifferentialPrivacyServerSideFixedClipping " +"`\\ " +"\\(...\\)" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:1 #: of -msgid "" -":py:obj:`configure_fit " -"`\\" -" \\(server\\_round\\, parameters\\, ...\\)" +msgid "Strategy wrapper for central DP with server-side fixed clipping." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`evaluate " -"`\\" -" \\(server\\_round\\, parameters\\)" +":py:obj:`FedAdagrad `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`initialize_parameters " -"`\\" -" \\(client\\_manager\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedadagrad.FedAdagrad:1 of +msgid "FedAdagrad strategy - Adaptive Federated Optimization using Adagrad." msgstr "" -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_fit:3 -#: of -msgid "Afterward, add noise to the aggregated parameters." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`FedAdam `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FaultTolerantFedAvg.rst:2 -msgid "FaultTolerantFedAvg" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedadam.FedAdam:1 of +msgid "FedAdam - Adaptive Federated Optimization using Adam." 
msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +":py:obj:`FedAvg `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_fit " -"`\\ " -"\\(server\\_round\\, results\\, failures\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedavg.FedAvg:1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:1 of +msgid "Federated Averaging strategy." msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_fit:1 -#: flwr.server.strategy.fedadagrad.FedAdagrad.aggregate_fit:1 -#: flwr.server.strategy.fedadam.FedAdam.aggregate_fit:1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_fit:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_fit:1 -#: flwr.server.strategy.fedavgm.FedAvgM.aggregate_fit:1 -#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg.aggregate_fit:1 -#: flwr.server.strategy.fedyogi.FedYogi.aggregate_fit:1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_fit:1 of -msgid "Aggregate fit results using weighted average." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`FedAvgAndroid `\\ " +"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`FedAvgM `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_fit " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedavgm.FedAvgM:1 of +msgid "Federated Averaging with Momentum strategy." msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`FedMedian `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedmedian.FedMedian:1 of +msgid "Configurable FedMedian strategy implementation." 
msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`FedOpt `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedopt.FedOpt:1 of +msgid "Federated Optim strategy." +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`FedProx `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedAdagrad.rst:2 -#: ../../source/ref-changelog.md:839 -msgid "FedAdagrad" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedprox.FedProx:1 of +msgid "Federated Optimization strategy." msgstr "" -#: flwr.server.strategy.fedadagrad.FedAdagrad:1 -#: flwr.server.strategy.fedadam.FedAdam:1 -#: flwr.server.strategy.fedyogi.FedYogi:1 of -msgid "Bases: :py:class:`~flwr.server.strategy.fedopt.FedOpt`" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`FedTrimmedAvg `\\ " +"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.fedadagrad.FedAdagrad:3 -#: flwr.server.strategy.fedadam.FedAdam:3 flwr.server.strategy.fedopt.FedOpt:3 -#: flwr.server.strategy.fedyogi.FedYogi:3 of -msgid "Implementation based on https://arxiv.org/abs/2003.00295v5" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:1 of +msgid "Federated Averaging with Trimmed Mean [Dong Yin, et al., 2021]." msgstr "" -#: flwr.server.strategy.fedadagrad.FedAdagrad:21 -#: flwr.server.strategy.fedadagrad.FedAdagrad:23 -#: flwr.server.strategy.fedadam.FedAdam:25 -#: flwr.server.strategy.fedadam.FedAdam:27 -#: flwr.server.strategy.fedavg.FedAvg:29 flwr.server.strategy.fedavg.FedAvg:31 -#: flwr.server.strategy.fedopt.FedOpt:25 flwr.server.strategy.fedopt.FedOpt:27 -#: flwr.server.strategy.fedprox.FedProx:61 -#: flwr.server.strategy.fedprox.FedProx:63 -#: flwr.server.strategy.fedyogi.FedYogi:28 -#: flwr.server.strategy.fedyogi.FedYogi:30 of -msgid "Metrics aggregation function, optional." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`FedXgbBagging `\\ " +"\\(\\[evaluate\\_function\\]\\)" msgstr "" -#: flwr.server.strategy.fedadagrad.FedAdagrad:29 -#: flwr.server.strategy.fedadam.FedAdam:29 -#: flwr.server.strategy.fedopt.FedOpt:29 of -msgid "Server-side learning rate. Defaults to 1e-1." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging:1 of +msgid "Configurable FedXgbBagging strategy implementation." msgstr "" -#: flwr.server.strategy.fedadagrad.FedAdagrad:31 -#: flwr.server.strategy.fedadam.FedAdam:31 -#: flwr.server.strategy.fedopt.FedOpt:31 of -msgid "Client-side learning rate. Defaults to 1e-1." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`FedXgbCyclic `\\ " +"\\(\\*\\*kwargs\\)" msgstr "" -#: flwr.server.strategy.fedadagrad.FedAdagrad:33 -#: flwr.server.strategy.fedadam.FedAdam:37 -#: flwr.server.strategy.fedopt.FedOpt:37 of -msgid "Controls the algorithm's degree of adaptability. Defaults to 1e-9." 
+#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic:1 of +msgid "Configurable FedXgbCyclic strategy implementation." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +":py:obj:`FedXgbNnAvg `\\ \\(\\*args\\, " +"\\*\\*kwargs\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`aggregate_fit `\\" -" \\(server\\_round\\, results\\, failures\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:1 of +msgid "Configurable FedXgbNnAvg strategy implementation." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`FedYogi `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`configure_fit `\\" -" \\(server\\_round\\, parameters\\, ...\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedyogi.FedYogi:1 of +msgid "FedYogi [Reddi et al., 2020] strategy." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`FaultTolerantFedAvg " +"`\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg:1 of +msgid "Configurable fault-tolerant FedAvg strategy implementation." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`Krum `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.krum.Krum:1 of +msgid "Krum [Blanchard et al., 2017] strategy." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedAdam.rst:2 -msgid "FedAdam" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`QFedAvg `\\ \\(\\*\\[\\, " +"q\\_param\\, qffl\\_learning\\_rate\\, ...\\]\\)" msgstr "" -#: flwr.server.strategy.fedadam.FedAdam:33 -#: flwr.server.strategy.fedyogi.FedYogi:36 of -msgid "Momentum parameter. Defaults to 0.9." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.qfedavg.QFedAvg:1 of +msgid "Configurable QFedAvg strategy implementation." msgstr "" -#: flwr.server.strategy.fedadam.FedAdam:35 -#: flwr.server.strategy.fedyogi.FedYogi:38 of -msgid "Second moment parameter. Defaults to 0.99." 
+#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid ":py:obj:`Strategy `\\ \\(\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\," -" results\\, ...\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.strategy.Strategy:1 of +msgid "Abstract base class for server strategy implementations." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +#: ../../source/ref-api/flwr.server.strategy.Bulyan.rst:2 +msgid "Bulyan" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\," -" parameters\\, ...\\)" +#: flwr.server.strategy.bulyan.Bulyan:1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg:1 +#: flwr.server.strategy.fedavgm.FedAvgM:1 +#: flwr.server.strategy.fedmedian.FedMedian:1 +#: flwr.server.strategy.fedopt.FedOpt:1 flwr.server.strategy.fedprox.FedProx:1 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging:1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic:1 +#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:1 +#: flwr.server.strategy.krum.Krum:1 flwr.server.strategy.qfedavg.QFedAvg:1 of +msgid "Bases: :py:class:`~flwr.server.strategy.fedavg.FedAvg`" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: flwr.server.strategy.bulyan.Bulyan:3 of +msgid "Implementation based on https://arxiv.org/abs/1802.07927." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +#: flwr.server.strategy.bulyan.Bulyan:5 +#: flwr.server.strategy.fedadagrad.FedAdagrad:5 +#: flwr.server.strategy.fedadam.FedAdam:5 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:5 +#: flwr.server.strategy.fedavgm.FedAvgM:5 flwr.server.strategy.fedopt.FedOpt:5 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:5 +#: flwr.server.strategy.fedyogi.FedYogi:5 flwr.server.strategy.krum.Krum:5 of +msgid "Fraction of clients used during training. Defaults to 1.0." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +#: flwr.server.strategy.bulyan.Bulyan:7 +#: flwr.server.strategy.fedadagrad.FedAdagrad:7 +#: flwr.server.strategy.fedadam.FedAdam:7 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:7 +#: flwr.server.strategy.fedavgm.FedAvgM:7 flwr.server.strategy.fedopt.FedOpt:7 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:7 +#: flwr.server.strategy.fedyogi.FedYogi:7 flwr.server.strategy.krum.Krum:7 of +msgid "Fraction of clients used during validation. Defaults to 1.0." 
msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: flwr.server.strategy.bulyan.Bulyan:9 +#: flwr.server.strategy.fedadagrad.FedAdagrad:9 +#: flwr.server.strategy.fedadam.FedAdam:9 flwr.server.strategy.fedavg.FedAvg:13 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:9 +#: flwr.server.strategy.fedavgm.FedAvgM:9 flwr.server.strategy.fedopt.FedOpt:9 +#: flwr.server.strategy.fedprox.FedProx:45 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:9 +#: flwr.server.strategy.fedyogi.FedYogi:9 flwr.server.strategy.krum.Krum:9 of +msgid "Minimum number of clients used during training. Defaults to 2." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: flwr.server.strategy.bulyan.Bulyan:11 +#: flwr.server.strategy.fedadagrad.FedAdagrad:11 +#: flwr.server.strategy.fedadam.FedAdam:11 +#: flwr.server.strategy.fedavg.FedAvg:15 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:11 +#: flwr.server.strategy.fedavgm.FedAvgM:11 +#: flwr.server.strategy.fedopt.FedOpt:11 +#: flwr.server.strategy.fedprox.FedProx:47 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:11 +#: flwr.server.strategy.fedyogi.FedYogi:11 flwr.server.strategy.krum.Krum:11 of +msgid "Minimum number of clients used during validation. Defaults to 2." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedAvg.rst:2 -msgid "FedAvg" +#: flwr.server.strategy.bulyan.Bulyan:13 +#: flwr.server.strategy.fedadagrad.FedAdagrad:13 +#: flwr.server.strategy.fedadam.FedAdam:13 +#: flwr.server.strategy.fedavg.FedAvg:17 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:13 +#: flwr.server.strategy.fedavgm.FedAvgM:13 +#: flwr.server.strategy.fedopt.FedOpt:13 +#: flwr.server.strategy.fedprox.FedProx:49 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:13 +#: flwr.server.strategy.fedyogi.FedYogi:13 flwr.server.strategy.krum.Krum:13 of +msgid "Minimum number of total clients in the system. Defaults to 2." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg:3 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:3 of -msgid "Implementation based on https://arxiv.org/abs/1602.05629" +#: flwr.server.strategy.bulyan.Bulyan:15 flwr.server.strategy.krum.Krum:15 of +msgid "Number of malicious clients in the system. Defaults to 0." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg:5 flwr.server.strategy.fedprox.FedProx:37 -#: of -msgid "" -"Fraction of clients used during training. In case `min_fit_clients` is " -"larger than `fraction_fit * available_clients`, `min_fit_clients` will " -"still be sampled. Defaults to 1.0." +#: flwr.server.strategy.bulyan.Bulyan:17 +#: flwr.server.strategy.fedadagrad.FedAdagrad:15 +#: flwr.server.strategy.fedadam.FedAdam:15 +#: flwr.server.strategy.fedavg.FedAvg:19 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:15 +#: flwr.server.strategy.fedavgm.FedAvgM:15 +#: flwr.server.strategy.fedopt.FedOpt:15 +#: flwr.server.strategy.fedprox.FedProx:51 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:15 +#: flwr.server.strategy.fedyogi.FedYogi:17 +#: flwr.server.strategy.fedyogi.FedYogi:18 +#: flwr.server.strategy.fedyogi.FedYogi:19 flwr.server.strategy.krum.Krum:20 of +msgid "Optional function used for validation. Defaults to None." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg:9 flwr.server.strategy.fedprox.FedProx:41 -#: of -msgid "" -"Fraction of clients used during validation. 
In case " -"`min_evaluate_clients` is larger than `fraction_evaluate * " -"available_clients`, `min_evaluate_clients` will still be sampled. " -"Defaults to 1.0." +#: flwr.server.strategy.bulyan.Bulyan:19 +#: flwr.server.strategy.fedadagrad.FedAdagrad:17 +#: flwr.server.strategy.fedadam.FedAdam:17 +#: flwr.server.strategy.fedavg.FedAvg:21 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:17 +#: flwr.server.strategy.fedavgm.FedAvgM:17 +#: flwr.server.strategy.fedopt.FedOpt:17 +#: flwr.server.strategy.fedprox.FedProx:53 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:17 +#: flwr.server.strategy.fedyogi.FedYogi:20 flwr.server.strategy.krum.Krum:22 of +msgid "Function used to configure training. Defaults to None." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg:33 of -msgid "Enable (True) or disable (False) in-place aggregation of model updates." +#: flwr.server.strategy.bulyan.Bulyan:21 +#: flwr.server.strategy.fedadagrad.FedAdagrad:19 +#: flwr.server.strategy.fedadam.FedAdam:19 +#: flwr.server.strategy.fedavg.FedAvg:23 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:19 +#: flwr.server.strategy.fedavgm.FedAvgM:19 +#: flwr.server.strategy.fedopt.FedOpt:19 +#: flwr.server.strategy.fedprox.FedProx:55 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:19 +#: flwr.server.strategy.fedyogi.FedYogi:22 flwr.server.strategy.krum.Krum:24 of +msgid "Function used to configure validation. Defaults to None." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\, " -"results\\, ...\\)" +#: flwr.server.strategy.bulyan.Bulyan:23 +#: flwr.server.strategy.fedadagrad.FedAdagrad:25 +#: flwr.server.strategy.fedadam.FedAdam:21 +#: flwr.server.strategy.fedavg.FedAvg:25 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:21 +#: flwr.server.strategy.fedavgm.FedAvgM:21 +#: flwr.server.strategy.fedopt.FedOpt:21 +#: flwr.server.strategy.fedprox.FedProx:57 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:21 +#: flwr.server.strategy.fedyogi.FedYogi:24 flwr.server.strategy.krum.Krum:26 of +msgid "Whether or not accept rounds containing failures. Defaults to True." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +#: flwr.server.strategy.bulyan.Bulyan:25 +#: flwr.server.strategy.fedadagrad.FedAdagrad:27 +#: flwr.server.strategy.fedadam.FedAdam:23 +#: flwr.server.strategy.fedavg.FedAvg:27 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:24 +#: flwr.server.strategy.fedavgm.FedAvgM:23 +#: flwr.server.strategy.fedopt.FedOpt:23 +#: flwr.server.strategy.fedprox.FedProx:59 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:23 +#: flwr.server.strategy.fedyogi.FedYogi:26 flwr.server.strategy.krum.Krum:28 of +msgid "Initial global model parameters." 
msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.bulyan.Bulyan:27 of msgid "" -":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\, " -"parameters\\, ...\\)" +"Byzantine resilient aggregation rule that is used as the first step of " +"the Bulyan (e.g., Krum)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: flwr.server.strategy.bulyan.Bulyan:29 of +msgid "arguments to the first_aggregation rule" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +msgid "Aggregate evaluation losses using weighted average." msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" +#: flwr.server.strategy.bulyan.Bulyan.aggregate_fit:1 #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`num_fit_clients `\\" -" \\(num\\_available\\_clients\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.FedAvgAndroid.rst:2 -msgid "FedAvgAndroid" +msgid "Aggregate fit results using Bulyan." 
msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.configure_evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.configure_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.configure_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.configure_evaluate:1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.configure_evaluate:1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_fit " -"`\\ " -"\\(server\\_round\\, results\\, failures\\)" -msgstr "" - -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`bytes_to_ndarray " -"`\\ \\(tensor\\)" -msgstr "" - -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.bytes_to_ndarray:1 of -msgid "Deserialize NumPy array from bytes." +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.configure_evaluate:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.configure_evaluate:1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.configure_evaluate:1 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.strategy.Strategy.configure_evaluate:1 of +msgid "Configure the next round of evaluation." 
msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_evaluate " -"`\\ " +":py:obj:`configure_fit `\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.configure_fit:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.configure_fit:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.configure_fit:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.configure_fit:1 +#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive.configure_fit:1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.configure_fit:1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_fit " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.configure_fit:1 +#: flwr.server.strategy.fedprox.FedProx.configure_fit:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.configure_fit:1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.configure_fit:1 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.strategy.Strategy.configure_fit:1 of +msgid "Configure the next round of training." msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.evaluate:1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.evaluate:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.evaluate:1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg.evaluate:1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +msgid "Evaluate model parameters using an evaluation function." 
msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`ndarray_to_bytes " -"`\\ \\(ndarray\\)" -msgstr "" - -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.ndarray_to_bytes:1 of -msgid "Serialize NumPy array to bytes." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.initialize_parameters:1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`ndarrays_to_parameters " -"`\\ " -"\\(ndarrays\\)" +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.initialize_parameters:1 +#: flwr.server.strategy.fedavgm.FedAvgM.initialize_parameters:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +msgid "Initialize global model parameters." msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.num_evaluation_clients:1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.num_evaluation_clients:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.num_evaluation_clients:1 of +msgid "Use a fraction of available clients for evaluation." msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`parameters_to_ndarrays " -"`\\ " -"\\(parameters\\)" +":py:obj:`num_fit_clients `\\" +" \\(num\\_available\\_clients\\)" msgstr "" +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.num_fit_clients:1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.parameters_to_ndarrays:1 -#: of -msgid "Convert parameters object to NumPy weights." 
-msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.FedAvgM.rst:2 -msgid "FedAvgM" +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.num_fit_clients:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.num_fit_clients:1 of +msgid "Return the sample size and the required number of available clients." msgstr "" -#: flwr.server.strategy.fedavgm.FedAvgM:3 of -msgid "Implementation based on https://arxiv.org/abs/1909.06335" +#: ../../source/ref-api/flwr.server.strategy.DPFedAvgAdaptive.rst:2 +msgid "DPFedAvgAdaptive" msgstr "" -#: flwr.server.strategy.fedavgm.FedAvgM:25 of -msgid "" -"Server-side learning rate used in server-side optimization. Defaults to " -"1.0." +#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of +msgid "Bases: :py:class:`~flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed`" msgstr "" -#: flwr.server.strategy.fedavgm.FedAvgM:28 of -msgid "Server-side momentum factor used for FedAvgM. Defaults to 0.0." +#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:3 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:3 of +msgid "This class is deprecated and will be removed in a future release." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\," -" results\\, ...\\)" +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of +msgid "Aggregate evaluation losses using the given strategy." +msgstr "" + +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive.aggregate_fit:1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of +msgid "Aggregate training results as in DPFedAvgFixed and update clip norms." 
+msgstr "" + +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\," -" parameters\\, ...\\)" +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:1 of +msgid "Configure the next round of evaluation using the specified strategy." +msgstr "" + +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.evaluate:1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.evaluate:1 of +msgid "Evaluate model parameters using an evaluation function from the strategy." 
+msgstr "" + +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.initialize_parameters:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.initialize_parameters:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.initialize_parameters:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.initialize_parameters:1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.initialize_parameters:1 of +msgid "Initialize global model parameters using given strategy." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:3 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:6 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:3 +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:3 +#: flwr.server.strategy.strategy.Strategy.configure_evaluate:3 +#: flwr.server.strategy.strategy.Strategy.configure_fit:3 +#: flwr.server.strategy.strategy.Strategy.evaluate:6 of +msgid "The current round of federated learning." +msgstr "" + +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:7 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:10 +#: flwr.server.strategy.strategy.Strategy.configure_evaluate:7 +#: flwr.server.strategy.strategy.Strategy.configure_fit:7 +#: flwr.server.strategy.strategy.Strategy.initialize_parameters:3 of +msgid "The client manager which holds all currently connected clients." +msgstr "" + +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:10 +#: flwr.server.strategy.strategy.Strategy.configure_evaluate:10 of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +"**evaluate_configuration** -- A list of tuples. Each tuple in the list " +"identifies a `ClientProxy` and the `EvaluateIns` for this particular " +"`ClientProxy`. If a particular `ClientProxy` is not included in this " +"list, it means that this `ClientProxy` will not participate in the next " +"round of federated evaluation." 
msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedMedian.rst:2 -msgid "FedMedian" +#: ../../source/ref-api/flwr.server.strategy.DPFedAvgFixed.rst:2 +msgid "DPFedAvgFixed" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 +#: flwr.server.strategy.fedavg.FedAvg:1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:1 of +msgid "Bases: :py:class:`~flwr.server.strategy.strategy.Strategy`" +msgstr "" + +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ " +"`\\ " "\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedmedian.FedMedian.aggregate_fit:1 of -msgid "Aggregate fit results using median." +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_fit:1 of +msgid "Aggregate training results using unweighted aggregation." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`configure_evaluate " -"`\\ " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:1 of msgid "" -":py:obj:`evaluate `\\ " +"Configure the next round of training incorporating Differential Privacy " +"(DP)." +msgstr "" + +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:3 of msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +"Configuration of the next training round includes information related to " +"DP, such as clip norm and noise stddev." 
msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:13 +#: flwr.server.strategy.strategy.Strategy.configure_fit:10 of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.FedOpt.rst:2 -msgid "FedOpt" +"**fit_configuration** -- A list of tuples. Each tuple in the list " +"identifies a `ClientProxy` and the `FitIns` for this particular " +"`ClientProxy`. If a particular `ClientProxy` is not included in this " +"list, it means that this `ClientProxy` will not participate in the next " +"round of federated learning." msgstr "" -#: flwr.server.strategy.fedopt.FedOpt:33 of -msgid "Momentum parameter. Defaults to 0.0." +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideAdaptiveClipping.rst:2 +msgid "DifferentialPrivacyClientSideAdaptiveClipping" msgstr "" -#: flwr.server.strategy.fedopt.FedOpt:35 of -msgid "Second moment parameter. Defaults to 0.0." +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:3 +#: of +msgid "Use `adaptiveclipping_mod` modifier at the client side." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:5 +#: of msgid "" -":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\, " -"results\\, ...\\)" +"In comparison to `DifferentialPrivacyServerSideAdaptiveClipping`, which " +"performs clipping on the server-side, " +"`DifferentialPrivacyClientSideAdaptiveClipping` expects clipping to " +"happen on the client-side, usually by using the built-in " +"`adaptiveclipping_mod`." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:10 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:3 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:10 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:3 +#: of +msgid "The strategy to which DP functionalities will be added by this wrapper." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\, " -"parameters\\, ...\\)" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:12 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:5 +#: of +msgid "The noise multiplier for the Gaussian mechanism for model updates." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:14 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:7 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:17 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:10 +#: of +msgid "The number of clients that are sampled on each round." 
msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:16 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:9 +#: of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +"The initial value of clipping norm. Defaults to 0.1. Andrew et al. " +"recommends to set to 0.1." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:19 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:12 +#: of +msgid "The desired quantile of updates which should be clipped. Defaults to 0.5." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:21 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:14 +#: of msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +"The learning rate for the clipping norm adaptation. Defaults to 0.2. " +"Andrew et al. recommends to set to 0.2." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:24 +#: of msgid "" -":py:obj:`num_fit_clients `\\" -" \\(num\\_available\\_clients\\)" +"The stddev of the noise added to the count of updates currently below the" +" estimate. Andrew et al. recommends to set to `expected_num_records/20`" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedProx.rst:2 -msgid "FedProx" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:30 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:23 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:22 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:15 +#: of +msgid "Create a strategy:" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:3 of -msgid "Implementation based on https://arxiv.org/abs/1812.06127" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:34 +#: of +msgid "" +"Wrap the strategy with the " +"`DifferentialPrivacyClientSideAdaptiveClipping` wrapper:" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:5 of -msgid "" -"The strategy in itself will not be different than FedAvg, the client " -"needs to be adjusted. 
A proximal term needs to be added to the loss " -"function during the training:" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:40 +#: of +msgid "On the client, add the `adaptiveclipping_mod` to the client-side mods:" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:9 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -"\\\\frac{\\\\mu}{2} || w - w^t ||^2\n" -"\n" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:12 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -"Where $w^t$ are the global parameters and $w$ are the local weights the " -"function will be optimized with." +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:15 of -msgid "In PyTorch, for example, the loss would go from:" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_fit:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_fit:1 +#: of +msgid "Aggregate training results and update clip norms." msgstr "" -#: flwr.server.strategy.fedprox.FedProx:21 of -msgid "To:" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:30 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -"With `global_params` being a copy of the parameters before the training " -"takes place." +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:65 of -msgid "" -"The weight of the proximal term used in the optimization. 0.0 makes this " -"strategy equivalent to FedAvg, and the higher the coefficient, the more " -"regularization will be used (that is, the client parameters will need to " -"be closer to the server parameters during training)." 
-msgstr "" - -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\," -" results\\, ...\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\," -" parameters\\, ...\\)" +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideFixedClipping.rst:2 +msgid "DifferentialPrivacyClientSideFixedClipping" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:3 +#: of +msgid "Use `fixedclipping_mod` modifier at the client side." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:5 +#: of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +"In comparison to `DifferentialPrivacyServerSideFixedClipping`, which " +"performs clipping on the server-side, " +"`DifferentialPrivacyClientSideFixedClipping` expects clipping to happen " +"on the client-side, usually by using the built-in `fixedclipping_mod`." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:12 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:5 +#: of msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +"The noise multiplier for the Gaussian mechanism for model updates. A " +"value of 1.0 or higher is recommended for strong privacy." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:15 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:8 +#: of +msgid "The value of the clipping norm." 
msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:26 +#: of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" -msgstr "" - -#: flwr.server.strategy.fedprox.FedProx.configure_fit:3 of -msgid "Sends the proximal factor mu to the clients" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.FedTrimmedAvg.rst:2 -msgid "FedTrimmedAvg" -msgstr "" - -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:3 of -msgid "Implemented based on: https://arxiv.org/abs/1803.01498" +"Wrap the strategy with the `DifferentialPrivacyClientSideFixedClipping` " +"wrapper:" msgstr "" -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:25 of -msgid "Fraction to cut off of both tails of the distribution. Defaults to 0.2." +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:32 +#: of +msgid "On the client, add the `fixedclipping_mod` to the client-side mods:" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`aggregate_fit " -"`\\ " -"\\(server\\_round\\, results\\, failures\\)" +"`\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg.aggregate_fit:1 of -msgid "Aggregate fit results using trimmed average." +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_fit:1 +#: of +msgid "Add noise to the aggregated parameters." 
msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`configure_fit " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +"`\\" +" \\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideAdaptiveClipping.rst:2 +msgid "DifferentialPrivacyServerSideAdaptiveClipping" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:17 +#: of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.FedXgbBagging.rst:2 -msgid "FedXgbBagging" +"The standard deviation of the noise added to the count of updates below " +"the estimate. Andrew et al. recommends to set to " +"`expected_num_records/20`" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:27 #: of msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +"Wrap the strategy with the DifferentialPrivacyServerSideAdaptiveClipping " +"wrapper" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 #: of -msgid "Aggregate evaluation metrics using average." 
+msgid "" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 #: of msgid "" ":py:obj:`aggregate_fit " -"`\\ " -"\\(server\\_round\\, results\\, failures\\)" -msgstr "" - -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_fit:1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_fit:1 of -msgid "Aggregate fit results using bagging." +"`\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 #: of msgid "" ":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 #: of msgid "" ":py:obj:`configure_fit " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 #: of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 #: of msgid "" ":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +"`\\" +" \\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideFixedClipping.rst:2 +msgid "DifferentialPrivacyServerSideFixedClipping" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:19 #: of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.FedXgbCyclic.rst:2 -msgid "FedXgbCyclic" +"Wrap the strategy with the DifferentialPrivacyServerSideFixedClipping " +"wrapper" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of msgid "" ":py:obj:`aggregate_fit " -"`\\ 
\\(server\\_round\\," -" results\\, failures\\)" +"`\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_fit:1 #: of -msgid "" -":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +msgid "Compute the updates, clip, and pass them for aggregation." msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of msgid "" -":py:obj:`configure_fit " -"`\\ \\(server\\_round\\," -" parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_fit:3 +#: of +msgid "Afterward, add noise to the aggregated parameters." +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.FaultTolerantFedAvg.rst:2 +msgid "FaultTolerantFedAvg" +msgstr "" + +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`aggregate_fit " +"`\\ " +"\\(server\\_round\\, results\\, failures\\)" +msgstr "" + +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_fit:1 +#: flwr.server.strategy.fedadagrad.FedAdagrad.aggregate_fit:1 +#: flwr.server.strategy.fedadam.FedAdam.aggregate_fit:1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_fit:1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_fit:1 +#: flwr.server.strategy.fedavgm.FedAvgM.aggregate_fit:1 +#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg.aggregate_fit:1 +#: flwr.server.strategy.fedyogi.FedYogi.aggregate_fit:1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_fit:1 of +msgid "Aggregate fit results using weighted average." 
+msgstr "" + +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_fit " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: of msgid "" ":py:obj:`num_fit_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedXgbNnAvg.rst:2 -msgid "FedXgbNnAvg" +#: ../../source/ref-api/flwr.server.strategy.FedAdagrad.rst:2 +#: ../../source/ref-changelog.md:905 +msgid "FedAdagrad" msgstr "" -#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:5 of -msgid "" -"This strategy is deprecated, but a copy of it is available in Flower " -"Baselines: " -"https://github.com/adap/flower/tree/main/baselines/hfedxgboost." +#: flwr.server.strategy.fedadagrad.FedAdagrad:1 +#: flwr.server.strategy.fedadam.FedAdam:1 +#: flwr.server.strategy.fedyogi.FedYogi:1 of +msgid "Bases: :py:class:`~flwr.server.strategy.fedopt.FedOpt`" +msgstr "" + +#: flwr.server.strategy.fedadagrad.FedAdagrad:3 +#: flwr.server.strategy.fedadam.FedAdam:3 flwr.server.strategy.fedopt.FedOpt:3 +#: flwr.server.strategy.fedyogi.FedYogi:3 of +msgid "Implementation based on https://arxiv.org/abs/2003.00295v5" +msgstr "" + +#: flwr.server.strategy.fedadagrad.FedAdagrad:21 +#: flwr.server.strategy.fedadagrad.FedAdagrad:23 +#: flwr.server.strategy.fedadam.FedAdam:25 +#: flwr.server.strategy.fedadam.FedAdam:27 +#: flwr.server.strategy.fedavg.FedAvg:29 flwr.server.strategy.fedavg.FedAvg:31 +#: flwr.server.strategy.fedopt.FedOpt:25 flwr.server.strategy.fedopt.FedOpt:27 +#: flwr.server.strategy.fedprox.FedProx:61 +#: flwr.server.strategy.fedprox.FedProx:63 +#: flwr.server.strategy.fedyogi.FedYogi:28 +#: flwr.server.strategy.fedyogi.FedYogi:30 of +msgid "Metrics aggregation function, optional." +msgstr "" + +#: flwr.server.strategy.fedadagrad.FedAdagrad:29 +#: flwr.server.strategy.fedadam.FedAdam:29 +#: flwr.server.strategy.fedopt.FedOpt:29 of +msgid "Server-side learning rate. Defaults to 1e-1." +msgstr "" + +#: flwr.server.strategy.fedadagrad.FedAdagrad:31 +#: flwr.server.strategy.fedadam.FedAdam:31 +#: flwr.server.strategy.fedopt.FedOpt:31 of +msgid "Client-side learning rate. Defaults to 1e-1." +msgstr "" + +#: flwr.server.strategy.fedadagrad.FedAdagrad:33 +#: flwr.server.strategy.fedadam.FedAdam:37 +#: flwr.server.strategy.fedopt.FedOpt:37 of +msgid "Controls the algorithm's degree of adaptability. Defaults to 1e-9." 
msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ " +"`\\ " "\\(server\\_round\\, results\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit " -"`\\ \\(server\\_round\\, " -"results\\, failures\\)" +":py:obj:`aggregate_fit `\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit " -"`\\ \\(server\\_round\\, " -"parameters\\, ...\\)" +":py:obj:`configure_fit `\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_fit_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedYogi.rst:2 -msgid "FedYogi" -msgstr "" - -#: flwr.server.strategy.fedyogi.FedYogi:32 of -msgid "Server-side learning rate. Defaults to 1e-2." +#: ../../source/ref-api/flwr.server.strategy.FedAdam.rst:2 +msgid "FedAdam" msgstr "" -#: flwr.server.strategy.fedyogi.FedYogi:34 of -msgid "Client-side learning rate. Defaults to 0.0316." +#: flwr.server.strategy.fedadam.FedAdam:33 +#: flwr.server.strategy.fedyogi.FedYogi:36 of +msgid "Momentum parameter. Defaults to 0.9." msgstr "" -#: flwr.server.strategy.fedyogi.FedYogi:40 of -msgid "Controls the algorithm's degree of adaptability. Defaults to 1e-3." +#: flwr.server.strategy.fedadam.FedAdam:35 +#: flwr.server.strategy.fedyogi.FedYogi:38 of +msgid "Second moment parameter. Defaults to 0.99." 
msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\," +"`\\ \\(server\\_round\\," " results\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit `\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\," +"`\\ \\(server\\_round\\," " parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit `\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_fit_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.Krum.rst:2 -msgid "Krum" +#: ../../source/ref-api/flwr.server.strategy.FedAvg.rst:2 +msgid "FedAvg" msgstr "" -#: flwr.server.strategy.krum.Krum:3 of -msgid "Implementation based on https://arxiv.org/abs/1703.02757" +#: flwr.server.strategy.fedavg.FedAvg:3 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:3 of +msgid "Implementation based on https://arxiv.org/abs/1602.05629" msgstr "" -#: flwr.server.strategy.krum.Krum:17 of +#: flwr.server.strategy.fedavg.FedAvg:5 flwr.server.strategy.fedprox.FedProx:37 +#: of msgid "" -"Number of clients to keep before averaging (MultiKrum). Defaults to 0, in" -" that case classical Krum is applied." +"Fraction of clients used during training. In case `min_fit_clients` is " +"larger than `fraction_fit * available_clients`, `min_fit_clients` will " +"still be sampled. Defaults to 1.0." +msgstr "" + +#: flwr.server.strategy.fedavg.FedAvg:9 flwr.server.strategy.fedprox.FedProx:41 +#: of +msgid "" +"Fraction of clients used during validation. In case " +"`min_evaluate_clients` is larger than `fraction_evaluate * " +"available_clients`, `min_evaluate_clients` will still be sampled. " +"Defaults to 1.0." +msgstr "" + +#: flwr.server.strategy.fedavg.FedAvg:33 of +msgid "Enable (True) or disable (False) in-place aggregation of model updates." msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\, " +"`\\ \\(server\\_round\\, " "results\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit `\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.krum.Krum.aggregate_fit:1 of -msgid "Aggregate fit results using Krum." 
-msgstr "" - #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\, " +"`\\ \\(server\\_round\\, " "parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit `\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`num_fit_clients `\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients `\\" +" \\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.QFedAvg.rst:2 -msgid "QFedAvg" +#: ../../source/ref-api/flwr.server.strategy.FedAvgAndroid.rst:2 +msgid "FedAvgAndroid" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\," -" results\\, ...\\)" +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`bytes_to_ndarray " +"`\\ \\(tensor\\)" +msgstr "" + +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.bytes_to_ndarray:1 of +msgid "Deserialize NumPy array from bytes." 
+msgstr "" + +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\," -" parameters\\, ...\\)" +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" -msgstr "" - -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`ndarray_to_bytes " +"`\\ \\(ndarray\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.Strategy.rst:2 -msgid "Strategy" +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.ndarray_to_bytes:1 of +msgid "Serialize NumPy array to bytes." msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: of msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +":py:obj:`ndarrays_to_parameters " +"`\\ " +"\\(ndarrays\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1 -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: of -msgid "Aggregate evaluation results." +msgid "" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: of msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" -msgstr "" - -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:1 of -msgid "Aggregate training results." 
+":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: of msgid "" -":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`parameters_to_ndarrays " +"`\\ " +"\\(parameters\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.parameters_to_ndarrays:1 #: of -msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +msgid "Convert parameters object to NumPy weights." msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +#: ../../source/ref-api/flwr.server.strategy.FedAvgM.rst:2 +msgid "FedAvgM" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: flwr.server.strategy.strategy.Strategy.evaluate:1 of -msgid "Evaluate the current model parameters." +#: flwr.server.strategy.fedavgm.FedAvgM:3 of +msgid "Implementation based on https://arxiv.org/abs/1909.06335" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavgm.FedAvgM:25 of msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +"Server-side learning rate used in server-side optimization. Defaults to " +"1.0." msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: flwr.server.strategy.strategy.Strategy.initialize_parameters:1 of -msgid "Initialize the (global) model parameters." +#: flwr.server.strategy.fedavgm.FedAvgM:28 of +msgid "Server-side momentum factor used for FedAvgM. Defaults to 0.0." msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:5 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Successful updates from the previously selected and configured clients. " -"Each pair of `(ClientProxy, FitRes` constitutes a successful update from " -"one of the previously selected clients. Not that not all previously " -"selected clients are necessarily included in this list: a client might " -"drop out and not submit a result. For each client that did not submit an " -"update, there should be an `Exception` in `failures`." +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:13 -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:13 of -msgid "Exceptions that occurred while the server was waiting for client updates." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:16 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"**aggregation_result** -- The aggregated evaluation result. Aggregation " -"typically uses some variant of a weighted average." +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:5 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Successful updates from the previously selected and configured clients. 
" -"Each pair of `(ClientProxy, FitRes)` constitutes a successful update from" -" one of the previously selected clients. Not that not all previously " -"selected clients are necessarily included in this list: a client might " -"drop out and not submit a result. For each client that did not submit an " -"update, there should be an `Exception` in `failures`." +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:17 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"**parameters** -- If parameters are returned, then the server will treat " -"these as the new global model parameters (i.e., it will replace the " -"previous parameters with the ones returned from this method). If `None` " -"is returned (e.g., because there were only failures and no viable " -"results) then the server will no update the previous model parameters, " -"the updates received in this round are discarded, and the global model " -"parameters remain the same." +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.evaluate:3 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"This function can be used to perform centralized (i.e., server-side) " -"evaluation of model parameters." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.evaluate:11 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"**evaluation_result** -- The evaluation result, usually a Tuple " -"containing loss and a dictionary containing task-specific metrics (e.g., " -"accuracy)." +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.initialize_parameters:6 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"**parameters** -- If parameters are returned, then the server will treat " -"these as the initial global model parameters." +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.workflow.rst:2 -msgid "workflow" +#: ../../source/ref-api/flwr.server.strategy.FedMedian.rst:2 +msgid "FedMedian" msgstr "" -#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`DefaultWorkflow `\\ " -"\\(\\[fit\\_workflow\\, ...\\]\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.workflow.rst:24::1 -#: flwr.server.workflow.default_workflows.DefaultWorkflow:1 of -msgid "Default workflow in Flower." +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`SecAggPlusWorkflow `\\ " -"\\(num\\_shares\\, ...\\[\\, ...\\]\\)" +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: ../../source/ref-api/flwr.server.workflow.rst:24::1 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:1 -#: of -msgid "The workflow for the SecAgg+ protocol." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedmedian.FedMedian.aggregate_fit:1 of +msgid "Aggregate fit results using median." 
msgstr "" -#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`SecAggWorkflow `\\ " -"\\(reconstruction\\_threshold\\, \\*\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: ../../source/ref-api/flwr.server.workflow.rst:24::1 -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:1 of -msgid "The workflow for the SecAgg protocol." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: ../../source/ref-api/flwr.server.workflow.DefaultWorkflow.rst:2 -msgid "DefaultWorkflow" -msgstr "" - -#: ../../source/ref-api/flwr.server.workflow.SecAggPlusWorkflow.rst:2 -msgid "SecAggPlusWorkflow" +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:3 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The SecAgg+ protocol ensures the secure summation of integer vectors " -"owned by multiple parties, without accessing any individual integer " -"vector. This workflow allows the server to compute the weighted average " -"of model parameters across all clients, ensuring individual contributions" -" remain private. This is achieved by clients sending both, a weighting " -"factor and a weighted version of the locally updated parameters, both of " -"which are masked for privacy. Specifically, each client uploads \"[w, w *" -" params]\" with masks, where weighting factor 'w' is the number of " -"examples ('num_examples') and 'params' represents the model parameters " -"('parameters') from the client's `FitRes`. The server then aggregates " -"these contributions to compute the weighted average of model parameters." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:14 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The protocol involves four main stages: - 'setup': Send SecAgg+ " -"configuration to clients and collect their public keys. - 'share keys': " -"Broadcast public keys among clients and collect encrypted secret" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:17 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:17 -#: of -msgid "key shares." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:18 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:18 -#: of -msgid "" -"'collect masked vectors': Forward encrypted secret key shares to target " -"clients and collect masked model parameters." 
+#: ../../source/ref-api/flwr.server.strategy.FedOpt.rst:2 +msgid "FedOpt" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:20 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:20 -#: of -msgid "" -"'unmask': Collect secret key shares to decrypt and aggregate the model " -"parameters." +#: flwr.server.strategy.fedopt.FedOpt:33 of +msgid "Momentum parameter. Defaults to 0.0." msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:22 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:22 -#: of -msgid "" -"Only the aggregated model parameters are exposed and passed to " -"`Strategy.aggregate_fit`, ensuring individual data privacy." +#: flwr.server.strategy.fedopt.FedOpt:35 of +msgid "Second moment parameter. Defaults to 0.0." msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:25 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The number of shares into which each client's private key is split under " -"the SecAgg+ protocol. If specified as a float, it represents the " -"proportion of all selected clients, and the number of shares will be set " -"dynamically in the run time. A private key can be reconstructed from " -"these shares, allowing for the secure aggregation of model updates. Each " -"client sends one share to each of its neighbors while retaining one." +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:25 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:32 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The minimum number of shares required to reconstruct a client's private " -"key, or, if specified as a float, it represents the proportion of the " -"total number of shares needed for reconstruction. This threshold ensures " -"privacy by allowing for the recovery of contributions from dropped " -"clients during aggregation, without compromising individual client data." +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:31 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:38 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The maximum value of the weight that can be assigned to any single " -"client's update during the weighted average calculation on the server " -"side, e.g., in the FedAvg algorithm." +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:35 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:42 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The range within which model parameters are clipped before quantization. " -"This parameter ensures each model parameter is bounded within " -"[-clipping_range, clipping_range], facilitating quantization." 
+":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:39 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:46 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The size of the range into which floating-point model parameters are " -"quantized, mapping each parameter to an integer in [0, " -"quantization_range-1]. This facilitates cryptographic operations on the " -"model updates." +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:43 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:50 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The range of values from which random mask entries are uniformly sampled " -"([0, modulus_range-1]). `modulus_range` must be less than 4294967296. " -"Please use 2**n values for `modulus_range` to prevent overflow issues." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:47 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:54 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The timeout duration in seconds. If specified, the workflow will wait for" -" replies for this duration each time. If `None`, there is no time limit " -"and the workflow will wait until replies for all messages are received." +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:61 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Generally, higher `num_shares` means more robust to dropouts while " -"increasing the computational costs; higher `reconstruction_threshold` " -"means better privacy guarantees but less tolerance to dropouts." +":py:obj:`num_fit_clients `\\" +" \\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:58 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:64 -#: of -msgid "Too large `max_weight` may compromise the precision of the quantization." +#: ../../source/ref-api/flwr.server.strategy.FedProx.rst:2 +msgid "FedProx" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:59 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:65 -#: of -msgid "`modulus_range` must be 2**n and larger than `quantization_range`." +#: flwr.server.strategy.fedprox.FedProx:3 of +msgid "Implementation based on https://arxiv.org/abs/1812.06127" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:66 -#: of +#: flwr.server.strategy.fedprox.FedProx:5 of msgid "" -"When `num_shares` is a float, it is interpreted as the proportion of all " -"selected clients, and hence the number of shares will be determined in " -"the runtime. This allows for dynamic adjustment based on the total number" -" of participating clients." +"The strategy in itself will not be different than FedAvg, the client " +"needs to be adjusted. 
A proximal term needs to be added to the loss " +"function during the training:" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:69 -#: of +#: flwr.server.strategy.fedprox.FedProx:9 of msgid "" -"Similarly, when `reconstruction_threshold` is a float, it is interpreted " -"as the proportion of the number of shares needed for the reconstruction " -"of a private key. This feature enables flexibility in setting the " -"security threshold relative to the number of distributed shares." +"\\\\frac{\\\\mu}{2} || w - w^t ||^2\n" +"\n" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:73 -#: of +#: flwr.server.strategy.fedprox.FedProx:12 of msgid "" -"`num_shares`, `reconstruction_threshold`, and the quantization parameters" -" (`clipping_range`, `quantization_range`, `modulus_range`) play critical " -"roles in balancing privacy, robustness, and efficiency within the SecAgg+" -" protocol." +"Where $w^t$ are the global parameters and $w$ are the local weights the " +"function will be optimized with." msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of -msgid "" -":py:obj:`collect_masked_vectors_stage " -"`\\" -" \\(driver\\, ...\\)" +#: flwr.server.strategy.fedprox.FedProx:15 of +msgid "In PyTorch, for example, the loss would go from:" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of -msgid "Execute the 'collect masked vectors' stage." +#: flwr.server.strategy.fedprox.FedProx:21 of +msgid "To:" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of +#: flwr.server.strategy.fedprox.FedProx:30 of msgid "" -":py:obj:`setup_stage " -"`\\ \\(driver\\, " -"context\\, state\\)" +"With `global_params` being a copy of the parameters before the training " +"takes place." msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.setup_stage:1 -#: of -msgid "Execute the 'setup' stage." +#: flwr.server.strategy.fedprox.FedProx:65 of +msgid "" +"The weight of the proximal term used in the optimization. 0.0 makes this " +"strategy equivalent to FedAvg, and the higher the coefficient, the more " +"regularization will be used (that is, the client parameters will need to " +"be closer to the server parameters during training)." msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`share_keys_stage " -"`\\ " -"\\(driver\\, context\\, state\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.share_keys_stage:1 -#: of -msgid "Execute the 'share keys' stage." 
+#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`unmask_stage " -"`\\ \\(driver\\, " -"context\\, state\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.unmask_stage:1 -#: of -msgid "Execute the 'unmask' stage." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: ../../source/ref-api/flwr.server.workflow.SecAggWorkflow.rst:2 -msgid "SecAggWorkflow" +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:1 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Bases: " -":py:class:`~flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow`" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:3 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The SecAgg protocol ensures the secure summation of integer vectors owned" -" by multiple parties, without accessing any individual integer vector. " -"This workflow allows the server to compute the weighted average of model " -"parameters across all clients, ensuring individual contributions remain " -"private. This is achieved by clients sending both, a weighting factor and" -" a weighted version of the locally updated parameters, both of which are " -"masked for privacy. Specifically, each client uploads \"[w, w * params]\"" -" with masks, where weighting factor 'w' is the number of examples " -"('num_examples') and 'params' represents the model parameters " -"('parameters') from the client's `FitRes`. The server then aggregates " -"these contributions to compute the weighted average of model parameters." +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:14 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The protocol involves four main stages: - 'setup': Send SecAgg " -"configuration to clients and collect their public keys. - 'share keys': " -"Broadcast public keys among clients and collect encrypted secret" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:54 of -msgid "" -"Each client's private key is split into N shares under the SecAgg " -"protocol, where N is the number of selected clients." 
+#: flwr.server.strategy.fedprox.FedProx.configure_fit:3 of +msgid "Sends the proximal factor mu to the clients" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:56 of -msgid "" -"Generally, higher `reconstruction_threshold` means better privacy " -"guarantees but less tolerance to dropouts." +#: ../../source/ref-api/flwr.server.strategy.FedTrimmedAvg.rst:2 +msgid "FedTrimmedAvg" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:60 of -msgid "" -"When `reconstruction_threshold` is a float, it is interpreted as the " -"proportion of the number of all selected clients needed for the " -"reconstruction of a private key. This feature enables flexibility in " -"setting the security threshold relative to the number of selected " -"clients." +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:3 of +msgid "Implemented based on: https://arxiv.org/abs/1803.01498" msgstr "" -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:64 of -msgid "" -"`reconstruction_threshold`, and the quantization parameters " -"(`clipping_range`, `quantization_range`, `modulus_range`) play critical " -"roles in balancing privacy, robustness, and efficiency within the SecAgg " -"protocol." +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:25 of +msgid "Fraction to cut off of both tails of the distribution. Defaults to 0.2." msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`collect_masked_vectors_stage " -"`\\ " -"\\(driver\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`setup_stage `\\" -" \\(driver\\, context\\, state\\)" +":py:obj:`aggregate_fit " +"`\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of -msgid "" -":py:obj:`share_keys_stage " -"`\\ \\(driver\\, " -"context\\, state\\)" +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg.aggregate_fit:1 of +msgid "Aggregate fit results using trimmed average." 
msgstr "" -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`unmask_stage " -"`\\ \\(driver\\, " -"context\\, state\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:2 -msgid "simulation" +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`configure_fit " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:19::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`start_simulation `\\ \\(\\*\\," -" client\\_fn\\[\\, ...\\]\\)" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:19::1 -#: flwr.simulation.app.start_simulation:1 of -msgid "Start a Ray-based Flower simulation server." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:19::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`run_simulation_from_cli " -"`\\ \\(\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:19::1 -#: flwr.simulation.run_simulation.run_simulation_from_cli:1 of -msgid "Run Simulation Engine from the CLI." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:19::1 -msgid "" -":py:obj:`run_simulation `\\ " -"\\(server\\_app\\, client\\_app\\, ...\\)" +#: ../../source/ref-api/flwr.server.strategy.FedXgbBagging.rst:2 +msgid "FedXgbBagging" msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:19::1 -#: flwr.simulation.run_simulation.run_simulation:1 of -msgid "Run a Flower App using the Simulation Engine." +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.run_simulation.rst:2 -msgid "run\\_simulation" +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of +msgid "Aggregate evaluation metrics using average." msgstr "" -#: flwr.simulation.run_simulation.run_simulation:3 of +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of msgid "" -"The `ServerApp` to be executed. It will send messages to different " -"`ClientApp` instances running on different (virtual) SuperNodes." +":py:obj:`aggregate_fit " +"`\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.simulation.run_simulation.run_simulation:6 of -msgid "" -"The `ClientApp` to be executed by each of the SuperNodes. It will receive" -" messages sent by the `ServerApp`." 
+#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_fit:1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_fit:1 of +msgid "Aggregate fit results using bagging." msgstr "" -#: flwr.simulation.run_simulation.run_simulation:9 of +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of msgid "" -"Number of nodes that run a ClientApp. They can be sampled by a Driver in " -"the ServerApp and receive a Message describing what the ClientApp should " -"perform." +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.simulation.run_simulation.run_simulation:13 of -msgid "A simulation backend that runs `ClientApp`s." +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_fit " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.simulation.run_simulation.run_simulation:15 of +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of msgid "" -"'A dictionary, e.g {\"\": , \"\": } to " -"configure a backend. Values supported in are those included by " -"`flwr.common.typing.ConfigsRecordValues`." +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.simulation.run_simulation.run_simulation:19 of +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of msgid "" -"A boolean to indicate whether to enable GPU growth on the main thread. " -"This is desirable if you make use of a TensorFlow model on your " -"`ServerApp` while having your `ClientApp` running on the same GPU. " -"Without enabling this, you might encounter an out-of-memory error because" -" TensorFlow, by default, allocates all GPU memory. Read more about how " -"`tf.config.experimental.set_memory_growth()` works in the TensorFlow " -"documentation: https://www.tensorflow.org/api/stable." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: flwr.simulation.run_simulation.run_simulation:26 of +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of msgid "" -"When diabled, only INFO, WARNING and ERROR log messages will be shown. If" -" enabled, DEBUG-level logs will be displayed." +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.run_simulation_from_cli.rst:2 -msgid "run\\_simulation\\_from\\_cli" +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.start_simulation.rst:2 -msgid "start\\_simulation" +#: ../../source/ref-api/flwr.server.strategy.FedXgbCyclic.rst:2 +msgid "FedXgbCyclic" msgstr "" -#: flwr.simulation.app.start_simulation:3 of +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of msgid "" -"A function creating client instances. The function must take a single " -"`str` argument called `cid`. It should return a single client instance of" -" type Client. Note that the created client instances are ephemeral and " -"will often be destroyed after a single method invocation. Since client " -"instances are not long-lived, they should not attempt to carry state over" -" method invocations. 
Any state required by the instance (model, dataset, " -"hyperparameters, ...) should be (re-)created in either the call to " -"`client_fn` or the call to any of the client methods (e.g., load " -"evaluation data in the `evaluate` method itself)." +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:13 of +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of msgid "" -"The total number of clients in this simulation. This must be set if " -"`clients_ids` is not set and vice-versa." +":py:obj:`aggregate_fit " +"`\\ \\(server\\_round\\," +" results\\, failures\\)" msgstr "" -#: flwr.simulation.app.start_simulation:16 of +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of msgid "" -"List `client_id`s for each client. This is only required if `num_clients`" -" is not set. Setting both `num_clients` and `clients_ids` with " -"`len(clients_ids)` not equal to `num_clients` generates an error." +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:20 of -msgid "" -"CPU and GPU resources for a single client. Supported keys are `num_cpus` " -"and `num_gpus`. To understand the GPU utilization caused by `num_gpus`, " -"as well as using custom resources, please consult the Ray documentation." +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_fit " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:25 of +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of msgid "" -"An implementation of the abstract base class `flwr.server.Server`. If no " -"instance is provided, then `start_server` will create one." +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.simulation.app.start_simulation:31 of +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of msgid "" -"An implementation of the abstract base class `flwr.server.Strategy`. If " -"no strategy is provided, then `start_server` will use " -"`flwr.server.strategy.FedAvg`." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: flwr.simulation.app.start_simulation:35 of +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of msgid "" -"An implementation of the abstract base class `flwr.server.ClientManager`." -" If no implementation is provided, then `start_simulation` will use " -"`flwr.server.client_manager.SimpleClientManager`." +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.simulation.app.start_simulation:39 of +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of msgid "" -"Optional dictionary containing arguments for the call to `ray.init`. If " -"ray_init_args is None (the default), Ray will be initialized with the " -"following default args: { \"ignore_reinit_error\": True, " -"\"include_dashboard\": False } An empty dictionary can be used " -"(ray_init_args={}) to prevent any arguments from being passed to " -"ray.init." +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.simulation.app.start_simulation:39 of -msgid "" -"Optional dictionary containing arguments for the call to `ray.init`. 
If " -"ray_init_args is None (the default), Ray will be initialized with the " -"following default args:" +#: ../../source/ref-api/flwr.server.strategy.FedXgbNnAvg.rst:2 +msgid "FedXgbNnAvg" msgstr "" -#: flwr.simulation.app.start_simulation:43 of -msgid "{ \"ignore_reinit_error\": True, \"include_dashboard\": False }" +#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:5 of +msgid "" +"This strategy is deprecated, but a copy of it is available in Flower " +"Baselines: " +"https://github.com/adap/flower/tree/main/baselines/hfedxgboost." msgstr "" -#: flwr.simulation.app.start_simulation:45 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"An empty dictionary can be used (ray_init_args={}) to prevent any " -"arguments from being passed to ray.init." +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:48 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Set to True to prevent `ray.shutdown()` in case " -"`ray.is_initialized()=True`." +":py:obj:`aggregate_fit " +"`\\ \\(server\\_round\\, " +"results\\, failures\\)" msgstr "" -#: flwr.simulation.app.start_simulation:50 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Optionally specify the type of actor to use. The actor object, which " -"persists throughout the simulation, will be the process in charge of " -"executing a ClientApp wrapping input argument `client_fn`." +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:54 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"If you want to create your own Actor classes, you might need to pass some" -" input argument. You can use this dictionary for such purpose." +":py:obj:`configure_fit " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:57 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"(default: \"DEFAULT\") Optional string (\"DEFAULT\" or \"SPREAD\") for " -"the VCE to choose in which node the actor is placed. If you are an " -"advanced user needed more control you can use lower-level scheduling " -"strategies to pin actors to specific compute nodes (e.g. via " -"NodeAffinitySchedulingStrategy). Please note this is an advanced feature." -" For all details, please refer to the Ray documentation: " -"https://docs.ray.io/en/latest/ray-core/scheduling/index.html" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.simulation.app.start_simulation:66 of -msgid "**hist** -- Object containing metrics from training." 
+#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: ../../source/ref-changelog.md:1 -msgid "Changelog" +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-changelog.md:3 -msgid "Unreleased" +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-changelog.md:5 ../../source/ref-changelog.md:17 -#: ../../source/ref-changelog.md:110 ../../source/ref-changelog.md:210 -#: ../../source/ref-changelog.md:294 ../../source/ref-changelog.md:358 -#: ../../source/ref-changelog.md:416 ../../source/ref-changelog.md:485 -#: ../../source/ref-changelog.md:614 ../../source/ref-changelog.md:656 -#: ../../source/ref-changelog.md:723 ../../source/ref-changelog.md:789 -#: ../../source/ref-changelog.md:834 ../../source/ref-changelog.md:873 -#: ../../source/ref-changelog.md:906 ../../source/ref-changelog.md:956 -msgid "What's new?" +#: ../../source/ref-api/flwr.server.strategy.FedYogi.rst:2 +msgid "FedYogi" msgstr "" -#: ../../source/ref-changelog.md:7 ../../source/ref-changelog.md:80 -#: ../../source/ref-changelog.md:192 ../../source/ref-changelog.md:282 -#: ../../source/ref-changelog.md:346 ../../source/ref-changelog.md:404 -#: ../../source/ref-changelog.md:473 ../../source/ref-changelog.md:535 -#: ../../source/ref-changelog.md:554 ../../source/ref-changelog.md:710 -#: ../../source/ref-changelog.md:781 ../../source/ref-changelog.md:818 -#: ../../source/ref-changelog.md:861 -msgid "Incompatible changes" +#: flwr.server.strategy.fedyogi.FedYogi:32 of +msgid "Server-side learning rate. Defaults to 1e-2." msgstr "" -#: ../../source/ref-changelog.md:9 -msgid "v1.7.0 (2024-02-05)" +#: flwr.server.strategy.fedyogi.FedYogi:34 of +msgid "Client-side learning rate. Defaults to 0.0316." msgstr "" -#: ../../source/ref-changelog.md:11 ../../source/ref-changelog.md:104 -#: ../../source/ref-changelog.md:204 ../../source/ref-changelog.md:288 -#: ../../source/ref-changelog.md:352 ../../source/ref-changelog.md:410 -#: ../../source/ref-changelog.md:479 ../../source/ref-changelog.md:548 -msgid "Thanks to our contributors" +#: flwr.server.strategy.fedyogi.FedYogi:40 of +msgid "Controls the algorithm's degree of adaptability. Defaults to 1e-3." msgstr "" -#: ../../source/ref-changelog.md:13 ../../source/ref-changelog.md:106 -#: ../../source/ref-changelog.md:206 ../../source/ref-changelog.md:290 -#: ../../source/ref-changelog.md:354 ../../source/ref-changelog.md:412 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"We would like to give our special thanks to all the contributors who made" -" the new version of Flower possible (in `git shortlog` order):" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:15 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"`Aasheesh Singh`, `Adam Narozniak`, `Aml Hassan Esmil`, `Charles " -"Beauville`, `Daniel J. 
Beutel`, `Daniel Nata Nugraha`, `Edoardo " -"Gabrielli`, `Gustavo Bertoli`, `HelinLin`, `Heng Pan`, `Javier`, `M S " -"Chaitanya Kumar`, `Mohammad Naseri`, `Nikos Vlachakis`, `Pritam Neog`, " -"`Robert Kuska`, `Robert Steiner`, `Taner Topal`, `Yahia Salaheldin " -"Shaaban`, `Yan Gao`, `Yasar Abbas` " +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: ../../source/ref-changelog.md:19 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"**Introduce stateful clients (experimental)** " -"([#2770](https://github.com/adap/flower/pull/2770), " -"[#2686](https://github.com/adap/flower/pull/2686), " -"[#2696](https://github.com/adap/flower/pull/2696), " -"[#2643](https://github.com/adap/flower/pull/2643), " -"[#2769](https://github.com/adap/flower/pull/2769))" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:21 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Subclasses of `Client` and `NumPyClient` can now store local state that " -"remains on the client. Let's start with the highlight first: this new " -"feature is compatible with both simulated clients (via " -"`start_simulation`) and networked clients (via `start_client`). It's also" -" the first preview of new abstractions like `Context` and `RecordSet`. " -"Clients can access state of type `RecordSet` via `state: RecordSet = " -"self.context.state`. Changes to this `RecordSet` are preserved across " -"different rounds of execution to enable stateful computations in a " -"unified way across simulation and deployment." +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:23 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"**Improve performance** " -"([#2293](https://github.com/adap/flower/pull/2293))" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: ../../source/ref-changelog.md:25 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Flower is faster than ever. All `FedAvg`-derived strategies now use in-" -"place aggregation to reduce memory consumption. The Flower client " -"serialization/deserialization has been rewritten from the ground up, " -"which results in significant speedups, especially when the client-side " -"training time is short." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: ../../source/ref-changelog.md:27 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"**Support Federated Learning with Apple MLX and Flower** " -"([#2693](https://github.com/adap/flower/pull/2693))" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-changelog.md:29 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Flower has official support for federated learning using [Apple " -"MLX](https://ml-explore.github.io/mlx) via the new `quickstart-mlx` code " -"example." 
+":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-changelog.md:31 -msgid "" -"**Introduce new XGBoost cyclic strategy** " -"([#2666](https://github.com/adap/flower/pull/2666), " -"[#2668](https://github.com/adap/flower/pull/2668))" +#: ../../source/ref-api/flwr.server.strategy.Krum.rst:2 +msgid "Krum" msgstr "" -#: ../../source/ref-changelog.md:33 +#: flwr.server.strategy.krum.Krum:3 of +msgid "Implementation based on https://arxiv.org/abs/1703.02757" +msgstr "" + +#: flwr.server.strategy.krum.Krum:17 of msgid "" -"A new strategy called `FedXgbCyclic` supports a client-by-client style of" -" training (often called cyclic). The `xgboost-comprehensive` code example" -" shows how to use it in a full project. In addition to that, `xgboost-" -"comprehensive` now also supports simulation mode. With this, Flower " -"offers best-in-class XGBoost support." +"Number of clients to keep before averaging (MultiKrum). Defaults to 0, in" +" that case classical Krum is applied." msgstr "" -#: ../../source/ref-changelog.md:35 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"**Support Python 3.11** " -"([#2394](https://github.com/adap/flower/pull/2394))" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:37 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Framework tests now run on Python 3.8, 3.9, 3.10, and 3.11. This will " -"ensure better support for users using more recent Python versions." +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: ../../source/ref-changelog.md:39 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.krum.Krum.aggregate_fit:1 of +msgid "Aggregate fit results using Krum." +msgstr "" + +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"**Update gRPC and ProtoBuf dependencies** " -"([#2814](https://github.com/adap/flower/pull/2814))" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:41 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The `grpcio` and `protobuf` dependencies were updated to their latest " -"versions for improved security and performance." +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:43 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"**Introduce Docker image for Flower server** " -"([#2700](https://github.com/adap/flower/pull/2700), " -"[#2688](https://github.com/adap/flower/pull/2688), " -"[#2705](https://github.com/adap/flower/pull/2705), " -"[#2695](https://github.com/adap/flower/pull/2695), " -"[#2747](https://github.com/adap/flower/pull/2747), " -"[#2746](https://github.com/adap/flower/pull/2746), " -"[#2680](https://github.com/adap/flower/pull/2680), " -"[#2682](https://github.com/adap/flower/pull/2682), " -"[#2701](https://github.com/adap/flower/pull/2701))" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: ../../source/ref-changelog.md:45 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The Flower server can now be run using an official Docker image. A new " -"how-to guide explains [how to run Flower using " -"Docker](https://flower.ai/docs/framework/how-to-run-flower-using-" -"docker.html). 
An official Flower client Docker image will follow." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: ../../source/ref-changelog.md:47 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"**Introduce** `flower-via-docker-compose` **example** " -"([#2626](https://github.com/adap/flower/pull/2626))" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-changelog.md:49 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"**Introduce** `quickstart-sklearn-tabular` **example** " -"([#2719](https://github.com/adap/flower/pull/2719))" +":py:obj:`num_fit_clients `\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-changelog.md:51 -msgid "" -"**Introduce** `custom-metrics` **example** " -"([#1958](https://github.com/adap/flower/pull/1958))" +#: ../../source/ref-api/flwr.server.strategy.QFedAvg.rst:2 +msgid "QFedAvg" msgstr "" -#: ../../source/ref-changelog.md:53 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of msgid "" -"**Update code examples to use Flower Datasets** " -"([#2450](https://github.com/adap/flower/pull/2450), " -"[#2456](https://github.com/adap/flower/pull/2456), " -"[#2318](https://github.com/adap/flower/pull/2318), " -"[#2712](https://github.com/adap/flower/pull/2712))" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:55 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of msgid "" -"Several code examples were updated to use [Flower " -"Datasets](https://flower.ai/docs/datasets/)." +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: ../../source/ref-changelog.md:57 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of msgid "" -"**General updates to Flower Examples** " -"([#2381](https://github.com/adap/flower/pull/2381), " -"[#2805](https://github.com/adap/flower/pull/2805), " -"[#2782](https://github.com/adap/flower/pull/2782), " -"[#2806](https://github.com/adap/flower/pull/2806), " -"[#2829](https://github.com/adap/flower/pull/2829), " -"[#2825](https://github.com/adap/flower/pull/2825), " -"[#2816](https://github.com/adap/flower/pull/2816), " -"[#2726](https://github.com/adap/flower/pull/2726), " -"[#2659](https://github.com/adap/flower/pull/2659), " -"[#2655](https://github.com/adap/flower/pull/2655))" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:59 -msgid "Many Flower code examples received substantial updates." 
+#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:61 ../../source/ref-changelog.md:154 -msgid "**Update Flower Baselines**" +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: ../../source/ref-changelog.md:63 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of msgid "" -"HFedXGBoost ([#2226](https://github.com/adap/flower/pull/2226), " -"[#2771](https://github.com/adap/flower/pull/2771))" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: ../../source/ref-changelog.md:64 -msgid "FedVSSL ([#2412](https://github.com/adap/flower/pull/2412))" +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-changelog.md:65 -msgid "FedNova ([#2179](https://github.com/adap/flower/pull/2179))" +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-changelog.md:66 -msgid "HeteroFL ([#2439](https://github.com/adap/flower/pull/2439))" +#: ../../source/ref-api/flwr.server.strategy.Strategy.rst:2 +msgid "Strategy" msgstr "" -#: ../../source/ref-changelog.md:67 -msgid "FedAvgM ([#2246](https://github.com/adap/flower/pull/2246))" +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:68 -msgid "FedPara ([#2722](https://github.com/adap/flower/pull/2722))" +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of +msgid "Aggregate evaluation results." 
msgstr "" -#: ../../source/ref-changelog.md:70 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"**Improve documentation** " -"([#2674](https://github.com/adap/flower/pull/2674), " -"[#2480](https://github.com/adap/flower/pull/2480), " -"[#2826](https://github.com/adap/flower/pull/2826), " -"[#2727](https://github.com/adap/flower/pull/2727), " -"[#2761](https://github.com/adap/flower/pull/2761), " -"[#2900](https://github.com/adap/flower/pull/2900))" +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: ../../source/ref-changelog.md:72 -msgid "" -"**Improved testing and development infrastructure** " -"([#2797](https://github.com/adap/flower/pull/2797), " -"[#2676](https://github.com/adap/flower/pull/2676), " -"[#2644](https://github.com/adap/flower/pull/2644), " -"[#2656](https://github.com/adap/flower/pull/2656), " -"[#2848](https://github.com/adap/flower/pull/2848), " -"[#2675](https://github.com/adap/flower/pull/2675), " -"[#2735](https://github.com/adap/flower/pull/2735), " -"[#2767](https://github.com/adap/flower/pull/2767), " -"[#2732](https://github.com/adap/flower/pull/2732), " -"[#2744](https://github.com/adap/flower/pull/2744), " -"[#2681](https://github.com/adap/flower/pull/2681), " -"[#2699](https://github.com/adap/flower/pull/2699), " -"[#2745](https://github.com/adap/flower/pull/2745), " -"[#2734](https://github.com/adap/flower/pull/2734), " -"[#2731](https://github.com/adap/flower/pull/2731), " -"[#2652](https://github.com/adap/flower/pull/2652), " -"[#2720](https://github.com/adap/flower/pull/2720), " -"[#2721](https://github.com/adap/flower/pull/2721), " -"[#2717](https://github.com/adap/flower/pull/2717), " -"[#2864](https://github.com/adap/flower/pull/2864), " -"[#2694](https://github.com/adap/flower/pull/2694), " -"[#2709](https://github.com/adap/flower/pull/2709), " -"[#2658](https://github.com/adap/flower/pull/2658), " -"[#2796](https://github.com/adap/flower/pull/2796), " -"[#2692](https://github.com/adap/flower/pull/2692), " -"[#2657](https://github.com/adap/flower/pull/2657), " -"[#2813](https://github.com/adap/flower/pull/2813), " -"[#2661](https://github.com/adap/flower/pull/2661), " -"[#2398](https://github.com/adap/flower/pull/2398))" +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:1 of +msgid "Aggregate training results." msgstr "" -#: ../../source/ref-changelog.md:74 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"The Flower testing and development infrastructure has received " -"substantial updates. This makes Flower 1.7 the most tested release ever." 
+":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:76 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"**Update dependencies** " -"([#2753](https://github.com/adap/flower/pull/2753), " -"[#2651](https://github.com/adap/flower/pull/2651), " -"[#2739](https://github.com/adap/flower/pull/2739), " -"[#2837](https://github.com/adap/flower/pull/2837), " -"[#2788](https://github.com/adap/flower/pull/2788), " -"[#2811](https://github.com/adap/flower/pull/2811), " -"[#2774](https://github.com/adap/flower/pull/2774), " -"[#2790](https://github.com/adap/flower/pull/2790), " -"[#2751](https://github.com/adap/flower/pull/2751), " -"[#2850](https://github.com/adap/flower/pull/2850), " -"[#2812](https://github.com/adap/flower/pull/2812), " -"[#2872](https://github.com/adap/flower/pull/2872), " -"[#2736](https://github.com/adap/flower/pull/2736), " -"[#2756](https://github.com/adap/flower/pull/2756), " -"[#2857](https://github.com/adap/flower/pull/2857), " -"[#2757](https://github.com/adap/flower/pull/2757), " -"[#2810](https://github.com/adap/flower/pull/2810), " -"[#2740](https://github.com/adap/flower/pull/2740), " -"[#2789](https://github.com/adap/flower/pull/2789))" +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:78 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"**General improvements** " -"([#2803](https://github.com/adap/flower/pull/2803), " -"[#2847](https://github.com/adap/flower/pull/2847), " -"[#2877](https://github.com/adap/flower/pull/2877), " -"[#2690](https://github.com/adap/flower/pull/2690), " -"[#2889](https://github.com/adap/flower/pull/2889), " -"[#2874](https://github.com/adap/flower/pull/2874), " -"[#2819](https://github.com/adap/flower/pull/2819), " -"[#2689](https://github.com/adap/flower/pull/2689), " -"[#2457](https://github.com/adap/flower/pull/2457), " -"[#2870](https://github.com/adap/flower/pull/2870), " -"[#2669](https://github.com/adap/flower/pull/2669), " -"[#2876](https://github.com/adap/flower/pull/2876), " -"[#2885](https://github.com/adap/flower/pull/2885), " -"[#2858](https://github.com/adap/flower/pull/2858), " -"[#2867](https://github.com/adap/flower/pull/2867), " -"[#2351](https://github.com/adap/flower/pull/2351), " -"[#2886](https://github.com/adap/flower/pull/2886), " -"[#2860](https://github.com/adap/flower/pull/2860), " -"[#2828](https://github.com/adap/flower/pull/2828), " -"[#2869](https://github.com/adap/flower/pull/2869), " -"[#2875](https://github.com/adap/flower/pull/2875), " -"[#2733](https://github.com/adap/flower/pull/2733), " -"[#2488](https://github.com/adap/flower/pull/2488), " -"[#2646](https://github.com/adap/flower/pull/2646), " -"[#2879](https://github.com/adap/flower/pull/2879), " -"[#2821](https://github.com/adap/flower/pull/2821), " -"[#2855](https://github.com/adap/flower/pull/2855), " -"[#2800](https://github.com/adap/flower/pull/2800), " -"[#2807](https://github.com/adap/flower/pull/2807), " -"[#2801](https://github.com/adap/flower/pull/2801), " -"[#2804](https://github.com/adap/flower/pull/2804), " -"[#2851](https://github.com/adap/flower/pull/2851), " -"[#2787](https://github.com/adap/flower/pull/2787), " -"[#2852](https://github.com/adap/flower/pull/2852), " -"[#2672](https://github.com/adap/flower/pull/2672), " -"[#2759](https://github.com/adap/flower/pull/2759))" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, 
parameters\\)" msgstr "" -#: ../../source/ref-changelog.md:82 -msgid "" -"**Deprecate** `start_numpy_client` " -"([#2563](https://github.com/adap/flower/pull/2563), " -"[#2718](https://github.com/adap/flower/pull/2718))" +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.strategy.Strategy.evaluate:1 of +msgid "Evaluate the current model parameters." msgstr "" -#: ../../source/ref-changelog.md:84 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"Until now, clients of type `NumPyClient` needed to be started via " -"`start_numpy_client`. In our efforts to consolidate framework APIs, we " -"have introduced changes, and now all client types should start via " -"`start_client`. To continue using `NumPyClient` clients, you simply need " -"to first call the `.to_client()` method and then pass returned `Client` " -"object to `start_client`. The examples and the documentation have been " -"updated accordingly." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: ../../source/ref-changelog.md:86 -msgid "" -"**Deprecate legacy DP wrappers** " -"([#2749](https://github.com/adap/flower/pull/2749))" +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.strategy.Strategy.initialize_parameters:1 of +msgid "Initialize the (global) model parameters." msgstr "" -#: ../../source/ref-changelog.md:88 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:5 of msgid "" -"Legacy DP wrapper classes are deprecated, but still functional. This is " -"in preparation for an all-new pluggable version of differential privacy " -"support in Flower." +"Successful updates from the previously selected and configured clients. " +"Each pair of `(ClientProxy, FitRes` constitutes a successful update from " +"one of the previously selected clients. Not that not all previously " +"selected clients are necessarily included in this list: a client might " +"drop out and not submit a result. For each client that did not submit an " +"update, there should be an `Exception` in `failures`." msgstr "" -#: ../../source/ref-changelog.md:90 -msgid "" -"**Make optional arg** `--callable` **in** `flower-client` **a required " -"positional arg** ([#2673](https://github.com/adap/flower/pull/2673))" +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:13 +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:13 of +msgid "Exceptions that occurred while the server was waiting for client updates." msgstr "" -#: ../../source/ref-changelog.md:92 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:16 of msgid "" -"**Rename** `certificates` **to** `root_certificates` **in** `Driver` " -"([#2890](https://github.com/adap/flower/pull/2890))" +"**aggregation_result** -- The aggregated evaluation result. Aggregation " +"typically uses some variant of a weighted average." msgstr "" -#: ../../source/ref-changelog.md:94 +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:5 of msgid "" -"**Drop experimental** `Task` **fields** " -"([#2866](https://github.com/adap/flower/pull/2866), " -"[#2865](https://github.com/adap/flower/pull/2865))" +"Successful updates from the previously selected and configured clients. " +"Each pair of `(ClientProxy, FitRes)` constitutes a successful update from" +" one of the previously selected clients. Not that not all previously " +"selected clients are necessarily included in this list: a client might " +"drop out and not submit a result. 
For each client that did not submit an " +"update, there should be an `Exception` in `failures`." msgstr "" -#: ../../source/ref-changelog.md:96 +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:17 of msgid "" -"Experimental fields `sa`, `legacy_server_message` and " -"`legacy_client_message` were removed from `Task` message. The removed " -"fields are superseded by the new `RecordSet` abstraction." +"**parameters** -- If parameters are returned, then the server will treat " +"these as the new global model parameters (i.e., it will replace the " +"previous parameters with the ones returned from this method). If `None` " +"is returned (e.g., because there were only failures and no viable " +"results) then the server will no update the previous model parameters, " +"the updates received in this round are discarded, and the global model " +"parameters remain the same." msgstr "" -#: ../../source/ref-changelog.md:98 +#: flwr.server.strategy.strategy.Strategy.evaluate:3 of msgid "" -"**Retire MXNet examples** " -"([#2724](https://github.com/adap/flower/pull/2724))" +"This function can be used to perform centralized (i.e., server-side) " +"evaluation of model parameters." msgstr "" -#: ../../source/ref-changelog.md:100 +#: flwr.server.strategy.strategy.Strategy.evaluate:11 of msgid "" -"The development of the MXNet fremework has ended and the project is now " -"[archived on GitHub](https://github.com/apache/mxnet). Existing MXNet " -"examples won't receive updates." -msgstr "" - -#: ../../source/ref-changelog.md:102 -msgid "v1.6.0 (2023-11-28)" +"**evaluation_result** -- The evaluation result, usually a Tuple " +"containing loss and a dictionary containing task-specific metrics (e.g., " +"accuracy)." msgstr "" -#: ../../source/ref-changelog.md:108 +#: flwr.server.strategy.strategy.Strategy.initialize_parameters:6 of msgid "" -"`Aashish Kolluri`, `Adam Narozniak`, `Alessio Mora`, `Barathwaja S`, " -"`Charles Beauville`, `Daniel J. Beutel`, `Daniel Nata Nugraha`, `Gabriel " -"Mota`, `Heng Pan`, `Ivan Agarský`, `JS.KIM`, `Javier`, `Marius Schlegel`," -" `Navin Chandra`, `Nic Lane`, `Peterpan828`, `Qinbin Li`, `Shaz-hash`, " -"`Steve Laskaridis`, `Taner Topal`, `William Lindskog`, `Yan Gao`, " -"`cnxdeveloper`, `k3nfalt` " +"**parameters** -- If parameters are returned, then the server will treat " +"these as the initial global model parameters." msgstr "" -#: ../../source/ref-changelog.md:112 -msgid "" -"**Add experimental support for Python 3.12** " -"([#2565](https://github.com/adap/flower/pull/2565))" +#: ../../source/ref-api/flwr.server.workflow.rst:2 +msgid "workflow" msgstr "" -#: ../../source/ref-changelog.md:114 +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 msgid "" -"**Add new XGBoost examples** " -"([#2612](https://github.com/adap/flower/pull/2612), " -"[#2554](https://github.com/adap/flower/pull/2554), " -"[#2617](https://github.com/adap/flower/pull/2617), " -"[#2618](https://github.com/adap/flower/pull/2618), " -"[#2619](https://github.com/adap/flower/pull/2619), " -"[#2567](https://github.com/adap/flower/pull/2567))" +":py:obj:`DefaultWorkflow `\\ " +"\\(\\[fit\\_workflow\\, ...\\]\\)" msgstr "" -#: ../../source/ref-changelog.md:116 -msgid "" -"We have added a new `xgboost-quickstart` example alongside a new " -"`xgboost-comprehensive` example that goes more in-depth." +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.workflow.default_workflows.DefaultWorkflow:1 of +msgid "Default workflow in Flower." 
msgstr "" -#: ../../source/ref-changelog.md:118 +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 msgid "" -"**Add Vertical FL example** " -"([#2598](https://github.com/adap/flower/pull/2598))" +":py:obj:`SecAggPlusWorkflow `\\ " +"\\(num\\_shares\\, ...\\[\\, ...\\]\\)" msgstr "" -#: ../../source/ref-changelog.md:120 -msgid "" -"We had many questions about Vertical Federated Learning using Flower, so " -"we decided to add an simple example for it on the [Titanic " -"dataset](https://www.kaggle.com/competitions/titanic/data) alongside a " -"tutorial (in the README)." +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:1 +#: of +msgid "The workflow for the SecAgg+ protocol." msgstr "" -#: ../../source/ref-changelog.md:122 +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 msgid "" -"**Support custom** `ClientManager` **in** `start_driver()` " -"([#2292](https://github.com/adap/flower/pull/2292))" +":py:obj:`SecAggWorkflow `\\ " +"\\(reconstruction\\_threshold\\, \\*\\)" msgstr "" -#: ../../source/ref-changelog.md:124 -msgid "" -"**Update REST API to support create and delete nodes** " -"([#2283](https://github.com/adap/flower/pull/2283))" +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:1 of +msgid "The workflow for the SecAgg protocol." msgstr "" -#: ../../source/ref-changelog.md:126 -msgid "" -"**Update the Android SDK** " -"([#2187](https://github.com/adap/flower/pull/2187))" +#: ../../source/ref-api/flwr.server.workflow.DefaultWorkflow.rst:2 +msgid "DefaultWorkflow" msgstr "" -#: ../../source/ref-changelog.md:128 -msgid "Add gRPC request-response capability to the Android SDK." +#: ../../source/ref-api/flwr.server.workflow.SecAggPlusWorkflow.rst:2 +msgid "SecAggPlusWorkflow" msgstr "" -#: ../../source/ref-changelog.md:130 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:3 +#: of msgid "" -"**Update the C++ SDK** " -"([#2537](https://github.com/adap/flower/pull/2537), " -"[#2528](https://github.com/adap/flower/pull/2528), " -"[#2523](https://github.com/adap/flower/pull/2523), " -"[#2522](https://github.com/adap/flower/pull/2522))" +"The SecAgg+ protocol ensures the secure summation of integer vectors " +"owned by multiple parties, without accessing any individual integer " +"vector. This workflow allows the server to compute the weighted average " +"of model parameters across all clients, ensuring individual contributions" +" remain private. This is achieved by clients sending both, a weighting " +"factor and a weighted version of the locally updated parameters, both of " +"which are masked for privacy. Specifically, each client uploads \"[w, w *" +" params]\" with masks, where weighting factor 'w' is the number of " +"examples ('num_examples') and 'params' represents the model parameters " +"('parameters') from the client's `FitRes`. The server then aggregates " +"these contributions to compute the weighted average of model parameters." msgstr "" -#: ../../source/ref-changelog.md:132 -msgid "Add gRPC request-response capability to the C++ SDK." +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:14 +#: of +msgid "" +"The protocol involves four main stages: - 'setup': Send SecAgg+ " +"configuration to clients and collect their public keys. 
- 'share keys': " +"Broadcast public keys among clients and collect encrypted secret" msgstr "" -#: ../../source/ref-changelog.md:134 -msgid "" -"**Make HTTPS the new default** " -"([#2591](https://github.com/adap/flower/pull/2591), " -"[#2636](https://github.com/adap/flower/pull/2636))" +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:17 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:17 +#: of +msgid "key shares." msgstr "" -#: ../../source/ref-changelog.md:136 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:18 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:18 +#: of msgid "" -"Flower is moving to HTTPS by default. The new `flower-server` requires " -"passing `--certificates`, but users can enable `--insecure` to use HTTP " -"for prototyping. The same applies to `flower-client`, which can either " -"use user-provided credentials or gRPC-bundled certificates to connect to " -"an HTTPS-enabled server or requires opt-out via passing `--insecure` to " -"enable insecure HTTP connections." +"'collect masked vectors': Forward encrypted secret key shares to target " +"clients and collect masked model parameters." msgstr "" -#: ../../source/ref-changelog.md:138 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:20 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:20 +#: of msgid "" -"For backward compatibility, `start_client()` and `start_numpy_client()` " -"will still start in insecure mode by default. In a future release, " -"insecure connections will require user opt-in by passing `insecure=True`." +"'unmask': Collect secret key shares to decrypt and aggregate the model " +"parameters." msgstr "" -#: ../../source/ref-changelog.md:140 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:22 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:22 +#: of msgid "" -"**Unify client API** ([#2303](https://github.com/adap/flower/pull/2303), " -"[#2390](https://github.com/adap/flower/pull/2390), " -"[#2493](https://github.com/adap/flower/pull/2493))" +"Only the aggregated model parameters are exposed and passed to " +"`Strategy.aggregate_fit`, ensuring individual data privacy." msgstr "" -#: ../../source/ref-changelog.md:142 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:25 +#: of msgid "" -"Using the `client_fn`, Flower clients can interchangeably run as " -"standalone processes (i.e. via `start_client`) or in simulation (i.e. via" -" `start_simulation`) without requiring changes to how the client class is" -" defined and instantiated. The `to_client()` function is introduced to " -"convert a `NumPyClient` to a `Client`." +"The number of shares into which each client's private key is split under " +"the SecAgg+ protocol. If specified as a float, it represents the " +"proportion of all selected clients, and the number of shares will be set " +"dynamically in the run time. A private key can be reconstructed from " +"these shares, allowing for the secure aggregation of model updates. Each " +"client sends one share to each of its neighbors while retaining one." 
msgstr "" -#: ../../source/ref-changelog.md:144 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:25 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:32 +#: of msgid "" -"**Add new** `Bulyan` **strategy** " -"([#1817](https://github.com/adap/flower/pull/1817), " -"[#1891](https://github.com/adap/flower/pull/1891))" +"The minimum number of shares required to reconstruct a client's private " +"key, or, if specified as a float, it represents the proportion of the " +"total number of shares needed for reconstruction. This threshold ensures " +"privacy by allowing for the recovery of contributions from dropped " +"clients during aggregation, without compromising individual client data." msgstr "" -#: ../../source/ref-changelog.md:146 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:31 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:38 +#: of msgid "" -"The new `Bulyan` strategy implements Bulyan by [El Mhamdi et al., " -"2018](https://arxiv.org/abs/1802.07927)" +"The maximum value of the weight that can be assigned to any single " +"client's update during the weighted average calculation on the server " +"side, e.g., in the FedAvg algorithm." msgstr "" -#: ../../source/ref-changelog.md:148 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:35 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:42 +#: of msgid "" -"**Add new** `XGB Bagging` **strategy** " -"([#2611](https://github.com/adap/flower/pull/2611))" +"The range within which model parameters are clipped before quantization. " +"This parameter ensures each model parameter is bounded within " +"[-clipping_range, clipping_range], facilitating quantization." msgstr "" -#: ../../source/ref-changelog.md:150 ../../source/ref-changelog.md:152 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:39 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:46 +#: of msgid "" -"**Introduce `WorkloadState`** " -"([#2564](https://github.com/adap/flower/pull/2564), " -"[#2632](https://github.com/adap/flower/pull/2632))" +"The size of the range into which floating-point model parameters are " +"quantized, mapping each parameter to an integer in [0, " +"quantization_range-1]. This facilitates cryptographic operations on the " +"model updates." msgstr "" -#: ../../source/ref-changelog.md:156 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:43 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:50 +#: of msgid "" -"FedProx ([#2210](https://github.com/adap/flower/pull/2210), " -"[#2286](https://github.com/adap/flower/pull/2286), " -"[#2509](https://github.com/adap/flower/pull/2509))" +"The range of values from which random mask entries are uniformly sampled " +"([0, modulus_range-1]). `modulus_range` must be less than 4294967296. " +"Please use 2**n values for `modulus_range` to prevent overflow issues." 
msgstr "" -#: ../../source/ref-changelog.md:158 -msgid "" -"Baselines Docs ([#2290](https://github.com/adap/flower/pull/2290), " -"[#2400](https://github.com/adap/flower/pull/2400))" -msgstr "" - -#: ../../source/ref-changelog.md:160 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:47 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:54 +#: of msgid "" -"FedMLB ([#2340](https://github.com/adap/flower/pull/2340), " -"[#2507](https://github.com/adap/flower/pull/2507))" +"The timeout duration in seconds. If specified, the workflow will wait for" +" replies for this duration each time. If `None`, there is no time limit " +"and the workflow will wait until replies for all messages are received." msgstr "" -#: ../../source/ref-changelog.md:162 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:61 +#: of msgid "" -"TAMUNA ([#2254](https://github.com/adap/flower/pull/2254), " -"[#2508](https://github.com/adap/flower/pull/2508))" +"Generally, higher `num_shares` means more robust to dropouts while " +"increasing the computational costs; higher `reconstruction_threshold` " +"means better privacy guarantees but less tolerance to dropouts." msgstr "" -#: ../../source/ref-changelog.md:164 -msgid "FedMeta [#2438](https://github.com/adap/flower/pull/2438)" +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:58 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:64 +#: of +msgid "Too large `max_weight` may compromise the precision of the quantization." msgstr "" -#: ../../source/ref-changelog.md:166 -msgid "FjORD [#2431](https://github.com/adap/flower/pull/2431)" +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:59 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:65 +#: of +msgid "`modulus_range` must be 2**n and larger than `quantization_range`." msgstr "" -#: ../../source/ref-changelog.md:168 -msgid "MOON [#2421](https://github.com/adap/flower/pull/2421)" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:66 +#: of +msgid "" +"When `num_shares` is a float, it is interpreted as the proportion of all " +"selected clients, and hence the number of shares will be determined in " +"the runtime. This allows for dynamic adjustment based on the total number" +" of participating clients." msgstr "" -#: ../../source/ref-changelog.md:170 -msgid "DepthFL [#2295](https://github.com/adap/flower/pull/2295)" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:69 +#: of +msgid "" +"Similarly, when `reconstruction_threshold` is a float, it is interpreted " +"as the proportion of the number of shares needed for the reconstruction " +"of a private key. This feature enables flexibility in setting the " +"security threshold relative to the number of distributed shares." msgstr "" -#: ../../source/ref-changelog.md:172 -msgid "FedPer [#2266](https://github.com/adap/flower/pull/2266)" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:73 +#: of +msgid "" +"`num_shares`, `reconstruction_threshold`, and the quantization parameters" +" (`clipping_range`, `quantization_range`, `modulus_range`) play critical " +"roles in balancing privacy, robustness, and efficiency within the SecAgg+" +" protocol." 
msgstr "" -#: ../../source/ref-changelog.md:174 -msgid "FedWav2vec [#2551](https://github.com/adap/flower/pull/2551)" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of +msgid "" +":py:obj:`collect_masked_vectors_stage " +"`\\" +" \\(driver\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:176 -msgid "niid-Bench [#2428](https://github.com/adap/flower/pull/2428)" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of +msgid "Execute the 'collect masked vectors' stage." msgstr "" -#: ../../source/ref-changelog.md:178 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"FedBN ([#2608](https://github.com/adap/flower/pull/2608), " -"[#2615](https://github.com/adap/flower/pull/2615))" +":py:obj:`setup_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" -#: ../../source/ref-changelog.md:180 -msgid "" -"**General updates to Flower Examples** " -"([#2384](https://github.com/adap/flower/pull/2384), " -"[#2425](https://github.com/adap/flower/pull/2425), " -"[#2526](https://github.com/adap/flower/pull/2526), " -"[#2302](https://github.com/adap/flower/pull/2302), " -"[#2545](https://github.com/adap/flower/pull/2545))" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.setup_stage:1 +#: of +msgid "Execute the 'setup' stage." msgstr "" -#: ../../source/ref-changelog.md:182 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"**General updates to Flower Baselines** " -"([#2301](https://github.com/adap/flower/pull/2301), " -"[#2305](https://github.com/adap/flower/pull/2305), " -"[#2307](https://github.com/adap/flower/pull/2307), " -"[#2327](https://github.com/adap/flower/pull/2327), " -"[#2435](https://github.com/adap/flower/pull/2435), " -"[#2462](https://github.com/adap/flower/pull/2462), " -"[#2463](https://github.com/adap/flower/pull/2463), " -"[#2461](https://github.com/adap/flower/pull/2461), " -"[#2469](https://github.com/adap/flower/pull/2469), " -"[#2466](https://github.com/adap/flower/pull/2466), " -"[#2471](https://github.com/adap/flower/pull/2471), " -"[#2472](https://github.com/adap/flower/pull/2472), " -"[#2470](https://github.com/adap/flower/pull/2470))" +":py:obj:`share_keys_stage " +"`\\ " +"\\(driver\\, context\\, state\\)" msgstr "" -#: ../../source/ref-changelog.md:184 -msgid "" -"**General updates to the simulation engine** " -"([#2331](https://github.com/adap/flower/pull/2331), " -"[#2447](https://github.com/adap/flower/pull/2447), " -"[#2448](https://github.com/adap/flower/pull/2448), " -"[#2294](https://github.com/adap/flower/pull/2294))" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.share_keys_stage:1 +#: of +msgid "Execute the 'share keys' stage." 
msgstr "" -#: ../../source/ref-changelog.md:186 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"**General updates to Flower SDKs** " -"([#2288](https://github.com/adap/flower/pull/2288), " -"[#2429](https://github.com/adap/flower/pull/2429), " -"[#2555](https://github.com/adap/flower/pull/2555), " -"[#2543](https://github.com/adap/flower/pull/2543), " -"[#2544](https://github.com/adap/flower/pull/2544), " -"[#2597](https://github.com/adap/flower/pull/2597), " -"[#2623](https://github.com/adap/flower/pull/2623))" +":py:obj:`unmask_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" -#: ../../source/ref-changelog.md:188 -msgid "" -"**General improvements** " -"([#2309](https://github.com/adap/flower/pull/2309), " -"[#2310](https://github.com/adap/flower/pull/2310), " -"[#2313](https://github.com/adap/flower/pull/2313), " -"[#2316](https://github.com/adap/flower/pull/2316), " -"[#2317](https://github.com/adap/flower/pull/2317), " -"[#2349](https://github.com/adap/flower/pull/2349), " -"[#2360](https://github.com/adap/flower/pull/2360), " -"[#2402](https://github.com/adap/flower/pull/2402), " -"[#2446](https://github.com/adap/flower/pull/2446), " -"[#2561](https://github.com/adap/flower/pull/2561), " -"[#2273](https://github.com/adap/flower/pull/2273), " -"[#2267](https://github.com/adap/flower/pull/2267), " -"[#2274](https://github.com/adap/flower/pull/2274), " -"[#2275](https://github.com/adap/flower/pull/2275), " -"[#2432](https://github.com/adap/flower/pull/2432), " -"[#2251](https://github.com/adap/flower/pull/2251), " -"[#2321](https://github.com/adap/flower/pull/2321), " -"[#1936](https://github.com/adap/flower/pull/1936), " -"[#2408](https://github.com/adap/flower/pull/2408), " -"[#2413](https://github.com/adap/flower/pull/2413), " -"[#2401](https://github.com/adap/flower/pull/2401), " -"[#2531](https://github.com/adap/flower/pull/2531), " -"[#2534](https://github.com/adap/flower/pull/2534), " -"[#2535](https://github.com/adap/flower/pull/2535), " -"[#2521](https://github.com/adap/flower/pull/2521), " -"[#2553](https://github.com/adap/flower/pull/2553), " -"[#2596](https://github.com/adap/flower/pull/2596))" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.unmask_stage:1 +#: of +msgid "Execute the 'unmask' stage." msgstr "" -#: ../../source/ref-changelog.md:190 ../../source/ref-changelog.md:280 -#: ../../source/ref-changelog.md:344 ../../source/ref-changelog.md:398 -#: ../../source/ref-changelog.md:465 -msgid "Flower received many improvements under the hood, too many to list here." 
+#: ../../source/ref-api/flwr.server.workflow.SecAggWorkflow.rst:2 +msgid "SecAggWorkflow" msgstr "" -#: ../../source/ref-changelog.md:194 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:1 of msgid "" -"**Remove support for Python 3.7** " -"([#2280](https://github.com/adap/flower/pull/2280), " -"[#2299](https://github.com/adap/flower/pull/2299), " -"[#2304](https://github.com/adap/flower/pull/2304), " -"[#2306](https://github.com/adap/flower/pull/2306), " -"[#2355](https://github.com/adap/flower/pull/2355), " -"[#2356](https://github.com/adap/flower/pull/2356))" +"Bases: " +":py:class:`~flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow`" msgstr "" -#: ../../source/ref-changelog.md:196 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:3 of msgid "" -"Python 3.7 support was deprecated in Flower 1.5, and this release removes" -" support. Flower now requires Python 3.8." +"The SecAgg protocol ensures the secure summation of integer vectors owned" +" by multiple parties, without accessing any individual integer vector. " +"This workflow allows the server to compute the weighted average of model " +"parameters across all clients, ensuring individual contributions remain " +"private. This is achieved by clients sending both, a weighting factor and" +" a weighted version of the locally updated parameters, both of which are " +"masked for privacy. Specifically, each client uploads \"[w, w * params]\"" +" with masks, where weighting factor 'w' is the number of examples " +"('num_examples') and 'params' represents the model parameters " +"('parameters') from the client's `FitRes`. The server then aggregates " +"these contributions to compute the weighted average of model parameters." msgstr "" -#: ../../source/ref-changelog.md:198 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:14 of msgid "" -"**Remove experimental argument** `rest` **from** `start_client` " -"([#2324](https://github.com/adap/flower/pull/2324))" +"The protocol involves four main stages: - 'setup': Send SecAgg " +"configuration to clients and collect their public keys. - 'share keys': " +"Broadcast public keys among clients and collect encrypted secret" msgstr "" -#: ../../source/ref-changelog.md:200 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:54 of msgid "" -"The (still experimental) argument `rest` was removed from `start_client` " -"and `start_numpy_client`. Use `transport=\"rest\"` to opt into the " -"experimental REST API instead." +"Each client's private key is split into N shares under the SecAgg " +"protocol, where N is the number of selected clients." msgstr "" -#: ../../source/ref-changelog.md:202 -msgid "v1.5.0 (2023-08-31)" +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:56 of +msgid "" +"Generally, higher `reconstruction_threshold` means better privacy " +"guarantees but less tolerance to dropouts." msgstr "" -#: ../../source/ref-changelog.md:208 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:60 of msgid "" -"`Adam Narozniak`, `Anass Anhari`, `Charles Beauville`, `Dana-Farber`, " -"`Daniel J. 
Beutel`, `Daniel Nata Nugraha`, `Edoardo Gabrielli`, `Gustavo " -"Bertoli`, `Heng Pan`, `Javier`, `Mahdi`, `Steven Hé (Sīchàng)`, `Taner " -"Topal`, `achiverram28`, `danielnugraha`, `eunchung`, `ruthgal` " +"When `reconstruction_threshold` is a float, it is interpreted as the " +"proportion of the number of all selected clients needed for the " +"reconstruction of a private key. This feature enables flexibility in " +"setting the security threshold relative to the number of selected " +"clients." msgstr "" -#: ../../source/ref-changelog.md:212 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:64 of msgid "" -"**Introduce new simulation engine** " -"([#1969](https://github.com/adap/flower/pull/1969), " -"[#2221](https://github.com/adap/flower/pull/2221), " -"[#2248](https://github.com/adap/flower/pull/2248))" +"`reconstruction_threshold`, and the quantization parameters " +"(`clipping_range`, `quantization_range`, `modulus_range`) play critical " +"roles in balancing privacy, robustness, and efficiency within the SecAgg " +"protocol." msgstr "" -#: ../../source/ref-changelog.md:214 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"The new simulation engine has been rewritten from the ground up, yet it " -"remains fully backwards compatible. It offers much improved stability and" -" memory handling, especially when working with GPUs. Simulations " -"transparently adapt to different settings to scale simulation in CPU-" -"only, CPU+GPU, multi-GPU, or multi-node multi-GPU environments." +":py:obj:`collect_masked_vectors_stage " +"`\\ " +"\\(driver\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:216 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"Comprehensive documentation includes a new [how-to run " -"simulations](https://flower.ai/docs/framework/how-to-run-" -"simulations.html) guide, new [simulation-" -"pytorch](https://flower.ai/docs/examples/simulation-pytorch.html) and " -"[simulation-tensorflow](https://flower.ai/docs/examples/simulation-" -"tensorflow.html) notebooks, and a new [YouTube tutorial " -"series](https://www.youtube.com/watch?v=cRebUIGB5RU&list=PLNG4feLHqCWlnj8a_E1A_n5zr2-8pafTB)." 
+":py:obj:`setup_stage `\\" +" \\(driver\\, context\\, state\\)" msgstr "" -#: ../../source/ref-changelog.md:218 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"**Restructure Flower Docs** " -"([#1824](https://github.com/adap/flower/pull/1824), " -"[#1865](https://github.com/adap/flower/pull/1865), " -"[#1884](https://github.com/adap/flower/pull/1884), " -"[#1887](https://github.com/adap/flower/pull/1887), " -"[#1919](https://github.com/adap/flower/pull/1919), " -"[#1922](https://github.com/adap/flower/pull/1922), " -"[#1920](https://github.com/adap/flower/pull/1920), " -"[#1923](https://github.com/adap/flower/pull/1923), " -"[#1924](https://github.com/adap/flower/pull/1924), " -"[#1962](https://github.com/adap/flower/pull/1962), " -"[#2006](https://github.com/adap/flower/pull/2006), " -"[#2133](https://github.com/adap/flower/pull/2133), " -"[#2203](https://github.com/adap/flower/pull/2203), " -"[#2215](https://github.com/adap/flower/pull/2215), " -"[#2122](https://github.com/adap/flower/pull/2122), " -"[#2223](https://github.com/adap/flower/pull/2223), " -"[#2219](https://github.com/adap/flower/pull/2219), " -"[#2232](https://github.com/adap/flower/pull/2232), " -"[#2233](https://github.com/adap/flower/pull/2233), " -"[#2234](https://github.com/adap/flower/pull/2234), " -"[#2235](https://github.com/adap/flower/pull/2235), " -"[#2237](https://github.com/adap/flower/pull/2237), " -"[#2238](https://github.com/adap/flower/pull/2238), " -"[#2242](https://github.com/adap/flower/pull/2242), " -"[#2231](https://github.com/adap/flower/pull/2231), " -"[#2243](https://github.com/adap/flower/pull/2243), " -"[#2227](https://github.com/adap/flower/pull/2227))" +":py:obj:`share_keys_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" -#: ../../source/ref-changelog.md:220 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"Much effort went into a completely restructured Flower docs experience. " -"The documentation on [flower.ai/docs](https://flower.ai/docs) is now " -"divided into Flower Framework, Flower Baselines, Flower Android SDK, " -"Flower iOS SDK, and code example projects." +":py:obj:`unmask_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" -#: ../../source/ref-changelog.md:222 -msgid "" -"**Introduce Flower Swift SDK** " -"([#1858](https://github.com/adap/flower/pull/1858), " -"[#1897](https://github.com/adap/flower/pull/1897))" +#: ../../source/ref-api/flwr.simulation.rst:2 +msgid "simulation" msgstr "" -#: ../../source/ref-changelog.md:224 +#: ../../source/ref-api/flwr.simulation.rst:18::1 msgid "" -"This is the first preview release of the Flower Swift SDK. Flower support" -" on iOS is improving, and alongside the Swift SDK and code example, there" -" is now also an iOS quickstart tutorial." +":py:obj:`start_simulation `\\ \\(\\*\\," +" client\\_fn\\[\\, ...\\]\\)" msgstr "" -#: ../../source/ref-changelog.md:226 -msgid "" -"**Introduce Flower Android SDK** " -"([#2131](https://github.com/adap/flower/pull/2131))" +#: ../../source/ref-api/flwr.simulation.rst:18::1 +#: flwr.simulation.app.start_simulation:1 of +msgid "Start a Ray-based Flower simulation server." msgstr "" -#: ../../source/ref-changelog.md:228 +#: ../../source/ref-api/flwr.simulation.rst:18::1 msgid "" -"This is the first preview release of the Flower Kotlin SDK. 
Flower " -"support on Android is improving, and alongside the Kotlin SDK and code " -"example, there is now also an Android quickstart tutorial." +":py:obj:`run_simulation `\\ " +"\\(server\\_app\\, client\\_app\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:230 -msgid "" -"**Introduce new end-to-end testing infrastructure** " -"([#1842](https://github.com/adap/flower/pull/1842), " -"[#2071](https://github.com/adap/flower/pull/2071), " -"[#2072](https://github.com/adap/flower/pull/2072), " -"[#2068](https://github.com/adap/flower/pull/2068), " -"[#2067](https://github.com/adap/flower/pull/2067), " -"[#2069](https://github.com/adap/flower/pull/2069), " -"[#2073](https://github.com/adap/flower/pull/2073), " -"[#2070](https://github.com/adap/flower/pull/2070), " -"[#2074](https://github.com/adap/flower/pull/2074), " -"[#2082](https://github.com/adap/flower/pull/2082), " -"[#2084](https://github.com/adap/flower/pull/2084), " -"[#2093](https://github.com/adap/flower/pull/2093), " -"[#2109](https://github.com/adap/flower/pull/2109), " -"[#2095](https://github.com/adap/flower/pull/2095), " -"[#2140](https://github.com/adap/flower/pull/2140), " -"[#2137](https://github.com/adap/flower/pull/2137), " -"[#2165](https://github.com/adap/flower/pull/2165))" +#: ../../source/ref-api/flwr.simulation.rst:18::1 +#: flwr.simulation.run_simulation.run_simulation:1 of +msgid "Run a Flower App using the Simulation Engine." msgstr "" -#: ../../source/ref-changelog.md:232 -msgid "" -"A new testing infrastructure ensures that new changes stay compatible " -"with existing framework integrations or strategies." +#: ../../source/ref-api/flwr.simulation.run_simulation.rst:2 +msgid "run\\_simulation" msgstr "" -#: ../../source/ref-changelog.md:234 -msgid "**Deprecate Python 3.7**" +#: flwr.simulation.run_simulation.run_simulation:3 of +msgid "" +"The `ServerApp` to be executed. It will send messages to different " +"`ClientApp` instances running on different (virtual) SuperNodes." msgstr "" -#: ../../source/ref-changelog.md:236 +#: flwr.simulation.run_simulation.run_simulation:6 of msgid "" -"Since Python 3.7 reached its end of life (EOL) on 2023-06-27, support for" -" Python 3.7 is now deprecated and will be removed in an upcoming release." +"The `ClientApp` to be executed by each of the SuperNodes. It will receive" +" messages sent by the `ServerApp`." msgstr "" -#: ../../source/ref-changelog.md:238 +#: flwr.simulation.run_simulation.run_simulation:9 of msgid "" -"**Add new** `FedTrimmedAvg` **strategy** " -"([#1769](https://github.com/adap/flower/pull/1769), " -"[#1853](https://github.com/adap/flower/pull/1853))" +"Number of nodes that run a ClientApp. They can be sampled by a Driver in " +"the ServerApp and receive a Message describing what the ClientApp should " +"perform." msgstr "" -#: ../../source/ref-changelog.md:240 -msgid "" -"The new `FedTrimmedAvg` strategy implements Trimmed Mean by [Dong Yin, " -"2018](https://arxiv.org/abs/1803.01498)." +#: flwr.simulation.run_simulation.run_simulation:13 of +msgid "A simulation backend that runs `ClientApp`s." msgstr "" -#: ../../source/ref-changelog.md:242 +#: flwr.simulation.run_simulation.run_simulation:15 of msgid "" -"**Introduce start_driver** " -"([#1697](https://github.com/adap/flower/pull/1697))" +"'A dictionary, e.g {\"\": , \"\": } to " +"configure a backend. Values supported in are those included by " +"`flwr.common.typing.ConfigsRecordValues`." 
msgstr "" -#: ../../source/ref-changelog.md:244 +#: flwr.simulation.run_simulation.run_simulation:19 of msgid "" -"In addition to `start_server` and using the raw Driver API, there is a " -"new `start_driver` function that allows for running `start_server` " -"scripts as a Flower driver with only a single-line code change. Check out" -" the `mt-pytorch` code example to see a working example using " -"`start_driver`." +"A boolean to indicate whether to enable GPU growth on the main thread. " +"This is desirable if you make use of a TensorFlow model on your " +"`ServerApp` while having your `ClientApp` running on the same GPU. " +"Without enabling this, you might encounter an out-of-memory error because" +" TensorFlow, by default, allocates all GPU memory. Read more about how " +"`tf.config.experimental.set_memory_growth()` works in the TensorFlow " +"documentation: https://www.tensorflow.org/api/stable." msgstr "" -#: ../../source/ref-changelog.md:246 +#: flwr.simulation.run_simulation.run_simulation:26 of msgid "" -"**Add parameter aggregation to** `mt-pytorch` **code example** " -"([#1785](https://github.com/adap/flower/pull/1785))" +"When diabled, only INFO, WARNING and ERROR log messages will be shown. If" +" enabled, DEBUG-level logs will be displayed." msgstr "" -#: ../../source/ref-changelog.md:248 -msgid "" -"The `mt-pytorch` example shows how to aggregate parameters when writing a" -" driver script. The included `driver.py` and `server.py` have been " -"aligned to demonstrate both the low-level way and the high-level way of " -"building server-side logic." +#: ../../source/ref-api/flwr.simulation.start_simulation.rst:2 +msgid "start\\_simulation" msgstr "" -#: ../../source/ref-changelog.md:250 +#: flwr.simulation.app.start_simulation:3 of msgid "" -"**Migrate experimental REST API to Starlette** " -"([2171](https://github.com/adap/flower/pull/2171))" +"A function creating client instances. The function must take a single " +"`str` argument called `cid`. It should return a single client instance of" +" type Client. Note that the created client instances are ephemeral and " +"will often be destroyed after a single method invocation. Since client " +"instances are not long-lived, they should not attempt to carry state over" +" method invocations. Any state required by the instance (model, dataset, " +"hyperparameters, ...) should be (re-)created in either the call to " +"`client_fn` or the call to any of the client methods (e.g., load " +"evaluation data in the `evaluate` method itself)." msgstr "" -#: ../../source/ref-changelog.md:252 +#: flwr.simulation.app.start_simulation:13 of msgid "" -"The (experimental) REST API used to be implemented in " -"[FastAPI](https://fastapi.tiangolo.com/), but it has now been migrated to" -" use [Starlette](https://www.starlette.io/) directly." +"The total number of clients in this simulation. This must be set if " +"`clients_ids` is not set and vice-versa." msgstr "" -#: ../../source/ref-changelog.md:254 +#: flwr.simulation.app.start_simulation:16 of msgid "" -"Please note: The REST request-response API is still experimental and will" -" likely change significantly over time." +"List `client_id`s for each client. This is only required if `num_clients`" +" is not set. Setting both `num_clients` and `clients_ids` with " +"`len(clients_ids)` not equal to `num_clients` generates an error." 
msgstr "" -#: ../../source/ref-changelog.md:256 +#: flwr.simulation.app.start_simulation:20 of msgid "" -"**Introduce experimental gRPC request-response API** " -"([#1867](https://github.com/adap/flower/pull/1867), " -"[#1901](https://github.com/adap/flower/pull/1901))" +"CPU and GPU resources for a single client. Supported keys are `num_cpus` " +"and `num_gpus`. To understand the GPU utilization caused by `num_gpus`, " +"as well as using custom resources, please consult the Ray documentation." msgstr "" -#: ../../source/ref-changelog.md:258 +#: flwr.simulation.app.start_simulation:25 of msgid "" -"In addition to the existing gRPC API (based on bidirectional streaming) " -"and the experimental REST API, there is now a new gRPC API that uses a " -"request-response model to communicate with client nodes." +"An implementation of the abstract base class `flwr.server.Server`. If no " +"instance is provided, then `start_server` will create one." msgstr "" -#: ../../source/ref-changelog.md:260 +#: flwr.simulation.app.start_simulation:31 of msgid "" -"Please note: The gRPC request-response API is still experimental and will" -" likely change significantly over time." +"An implementation of the abstract base class `flwr.server.Strategy`. If " +"no strategy is provided, then `start_server` will use " +"`flwr.server.strategy.FedAvg`." msgstr "" -#: ../../source/ref-changelog.md:262 +#: flwr.simulation.app.start_simulation:35 of msgid "" -"**Replace the experimental** `start_client(rest=True)` **with the new** " -"`start_client(transport=\"rest\")` " -"([#1880](https://github.com/adap/flower/pull/1880))" +"An implementation of the abstract base class `flwr.server.ClientManager`." +" If no implementation is provided, then `start_simulation` will use " +"`flwr.server.client_manager.SimpleClientManager`." msgstr "" -#: ../../source/ref-changelog.md:264 +#: flwr.simulation.app.start_simulation:39 of msgid "" -"The (experimental) `start_client` argument `rest` was deprecated in " -"favour of a new argument `transport`. `start_client(transport=\"rest\")` " -"will yield the same behaviour as `start_client(rest=True)` did before. " -"All code should migrate to the new argument `transport`. The deprecated " -"argument `rest` will be removed in a future release." +"Optional dictionary containing arguments for the call to `ray.init`. If " +"ray_init_args is None (the default), Ray will be initialized with the " +"following default args: { \"ignore_reinit_error\": True, " +"\"include_dashboard\": False } An empty dictionary can be used " +"(ray_init_args={}) to prevent any arguments from being passed to " +"ray.init." msgstr "" -#: ../../source/ref-changelog.md:266 +#: flwr.simulation.app.start_simulation:39 of msgid "" -"**Add a new gRPC option** " -"([#2197](https://github.com/adap/flower/pull/2197))" +"Optional dictionary containing arguments for the call to `ray.init`. If " +"ray_init_args is None (the default), Ray will be initialized with the " +"following default args:" msgstr "" -#: ../../source/ref-changelog.md:268 -msgid "" -"We now start a gRPC server with the `grpc.keepalive_permit_without_calls`" -" option set to 0 by default. This prevents the clients from sending " -"keepalive pings when there is no outstanding stream." 
+#: flwr.simulation.app.start_simulation:43 of
+msgid "{ \"ignore_reinit_error\": True, \"include_dashboard\": False }"
msgstr ""

-#: ../../source/ref-changelog.md:270
+#: flwr.simulation.app.start_simulation:45 of
msgid ""
-"**Improve example notebooks** "
-"([#2005](https://github.com/adap/flower/pull/2005))"
+"An empty dictionary can be used (ray_init_args={}) to prevent any "
+"arguments from being passed to ray.init."
msgstr ""

-#: ../../source/ref-changelog.md:272
-msgid "There's a new 30min Federated Learning PyTorch tutorial!"
+#: flwr.simulation.app.start_simulation:48 of
+msgid ""
+"Set to True to prevent `ray.shutdown()` in case "
+"`ray.is_initialized()=True`."
msgstr ""

-#: ../../source/ref-changelog.md:274
+#: flwr.simulation.app.start_simulation:50 of
msgid ""
-"**Example updates** ([#1772](https://github.com/adap/flower/pull/1772), "
-"[#1873](https://github.com/adap/flower/pull/1873), "
-"[#1981](https://github.com/adap/flower/pull/1981), "
-"[#1988](https://github.com/adap/flower/pull/1988), "
-"[#1984](https://github.com/adap/flower/pull/1984), "
-"[#1982](https://github.com/adap/flower/pull/1982), "
-"[#2112](https://github.com/adap/flower/pull/2112), "
-"[#2144](https://github.com/adap/flower/pull/2144), "
-"[#2174](https://github.com/adap/flower/pull/2174), "
-"[#2225](https://github.com/adap/flower/pull/2225), "
-"[#2183](https://github.com/adap/flower/pull/2183))"
+"Optionally specify the type of actor to use. The actor object, which "
+"persists throughout the simulation, will be the process in charge of "
+"executing a ClientApp wrapping input argument `client_fn`."
msgstr ""

-#: ../../source/ref-changelog.md:276
+#: flwr.simulation.app.start_simulation:54 of
msgid ""
-"Many examples have received significant updates, including simplified "
-"advanced-tensorflow and advanced-pytorch examples, improved macOS "
-"compatibility of TensorFlow examples, and code examples for simulation. A"
-" major upgrade is that all code examples now have a `requirements.txt` "
-"(in addition to `pyproject.toml`)."
+"If you want to create your own Actor classes, you might need to pass some"
+" input argument. You can use this dictionary for such a purpose."
msgstr ""

-#: ../../source/ref-changelog.md:278
+#: flwr.simulation.app.start_simulation:57 of
msgid ""
-"**General improvements** "
-"([#1872](https://github.com/adap/flower/pull/1872), "
-"[#1866](https://github.com/adap/flower/pull/1866), "
-"[#1884](https://github.com/adap/flower/pull/1884), "
-"[#1837](https://github.com/adap/flower/pull/1837), "
-"[#1477](https://github.com/adap/flower/pull/1477), "
-"[#2171](https://github.com/adap/flower/pull/2171))"
+"(default: \"DEFAULT\") Optional string (\"DEFAULT\" or \"SPREAD\") for "
+"the VCE to choose in which node the actor is placed. If you are an "
+"advanced user who needs more control, you can use lower-level scheduling "
+"strategies to pin actors to specific compute nodes (e.g. via "
+"NodeAffinitySchedulingStrategy). Please note this is an advanced feature."
+" For all details, please refer to the Ray documentation: "
+"https://docs.ray.io/en/latest/ray-core/scheduling/index.html"
msgstr ""

-#: ../../source/ref-changelog.md:284 ../../source/ref-changelog.md:348
-#: ../../source/ref-changelog.md:406 ../../source/ref-changelog.md:475
-#: ../../source/ref-changelog.md:537
+#: flwr.simulation.app.start_simulation:66 of
+msgid "**hist** -- Object containing metrics from training."
+msgstr "" + +#: ../../source/ref-changelog.md:1 +msgid "Changelog" +msgstr "" + +#: ../../source/ref-changelog.md:3 +msgid "Unreleased" +msgstr "" + +#: ../../source/ref-changelog.md:5 ../../source/ref-changelog.md:19 +#: ../../source/ref-changelog.md:83 ../../source/ref-changelog.md:176 +#: ../../source/ref-changelog.md:276 ../../source/ref-changelog.md:360 +#: ../../source/ref-changelog.md:424 ../../source/ref-changelog.md:482 +#: ../../source/ref-changelog.md:551 ../../source/ref-changelog.md:680 +#: ../../source/ref-changelog.md:722 ../../source/ref-changelog.md:789 +#: ../../source/ref-changelog.md:855 ../../source/ref-changelog.md:900 +#: ../../source/ref-changelog.md:939 ../../source/ref-changelog.md:972 +#: ../../source/ref-changelog.md:1022 +msgid "What's new?" +msgstr "" + +#: ../../source/ref-changelog.md:7 ../../source/ref-changelog.md:71 +#: ../../source/ref-changelog.md:146 ../../source/ref-changelog.md:258 +#: ../../source/ref-changelog.md:348 ../../source/ref-changelog.md:412 +#: ../../source/ref-changelog.md:470 ../../source/ref-changelog.md:539 +#: ../../source/ref-changelog.md:601 ../../source/ref-changelog.md:620 +#: ../../source/ref-changelog.md:776 ../../source/ref-changelog.md:847 +#: ../../source/ref-changelog.md:884 ../../source/ref-changelog.md:927 +msgid "Incompatible changes" +msgstr "" + +#: ../../source/ref-changelog.md:9 ../../source/ref-changelog.md:73 +#: ../../source/ref-changelog.md:350 ../../source/ref-changelog.md:414 +#: ../../source/ref-changelog.md:472 ../../source/ref-changelog.md:541 +#: ../../source/ref-changelog.md:603 msgid "None" msgstr "" -#: ../../source/ref-changelog.md:286 -msgid "v1.4.0 (2023-04-21)" +#: ../../source/ref-changelog.md:11 +msgid "v1.8.0 (2024-04-03)" msgstr "" -#: ../../source/ref-changelog.md:292 +#: ../../source/ref-changelog.md:13 ../../source/ref-changelog.md:77 +#: ../../source/ref-changelog.md:170 ../../source/ref-changelog.md:270 +#: ../../source/ref-changelog.md:354 ../../source/ref-changelog.md:418 +#: ../../source/ref-changelog.md:476 ../../source/ref-changelog.md:545 +#: ../../source/ref-changelog.md:614 +msgid "Thanks to our contributors" +msgstr "" + +#: ../../source/ref-changelog.md:15 ../../source/ref-changelog.md:79 +#: ../../source/ref-changelog.md:172 ../../source/ref-changelog.md:272 +#: ../../source/ref-changelog.md:356 ../../source/ref-changelog.md:420 +#: ../../source/ref-changelog.md:478 msgid "" -"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " -"`Chenyang Ma (Danny)`, `Daniel J. Beutel`, `Edoardo`, `Gautam Jajoo`, " -"`Iacob-Alexandru-Andrei`, `JDRanpariya`, `Jean Charle Yaacoub`, `Kunal " -"Sarkhel`, `L. 
Jiang`, `Lennart Behme`, `Max Kapsecker`, `Michał`, `Nic " -"Lane`, `Nikolaos Episkopos`, `Ragy`, `Saurav Maheshkar`, `Semo Yang`, " -"`Steve Laskaridis`, `Steven Hé (Sīchàng)`, `Taner Topal`" +"We would like to give our special thanks to all the contributors who made" +" the new version of Flower possible (in `git shortlog` order):" msgstr "" -#: ../../source/ref-changelog.md:296 +#: ../../source/ref-changelog.md:17 msgid "" -"**Introduce support for XGBoost (**`FedXgbNnAvg` **strategy and " -"example)** ([#1694](https://github.com/adap/flower/pull/1694), " -"[#1709](https://github.com/adap/flower/pull/1709), " -"[#1715](https://github.com/adap/flower/pull/1715), " -"[#1717](https://github.com/adap/flower/pull/1717), " -"[#1763](https://github.com/adap/flower/pull/1763), " -"[#1795](https://github.com/adap/flower/pull/1795))" +"`Adam Narozniak`, `Charles Beauville`, `Daniel J. Beutel`, `Daniel Nata " +"Nugraha`, `Danny`, `Gustavo Bertoli`, `Heng Pan`, `Ikko Eltociear " +"Ashimine`, `Jack Cook`, `Javier`, `Raj Parekh`, `Robert Steiner`, " +"`Sebastian van der Voort`, `Taner Topal`, `Yan Gao`, `mohammadnaseri`, " +"`tabdar-khan` " msgstr "" -#: ../../source/ref-changelog.md:298 +#: ../../source/ref-changelog.md:21 msgid "" -"XGBoost is a tree-based ensemble machine learning algorithm that uses " -"gradient boosting to improve model accuracy. We added a new `FedXgbNnAvg`" -" " -"[strategy](https://github.com/adap/flower/tree/main/src/py/flwr/server/strategy/fedxgb_nn_avg.py)," -" and a [code example](https://github.com/adap/flower/tree/main/examples" -"/xgboost-quickstart) that demonstrates the usage of this new strategy in " -"an XGBoost project." +"**Introduce Flower Next high-level API (stable)** " +"([#3002](https://github.com/adap/flower/pull/3002), " +"[#2934](https://github.com/adap/flower/pull/2934), " +"[#2958](https://github.com/adap/flower/pull/2958), " +"[#3173](https://github.com/adap/flower/pull/3173), " +"[#3174](https://github.com/adap/flower/pull/3174), " +"[#2923](https://github.com/adap/flower/pull/2923), " +"[#2691](https://github.com/adap/flower/pull/2691), " +"[#3079](https://github.com/adap/flower/pull/3079), " +"[#2961](https://github.com/adap/flower/pull/2961), " +"[#2924](https://github.com/adap/flower/pull/2924), " +"[#3166](https://github.com/adap/flower/pull/3166), " +"[#3031](https://github.com/adap/flower/pull/3031), " +"[#3057](https://github.com/adap/flower/pull/3057), " +"[#3000](https://github.com/adap/flower/pull/3000), " +"[#3113](https://github.com/adap/flower/pull/3113), " +"[#2957](https://github.com/adap/flower/pull/2957), " +"[#3183](https://github.com/adap/flower/pull/3183), " +"[#3180](https://github.com/adap/flower/pull/3180), " +"[#3035](https://github.com/adap/flower/pull/3035), " +"[#3189](https://github.com/adap/flower/pull/3189), " +"[#3185](https://github.com/adap/flower/pull/3185), " +"[#3190](https://github.com/adap/flower/pull/3190), " +"[#3191](https://github.com/adap/flower/pull/3191), " +"[#3195](https://github.com/adap/flower/pull/3195), " +"[#3197](https://github.com/adap/flower/pull/3197))" msgstr "" -#: ../../source/ref-changelog.md:300 +#: ../../source/ref-changelog.md:23 msgid "" -"**Introduce iOS SDK (preview)** " -"([#1621](https://github.com/adap/flower/pull/1621), " -"[#1764](https://github.com/adap/flower/pull/1764))" +"The Flower Next high-level API is stable! Flower Next is the future of " +"Flower - all new features (like Flower Mods) will be built on top of it. 
" +"You can start to migrate your existing projects to Flower Next by using " +"`ServerApp` and `ClientApp` (check out `quickstart-pytorch` or " +"`quickstart-tensorflow`, a detailed migration guide will follow shortly)." +" Flower Next allows you to run multiple projects concurrently (we call " +"this multi-run) and execute the same project in either simulation " +"environments or deployment environments without having to change a single" +" line of code. The best part? It's fully compatible with existing Flower " +"projects that use `Strategy`, `NumPyClient` & co." msgstr "" -#: ../../source/ref-changelog.md:302 +#: ../../source/ref-changelog.md:25 msgid "" -"This is a major update for anyone wanting to implement Federated Learning" -" on iOS mobile devices. We now have a swift iOS SDK present under " -"[src/swift/flwr](https://github.com/adap/flower/tree/main/src/swift/flwr)" -" that will facilitate greatly the app creating process. To showcase its " -"use, the [iOS " -"example](https://github.com/adap/flower/tree/main/examples/ios) has also " -"been updated!" +"**Introduce Flower Next low-level API (preview)** " +"([#3062](https://github.com/adap/flower/pull/3062), " +"[#3034](https://github.com/adap/flower/pull/3034), " +"[#3069](https://github.com/adap/flower/pull/3069))" msgstr "" -#: ../../source/ref-changelog.md:304 +#: ../../source/ref-changelog.md:27 msgid "" -"**Introduce new \"What is Federated Learning?\" tutorial** " -"([#1657](https://github.com/adap/flower/pull/1657), " -"[#1721](https://github.com/adap/flower/pull/1721))" +"In addition to the Flower Next *high-level* API that uses `Strategy`, " +"`NumPyClient` & co, Flower 1.8 also comes with a preview version of the " +"new Flower Next *low-level* API. The low-level API allows for granular " +"control of every aspect of the learning process by sending/receiving " +"individual messages to/from client nodes. The new `ServerApp` supports " +"registering a custom `main` function that allows writing custom training " +"loops for methods like async FL, cyclic training, or federated analytics." +" The new `ClientApp` supports registering `train`, `evaluate` and `query`" +" functions that can access the raw message received from the `ServerApp`." +" New abstractions like `RecordSet`, `Message` and `Context` further " +"enable sending multiple models, multiple sets of config values and " +"metrics, stateful computations on the client node and implementations of " +"custom SMPC protocols, to name just a few." msgstr "" -#: ../../source/ref-changelog.md:306 +#: ../../source/ref-changelog.md:29 msgid "" -"A new [entry-level tutorial](https://flower.ai/docs/framework/tutorial-" -"what-is-federated-learning.html) in our documentation explains the basics" -" of Fedetated Learning. It enables anyone who's unfamiliar with Federated" -" Learning to start their journey with Flower. Forward it to anyone who's " -"interested in Federated Learning!" 
+"**Introduce Flower Mods (preview)** " +"([#3054](https://github.com/adap/flower/pull/3054), " +"[#2911](https://github.com/adap/flower/pull/2911), " +"[#3083](https://github.com/adap/flower/pull/3083))" msgstr "" -#: ../../source/ref-changelog.md:308 +#: ../../source/ref-changelog.md:31 msgid "" -"**Introduce new Flower Baseline: FedProx MNIST** " -"([#1513](https://github.com/adap/flower/pull/1513), " -"[#1680](https://github.com/adap/flower/pull/1680), " -"[#1681](https://github.com/adap/flower/pull/1681), " -"[#1679](https://github.com/adap/flower/pull/1679))" +"Flower Modifiers (we call them Mods) can intercept messages and analyze, " +"edit or handle them directly. Mods can be used to develop pluggable " +"modules that work across different projects. Flower 1.8 already includes " +"mods to log the size of a message, the number of parameters sent over the" +" network, differential privacy with fixed clipping and adaptive clipping," +" local differential privacy and secure aggregation protocols SecAgg and " +"SecAgg+. The Flower Mods API is released as a preview, but researchers " +"can already use it to experiment with arbirtrary SMPC protocols." msgstr "" -#: ../../source/ref-changelog.md:310 +#: ../../source/ref-changelog.md:33 msgid "" -"This new baseline replicates the MNIST+CNN task from the paper [Federated" -" Optimization in Heterogeneous Networks (Li et al., " -"2018)](https://arxiv.org/abs/1812.06127). It uses the `FedProx` strategy," -" which aims at making convergence more robust in heterogeneous settings." +"**Fine-tune LLMs with LLM FlowerTune** " +"([#3029](https://github.com/adap/flower/pull/3029), " +"[#3089](https://github.com/adap/flower/pull/3089), " +"[#3092](https://github.com/adap/flower/pull/3092), " +"[#3100](https://github.com/adap/flower/pull/3100), " +"[#3114](https://github.com/adap/flower/pull/3114), " +"[#3162](https://github.com/adap/flower/pull/3162), " +"[#3172](https://github.com/adap/flower/pull/3172))" msgstr "" -#: ../../source/ref-changelog.md:312 +#: ../../source/ref-changelog.md:35 msgid "" -"**Introduce new Flower Baseline: FedAvg FEMNIST** " -"([#1655](https://github.com/adap/flower/pull/1655))" +"We are introducing LLM FlowerTune, an introductory example that " +"demonstrates federated LLM fine-tuning of pre-trained Llama2 models on " +"the Alpaca-GPT4 dataset. The example is built to be easily adapted to use" +" different models and/or datasets. Read our blog post [LLM FlowerTune: " +"Federated LLM Fine-tuning with Flower](https://flower.ai/blog/2024-03-14" +"-llm-flowertune-federated-llm-finetuning-with-flower/) for more details." msgstr "" -#: ../../source/ref-changelog.md:314 +#: ../../source/ref-changelog.md:37 msgid "" -"This new baseline replicates an experiment evaluating the performance of " -"the FedAvg algorithm on the FEMNIST dataset from the paper [LEAF: A " -"Benchmark for Federated Settings (Caldas et al., " -"2018)](https://arxiv.org/abs/1812.01097)." 
+"**Introduce built-in Differential Privacy (preview)** " +"([#2798](https://github.com/adap/flower/pull/2798), " +"[#2959](https://github.com/adap/flower/pull/2959), " +"[#3038](https://github.com/adap/flower/pull/3038), " +"[#3147](https://github.com/adap/flower/pull/3147), " +"[#2909](https://github.com/adap/flower/pull/2909), " +"[#2893](https://github.com/adap/flower/pull/2893), " +"[#2892](https://github.com/adap/flower/pull/2892), " +"[#3039](https://github.com/adap/flower/pull/3039), " +"[#3074](https://github.com/adap/flower/pull/3074))" msgstr "" -#: ../../source/ref-changelog.md:316 +#: ../../source/ref-changelog.md:39 msgid "" -"**Introduce (experimental) REST API** " -"([#1594](https://github.com/adap/flower/pull/1594), " -"[#1690](https://github.com/adap/flower/pull/1690), " -"[#1695](https://github.com/adap/flower/pull/1695), " -"[#1712](https://github.com/adap/flower/pull/1712), " -"[#1802](https://github.com/adap/flower/pull/1802), " -"[#1770](https://github.com/adap/flower/pull/1770), " -"[#1733](https://github.com/adap/flower/pull/1733))" +"Built-in Differential Privacy is here! Flower supports both central and " +"local differential privacy (DP). Central DP can be configured with either" +" fixed or adaptive clipping. The clipping can happen either on the " +"server-side or the client-side. Local DP does both clipping and noising " +"on the client-side. A new documentation page [explains Differential " +"Privacy approaches](https://flower.ai/docs/framework/explanation-" +"differential-privacy.html) and a new how-to guide describes [how to use " +"the new Differential Privacy components](https://flower.ai/docs/framework" +"/how-to-use-differential-privacy.html) in Flower." msgstr "" -#: ../../source/ref-changelog.md:318 +#: ../../source/ref-changelog.md:41 msgid "" -"A new REST API has been introduced as an alternative to the gRPC-based " -"communication stack. In this initial version, the REST API only supports " -"anonymous clients." +"**Introduce built-in Secure Aggregation (preview)** " +"([#3120](https://github.com/adap/flower/pull/3120), " +"[#3110](https://github.com/adap/flower/pull/3110), " +"[#3108](https://github.com/adap/flower/pull/3108))" msgstr "" -#: ../../source/ref-changelog.md:320 +#: ../../source/ref-changelog.md:43 msgid "" -"Please note: The REST API is still experimental and will likely change " -"significantly over time." +"Built-in Secure Aggregation is here! Flower now supports different secure" +" aggregation protocols out-of-the-box. The best part? You can add secure " +"aggregation to your Flower projects with only a few lines of code. In " +"this initial release, we inlcude support for SecAgg and SecAgg+, but more" +" protocols will be implemented shortly. We'll also add detailed docs that" +" explain secure aggregation and how to use it in Flower. You can already " +"check out the new code example that shows how to use Flower to easily " +"combine Federated Learning, Differential Privacy and Secure Aggregation " +"in the same project." 
msgstr "" -#: ../../source/ref-changelog.md:322 +#: ../../source/ref-changelog.md:45 msgid "" -"**Improve the (experimental) Driver API** " -"([#1663](https://github.com/adap/flower/pull/1663), " -"[#1666](https://github.com/adap/flower/pull/1666), " -"[#1667](https://github.com/adap/flower/pull/1667), " -"[#1664](https://github.com/adap/flower/pull/1664), " -"[#1675](https://github.com/adap/flower/pull/1675), " -"[#1676](https://github.com/adap/flower/pull/1676), " -"[#1693](https://github.com/adap/flower/pull/1693), " -"[#1662](https://github.com/adap/flower/pull/1662), " -"[#1794](https://github.com/adap/flower/pull/1794))" +"**Introduce** `flwr` **CLI (preview)** " +"([#2942](https://github.com/adap/flower/pull/2942), " +"[#3055](https://github.com/adap/flower/pull/3055), " +"[#3111](https://github.com/adap/flower/pull/3111), " +"[#3130](https://github.com/adap/flower/pull/3130), " +"[#3136](https://github.com/adap/flower/pull/3136), " +"[#3094](https://github.com/adap/flower/pull/3094), " +"[#3059](https://github.com/adap/flower/pull/3059), " +"[#3049](https://github.com/adap/flower/pull/3049), " +"[#3142](https://github.com/adap/flower/pull/3142))" msgstr "" -#: ../../source/ref-changelog.md:324 +#: ../../source/ref-changelog.md:47 msgid "" -"The Driver API is still an experimental feature, but this release " -"introduces some major upgrades. One of the main improvements is the " -"introduction of an SQLite database to store server state on disk (instead" -" of in-memory). Another improvement is that tasks (instructions or " -"results) that have been delivered will now be deleted. This greatly " -"improves the memory efficiency of a long-running Flower server." +"A new `flwr` CLI command allows creating new Flower projects (`flwr new`)" +" and then running them using the Simulation Engine (`flwr run`)." msgstr "" -#: ../../source/ref-changelog.md:326 +#: ../../source/ref-changelog.md:49 msgid "" -"**Fix spilling issues related to Ray during simulations** " -"([#1698](https://github.com/adap/flower/pull/1698))" +"**Introduce Flower Next Simulation Engine** " +"([#3024](https://github.com/adap/flower/pull/3024), " +"[#3061](https://github.com/adap/flower/pull/3061), " +"[#2997](https://github.com/adap/flower/pull/2997), " +"[#2783](https://github.com/adap/flower/pull/2783), " +"[#3184](https://github.com/adap/flower/pull/3184), " +"[#3075](https://github.com/adap/flower/pull/3075), " +"[#3047](https://github.com/adap/flower/pull/3047), " +"[#2998](https://github.com/adap/flower/pull/2998), " +"[#3009](https://github.com/adap/flower/pull/3009), " +"[#3008](https://github.com/adap/flower/pull/3008))" msgstr "" -#: ../../source/ref-changelog.md:328 +#: ../../source/ref-changelog.md:51 msgid "" -"While running long simulations, `ray` was sometimes spilling huge amounts" -" of data that would make the training unable to continue. This is now " -"fixed! 🎉" +"The Flower Simulation Engine can now run Flower Next projects. For " +"notebook environments, there's also a new `run_simulation` function that " +"can run `ServerApp` and `ClientApp`." 
msgstr "" -#: ../../source/ref-changelog.md:330 +#: ../../source/ref-changelog.md:53 msgid "" -"**Add new example using** `TabNet` **and Flower** " -"([#1725](https://github.com/adap/flower/pull/1725))" +"**Handle SuperNode connection errors** " +"([#2969](https://github.com/adap/flower/pull/2969))" msgstr "" -#: ../../source/ref-changelog.md:332 +#: ../../source/ref-changelog.md:55 msgid "" -"TabNet is a powerful and flexible framework for training machine learning" -" models on tabular data. We now have a federated example using Flower: " -"[quickstart-tabnet](https://github.com/adap/flower/tree/main/examples" -"/quickstart-tabnet)." +"A SuperNode will now try to reconnect indefinitely to the SuperLink in " +"case of connection errors. The arguments `--max-retries` and `--max-wait-" +"time` can now be passed to the `flower-client-app` command. `--max-" +"retries` will define the number of tentatives the client should make " +"before it gives up trying to reconnect to the SuperLink, and, `--max-" +"wait-time` defines the time before the SuperNode gives up trying to " +"reconnect to the SuperLink." msgstr "" -#: ../../source/ref-changelog.md:334 +#: ../../source/ref-changelog.md:57 msgid "" -"**Add new how-to guide for monitoring simulations** " -"([#1649](https://github.com/adap/flower/pull/1649))" +"**General updates to Flower Baselines** " +"([#2904](https://github.com/adap/flower/pull/2904), " +"[#2482](https://github.com/adap/flower/pull/2482), " +"[#2985](https://github.com/adap/flower/pull/2985), " +"[#2968](https://github.com/adap/flower/pull/2968))" msgstr "" -#: ../../source/ref-changelog.md:336 +#: ../../source/ref-changelog.md:59 msgid "" -"We now have a documentation guide to help users monitor their performance" -" during simulations." +"There's a new [FedStar](https://flower.ai/docs/baselines/fedstar.html) " +"baseline. Several other baselined have been updated as well." msgstr "" -#: ../../source/ref-changelog.md:338 +#: ../../source/ref-changelog.md:61 msgid "" -"**Add training metrics to** `History` **object during simulations** " -"([#1696](https://github.com/adap/flower/pull/1696))" +"**Improve documentation and translations** " +"([#3050](https://github.com/adap/flower/pull/3050), " +"[#3044](https://github.com/adap/flower/pull/3044), " +"[#3043](https://github.com/adap/flower/pull/3043), " +"[#2986](https://github.com/adap/flower/pull/2986), " +"[#3041](https://github.com/adap/flower/pull/3041), " +"[#3046](https://github.com/adap/flower/pull/3046), " +"[#3042](https://github.com/adap/flower/pull/3042), " +"[#2978](https://github.com/adap/flower/pull/2978), " +"[#2952](https://github.com/adap/flower/pull/2952), " +"[#3167](https://github.com/adap/flower/pull/3167), " +"[#2953](https://github.com/adap/flower/pull/2953), " +"[#3045](https://github.com/adap/flower/pull/3045), " +"[#2654](https://github.com/adap/flower/pull/2654), " +"[#3082](https://github.com/adap/flower/pull/3082), " +"[#2990](https://github.com/adap/flower/pull/2990), " +"[#2989](https://github.com/adap/flower/pull/2989))" msgstr "" -#: ../../source/ref-changelog.md:340 +#: ../../source/ref-changelog.md:63 msgid "" -"The `fit_metrics_aggregation_fn` can be used to aggregate training " -"metrics, but previous releases did not save the results in the `History` " -"object. This is now the case!" +"As usual, we merged many smaller and larger improvements to the " +"documentation. A special thank you goes to [Sebastian van der " +"Voort](https://github.com/svdvoort) for landing a big documentation PR!" 
msgstr "" -#: ../../source/ref-changelog.md:342 +#: ../../source/ref-changelog.md:65 +msgid "" +"**General updates to Flower Examples** " +"([3134](https://github.com/adap/flower/pull/3134), " +"[2996](https://github.com/adap/flower/pull/2996), " +"[2930](https://github.com/adap/flower/pull/2930), " +"[2967](https://github.com/adap/flower/pull/2967), " +"[2467](https://github.com/adap/flower/pull/2467), " +"[2910](https://github.com/adap/flower/pull/2910), " +"[#2918](https://github.com/adap/flower/pull/2918), " +"[#2773](https://github.com/adap/flower/pull/2773), " +"[#3063](https://github.com/adap/flower/pull/3063), " +"[#3116](https://github.com/adap/flower/pull/3116), " +"[#3117](https://github.com/adap/flower/pull/3117))" +msgstr "" + +#: ../../source/ref-changelog.md:67 +msgid "" +"Two new examples show federated training of a Vision Transformer (ViT) " +"and federated learning in a medical context using the popular MONAI " +"library. `quickstart-pytorch` and `quickstart-tensorflow` demonstrate the" +" new Flower Next `ServerApp` and `ClientApp`. Many other examples " +"received considerable updates as well." +msgstr "" + +#: ../../source/ref-changelog.md:69 msgid "" "**General improvements** " -"([#1659](https://github.com/adap/flower/pull/1659), " -"[#1646](https://github.com/adap/flower/pull/1646), " -"[#1647](https://github.com/adap/flower/pull/1647), " -"[#1471](https://github.com/adap/flower/pull/1471), " -"[#1648](https://github.com/adap/flower/pull/1648), " -"[#1651](https://github.com/adap/flower/pull/1651), " -"[#1652](https://github.com/adap/flower/pull/1652), " -"[#1653](https://github.com/adap/flower/pull/1653), " -"[#1659](https://github.com/adap/flower/pull/1659), " -"[#1665](https://github.com/adap/flower/pull/1665), " -"[#1670](https://github.com/adap/flower/pull/1670), " -"[#1672](https://github.com/adap/flower/pull/1672), " -"[#1677](https://github.com/adap/flower/pull/1677), " -"[#1684](https://github.com/adap/flower/pull/1684), " -"[#1683](https://github.com/adap/flower/pull/1683), " -"[#1686](https://github.com/adap/flower/pull/1686), " -"[#1682](https://github.com/adap/flower/pull/1682), " -"[#1685](https://github.com/adap/flower/pull/1685), " -"[#1692](https://github.com/adap/flower/pull/1692), " -"[#1705](https://github.com/adap/flower/pull/1705), " -"[#1708](https://github.com/adap/flower/pull/1708), " -"[#1711](https://github.com/adap/flower/pull/1711), " -"[#1713](https://github.com/adap/flower/pull/1713), " -"[#1714](https://github.com/adap/flower/pull/1714), " -"[#1718](https://github.com/adap/flower/pull/1718), " -"[#1716](https://github.com/adap/flower/pull/1716), " -"[#1723](https://github.com/adap/flower/pull/1723), " -"[#1735](https://github.com/adap/flower/pull/1735), " -"[#1678](https://github.com/adap/flower/pull/1678), " -"[#1750](https://github.com/adap/flower/pull/1750), " -"[#1753](https://github.com/adap/flower/pull/1753), " -"[#1736](https://github.com/adap/flower/pull/1736), " -"[#1766](https://github.com/adap/flower/pull/1766), " -"[#1760](https://github.com/adap/flower/pull/1760), " -"[#1775](https://github.com/adap/flower/pull/1775), " -"[#1776](https://github.com/adap/flower/pull/1776), " -"[#1777](https://github.com/adap/flower/pull/1777), " -"[#1779](https://github.com/adap/flower/pull/1779), " -"[#1784](https://github.com/adap/flower/pull/1784), " -"[#1773](https://github.com/adap/flower/pull/1773), " -"[#1755](https://github.com/adap/flower/pull/1755), " -"[#1789](https://github.com/adap/flower/pull/1789), " 
-"[#1788](https://github.com/adap/flower/pull/1788), " -"[#1798](https://github.com/adap/flower/pull/1798), " -"[#1799](https://github.com/adap/flower/pull/1799), " -"[#1739](https://github.com/adap/flower/pull/1739), " -"[#1800](https://github.com/adap/flower/pull/1800), " -"[#1804](https://github.com/adap/flower/pull/1804), " -"[#1805](https://github.com/adap/flower/pull/1805))" -msgstr "" - -#: ../../source/ref-changelog.md:350 -msgid "v1.3.0 (2023-02-06)" +"([#3171](https://github.com/adap/flower/pull/3171), " +"[3099](https://github.com/adap/flower/pull/3099), " +"[3003](https://github.com/adap/flower/pull/3003), " +"[3145](https://github.com/adap/flower/pull/3145), " +"[3017](https://github.com/adap/flower/pull/3017), " +"[3085](https://github.com/adap/flower/pull/3085), " +"[3012](https://github.com/adap/flower/pull/3012), " +"[3119](https://github.com/adap/flower/pull/3119), " +"[2991](https://github.com/adap/flower/pull/2991), " +"[2970](https://github.com/adap/flower/pull/2970), " +"[2980](https://github.com/adap/flower/pull/2980), " +"[3086](https://github.com/adap/flower/pull/3086), " +"[2932](https://github.com/adap/flower/pull/2932), " +"[2928](https://github.com/adap/flower/pull/2928), " +"[2941](https://github.com/adap/flower/pull/2941), " +"[2933](https://github.com/adap/flower/pull/2933), " +"[3181](https://github.com/adap/flower/pull/3181), " +"[2973](https://github.com/adap/flower/pull/2973), " +"[2992](https://github.com/adap/flower/pull/2992), " +"[2915](https://github.com/adap/flower/pull/2915), " +"[3040](https://github.com/adap/flower/pull/3040), " +"[3022](https://github.com/adap/flower/pull/3022), " +"[3032](https://github.com/adap/flower/pull/3032), " +"[2902](https://github.com/adap/flower/pull/2902), " +"[2931](https://github.com/adap/flower/pull/2931), " +"[3005](https://github.com/adap/flower/pull/3005), " +"[3132](https://github.com/adap/flower/pull/3132), " +"[3115](https://github.com/adap/flower/pull/3115), " +"[2944](https://github.com/adap/flower/pull/2944), " +"[3064](https://github.com/adap/flower/pull/3064), " +"[3106](https://github.com/adap/flower/pull/3106), " +"[2974](https://github.com/adap/flower/pull/2974), " +"[3178](https://github.com/adap/flower/pull/3178), " +"[2993](https://github.com/adap/flower/pull/2993), " +"[3186](https://github.com/adap/flower/pull/3186), " +"[3091](https://github.com/adap/flower/pull/3091), " +"[3125](https://github.com/adap/flower/pull/3125), " +"[3093](https://github.com/adap/flower/pull/3093), " +"[3013](https://github.com/adap/flower/pull/3013), " +"[3033](https://github.com/adap/flower/pull/3033), " +"[3133](https://github.com/adap/flower/pull/3133), " +"[3068](https://github.com/adap/flower/pull/3068), " +"[2916](https://github.com/adap/flower/pull/2916), " +"[2975](https://github.com/adap/flower/pull/2975), " +"[2984](https://github.com/adap/flower/pull/2984), " +"[2846](https://github.com/adap/flower/pull/2846), " +"[3077](https://github.com/adap/flower/pull/3077), " +"[3143](https://github.com/adap/flower/pull/3143), " +"[2921](https://github.com/adap/flower/pull/2921), " +"[3101](https://github.com/adap/flower/pull/3101), " +"[2927](https://github.com/adap/flower/pull/2927), " +"[2995](https://github.com/adap/flower/pull/2995), " +"[2972](https://github.com/adap/flower/pull/2972), " +"[2912](https://github.com/adap/flower/pull/2912), " +"[3065](https://github.com/adap/flower/pull/3065), " +"[3028](https://github.com/adap/flower/pull/3028), " +"[2922](https://github.com/adap/flower/pull/2922), " 
+"[2982](https://github.com/adap/flower/pull/2982), " +"[2914](https://github.com/adap/flower/pull/2914), " +"[3179](https://github.com/adap/flower/pull/3179), " +"[3080](https://github.com/adap/flower/pull/3080), " +"[2994](https://github.com/adap/flower/pull/2994), " +"[3187](https://github.com/adap/flower/pull/3187), " +"[2926](https://github.com/adap/flower/pull/2926), " +"[3018](https://github.com/adap/flower/pull/3018), " +"[3144](https://github.com/adap/flower/pull/3144), " +"[3011](https://github.com/adap/flower/pull/3011), " +"[#3152](https://github.com/adap/flower/pull/3152), " +"[#2836](https://github.com/adap/flower/pull/2836), " +"[#2929](https://github.com/adap/flower/pull/2929), " +"[#2943](https://github.com/adap/flower/pull/2943), " +"[#2955](https://github.com/adap/flower/pull/2955), " +"[#2954](https://github.com/adap/flower/pull/2954))" +msgstr "" + +#: ../../source/ref-changelog.md:75 +msgid "v1.7.0 (2024-02-05)" msgstr "" -#: ../../source/ref-changelog.md:356 +#: ../../source/ref-changelog.md:81 msgid "" -"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " -"`Daniel J. Beutel`, `JDRanpariya`, `Lennart Behme`, `Taner Topal`" +"`Aasheesh Singh`, `Adam Narozniak`, `Aml Hassan Esmil`, `Charles " +"Beauville`, `Daniel J. Beutel`, `Daniel Nata Nugraha`, `Edoardo " +"Gabrielli`, `Gustavo Bertoli`, `HelinLin`, `Heng Pan`, `Javier`, `M S " +"Chaitanya Kumar`, `Mohammad Naseri`, `Nikos Vlachakis`, `Pritam Neog`, " +"`Robert Kuska`, `Robert Steiner`, `Taner Topal`, `Yahia Salaheldin " +"Shaaban`, `Yan Gao`, `Yasar Abbas` " msgstr "" -#: ../../source/ref-changelog.md:360 +#: ../../source/ref-changelog.md:85 msgid "" -"**Add support for** `workload_id` **and** `group_id` **in Driver API** " -"([#1595](https://github.com/adap/flower/pull/1595))" +"**Introduce stateful clients (experimental)** " +"([#2770](https://github.com/adap/flower/pull/2770), " +"[#2686](https://github.com/adap/flower/pull/2686), " +"[#2696](https://github.com/adap/flower/pull/2696), " +"[#2643](https://github.com/adap/flower/pull/2643), " +"[#2769](https://github.com/adap/flower/pull/2769))" msgstr "" -#: ../../source/ref-changelog.md:362 +#: ../../source/ref-changelog.md:87 msgid "" -"The (experimental) Driver API now supports a `workload_id` that can be " -"used to identify which workload a task belongs to. It also supports a new" -" `group_id` that can be used, for example, to indicate the current " -"training round. Both the `workload_id` and `group_id` enable client nodes" -" to decide whether they want to handle a task or not." +"Subclasses of `Client` and `NumPyClient` can now store local state that " +"remains on the client. Let's start with the highlight first: this new " +"feature is compatible with both simulated clients (via " +"`start_simulation`) and networked clients (via `start_client`). It's also" +" the first preview of new abstractions like `Context` and `RecordSet`. " +"Clients can access state of type `RecordSet` via `state: RecordSet = " +"self.context.state`. Changes to this `RecordSet` are preserved across " +"different rounds of execution to enable stateful computations in a " +"unified way across simulation and deployment." 
msgstr "" -#: ../../source/ref-changelog.md:364 +#: ../../source/ref-changelog.md:89 msgid "" -"**Make Driver API and Fleet API address configurable** " -"([#1637](https://github.com/adap/flower/pull/1637))" +"**Improve performance** " +"([#2293](https://github.com/adap/flower/pull/2293))" msgstr "" -#: ../../source/ref-changelog.md:366 +#: ../../source/ref-changelog.md:91 msgid "" -"The (experimental) long-running Flower server (Driver API and Fleet API) " -"can now configure the server address of both Driver API (via `--driver-" -"api-address`) and Fleet API (via `--fleet-api-address`) when starting:" +"Flower is faster than ever. All `FedAvg`-derived strategies now use in-" +"place aggregation to reduce memory consumption. The Flower client " +"serialization/deserialization has been rewritten from the ground up, " +"which results in significant speedups, especially when the client-side " +"training time is short." msgstr "" -#: ../../source/ref-changelog.md:368 +#: ../../source/ref-changelog.md:93 msgid "" -"`flower-server --driver-api-address \"0.0.0.0:8081\" --fleet-api-address " -"\"0.0.0.0:8086\"`" -msgstr "" - -#: ../../source/ref-changelog.md:370 -msgid "Both IPv4 and IPv6 addresses are supported." +"**Support Federated Learning with Apple MLX and Flower** " +"([#2693](https://github.com/adap/flower/pull/2693))" msgstr "" -#: ../../source/ref-changelog.md:372 +#: ../../source/ref-changelog.md:95 msgid "" -"**Add new example of Federated Learning using fastai and Flower** " -"([#1598](https://github.com/adap/flower/pull/1598))" +"Flower has official support for federated learning using [Apple " +"MLX](https://ml-explore.github.io/mlx) via the new `quickstart-mlx` code " +"example." msgstr "" -#: ../../source/ref-changelog.md:374 +#: ../../source/ref-changelog.md:97 msgid "" -"A new code example (`quickstart-fastai`) demonstrates federated learning " -"with [fastai](https://www.fast.ai/) and Flower. You can find it here: " -"[quickstart-fastai](https://github.com/adap/flower/tree/main/examples" -"/quickstart-fastai)." +"**Introduce new XGBoost cyclic strategy** " +"([#2666](https://github.com/adap/flower/pull/2666), " +"[#2668](https://github.com/adap/flower/pull/2668))" msgstr "" -#: ../../source/ref-changelog.md:376 +#: ../../source/ref-changelog.md:99 msgid "" -"**Make Android example compatible with** `flwr >= 1.0.0` **and the latest" -" versions of Android** " -"([#1603](https://github.com/adap/flower/pull/1603))" +"A new strategy called `FedXgbCyclic` supports a client-by-client style of" +" training (often called cyclic). The `xgboost-comprehensive` code example" +" shows how to use it in a full project. In addition to that, `xgboost-" +"comprehensive` now also supports simulation mode. With this, Flower " +"offers best-in-class XGBoost support." msgstr "" -#: ../../source/ref-changelog.md:378 +#: ../../source/ref-changelog.md:101 msgid "" -"The Android code example has received a substantial update: the project " -"is compatible with Flower 1.0 (and later), the UI received a full " -"refresh, and the project is updated to be compatible with newer Android " -"tooling." +"**Support Python 3.11** " +"([#2394](https://github.com/adap/flower/pull/2394))" msgstr "" -#: ../../source/ref-changelog.md:380 +#: ../../source/ref-changelog.md:103 msgid "" -"**Add new `FedProx` strategy** " -"([#1619](https://github.com/adap/flower/pull/1619))" +"Framework tests now run on Python 3.8, 3.9, 3.10, and 3.11. This will " +"ensure better support for users using more recent Python versions." 
msgstr "" -#: ../../source/ref-changelog.md:382 +#: ../../source/ref-changelog.md:105 msgid "" -"This " -"[strategy](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedprox.py)" -" is almost identical to " -"[`FedAvg`](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedavg.py)," -" but helps users replicate what is described in this " -"[paper](https://arxiv.org/abs/1812.06127). It essentially adds a " -"parameter called `proximal_mu` to regularize the local models with " -"respect to the global models." +"**Update gRPC and ProtoBuf dependencies** " +"([#2814](https://github.com/adap/flower/pull/2814))" msgstr "" -#: ../../source/ref-changelog.md:384 +#: ../../source/ref-changelog.md:107 msgid "" -"**Add new metrics to telemetry events** " -"([#1640](https://github.com/adap/flower/pull/1640))" +"The `grpcio` and `protobuf` dependencies were updated to their latest " +"versions for improved security and performance." msgstr "" -#: ../../source/ref-changelog.md:386 +#: ../../source/ref-changelog.md:109 msgid "" -"An updated event structure allows, for example, the clustering of events " -"within the same workload." +"**Introduce Docker image for Flower server** " +"([#2700](https://github.com/adap/flower/pull/2700), " +"[#2688](https://github.com/adap/flower/pull/2688), " +"[#2705](https://github.com/adap/flower/pull/2705), " +"[#2695](https://github.com/adap/flower/pull/2695), " +"[#2747](https://github.com/adap/flower/pull/2747), " +"[#2746](https://github.com/adap/flower/pull/2746), " +"[#2680](https://github.com/adap/flower/pull/2680), " +"[#2682](https://github.com/adap/flower/pull/2682), " +"[#2701](https://github.com/adap/flower/pull/2701))" msgstr "" -#: ../../source/ref-changelog.md:388 +#: ../../source/ref-changelog.md:111 msgid "" -"**Add new custom strategy tutorial section** " -"[#1623](https://github.com/adap/flower/pull/1623)" +"The Flower server can now be run using an official Docker image. A new " +"how-to guide explains [how to run Flower using " +"Docker](https://flower.ai/docs/framework/how-to-run-flower-using-" +"docker.html). An official Flower client Docker image will follow." 
msgstr "" -#: ../../source/ref-changelog.md:390 +#: ../../source/ref-changelog.md:113 msgid "" -"The Flower tutorial now has a new section that covers implementing a " -"custom strategy from scratch: [Open in " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" -"/tutorial-build-a-strategy-from-scratch-pytorch.ipynb)" +"**Introduce** `flower-via-docker-compose` **example** " +"([#2626](https://github.com/adap/flower/pull/2626))" msgstr "" -#: ../../source/ref-changelog.md:392 +#: ../../source/ref-changelog.md:115 msgid "" -"**Add new custom serialization tutorial section** " -"([#1622](https://github.com/adap/flower/pull/1622))" +"**Introduce** `quickstart-sklearn-tabular` **example** " +"([#2719](https://github.com/adap/flower/pull/2719))" msgstr "" -#: ../../source/ref-changelog.md:394 +#: ../../source/ref-changelog.md:117 msgid "" -"The Flower tutorial now has a new section that covers custom " -"serialization: [Open in " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" -"/tutorial-customize-the-client-pytorch.ipynb)" +"**Introduce** `custom-metrics` **example** " +"([#1958](https://github.com/adap/flower/pull/1958))" msgstr "" -#: ../../source/ref-changelog.md:396 +#: ../../source/ref-changelog.md:119 msgid "" -"**General improvements** " -"([#1638](https://github.com/adap/flower/pull/1638), " -"[#1634](https://github.com/adap/flower/pull/1634), " -"[#1636](https://github.com/adap/flower/pull/1636), " -"[#1635](https://github.com/adap/flower/pull/1635), " -"[#1633](https://github.com/adap/flower/pull/1633), " -"[#1632](https://github.com/adap/flower/pull/1632), " -"[#1631](https://github.com/adap/flower/pull/1631), " -"[#1630](https://github.com/adap/flower/pull/1630), " -"[#1627](https://github.com/adap/flower/pull/1627), " -"[#1593](https://github.com/adap/flower/pull/1593), " -"[#1616](https://github.com/adap/flower/pull/1616), " -"[#1615](https://github.com/adap/flower/pull/1615), " -"[#1607](https://github.com/adap/flower/pull/1607), " -"[#1609](https://github.com/adap/flower/pull/1609), " -"[#1608](https://github.com/adap/flower/pull/1608), " -"[#1603](https://github.com/adap/flower/pull/1603), " -"[#1590](https://github.com/adap/flower/pull/1590), " -"[#1580](https://github.com/adap/flower/pull/1580), " -"[#1599](https://github.com/adap/flower/pull/1599), " -"[#1600](https://github.com/adap/flower/pull/1600), " -"[#1601](https://github.com/adap/flower/pull/1601), " -"[#1597](https://github.com/adap/flower/pull/1597), " -"[#1595](https://github.com/adap/flower/pull/1595), " -"[#1591](https://github.com/adap/flower/pull/1591), " -"[#1588](https://github.com/adap/flower/pull/1588), " -"[#1589](https://github.com/adap/flower/pull/1589), " -"[#1587](https://github.com/adap/flower/pull/1587), " -"[#1573](https://github.com/adap/flower/pull/1573), " -"[#1581](https://github.com/adap/flower/pull/1581), " -"[#1578](https://github.com/adap/flower/pull/1578), " -"[#1574](https://github.com/adap/flower/pull/1574), " -"[#1572](https://github.com/adap/flower/pull/1572), " -"[#1586](https://github.com/adap/flower/pull/1586))" +"**Update code examples to use Flower Datasets** " +"([#2450](https://github.com/adap/flower/pull/2450), " +"[#2456](https://github.com/adap/flower/pull/2456), " +"[#2318](https://github.com/adap/flower/pull/2318), " +"[#2712](https://github.com/adap/flower/pull/2712))" msgstr "" -#: ../../source/ref-changelog.md:400 +#: ../../source/ref-changelog.md:121 msgid "" -"**Updated documentation** " 
-"([#1629](https://github.com/adap/flower/pull/1629), " -"[#1628](https://github.com/adap/flower/pull/1628), " -"[#1620](https://github.com/adap/flower/pull/1620), " -"[#1618](https://github.com/adap/flower/pull/1618), " -"[#1617](https://github.com/adap/flower/pull/1617), " -"[#1613](https://github.com/adap/flower/pull/1613), " -"[#1614](https://github.com/adap/flower/pull/1614))" +"Several code examples were updated to use [Flower " +"Datasets](https://flower.ai/docs/datasets/)." msgstr "" -#: ../../source/ref-changelog.md:402 ../../source/ref-changelog.md:469 +#: ../../source/ref-changelog.md:123 msgid "" -"As usual, the documentation has improved quite a bit. It is another step " -"in our effort to make the Flower documentation the best documentation of " -"any project. Stay tuned and as always, feel free to provide feedback!" +"**General updates to Flower Examples** " +"([#2381](https://github.com/adap/flower/pull/2381), " +"[#2805](https://github.com/adap/flower/pull/2805), " +"[#2782](https://github.com/adap/flower/pull/2782), " +"[#2806](https://github.com/adap/flower/pull/2806), " +"[#2829](https://github.com/adap/flower/pull/2829), " +"[#2825](https://github.com/adap/flower/pull/2825), " +"[#2816](https://github.com/adap/flower/pull/2816), " +"[#2726](https://github.com/adap/flower/pull/2726), " +"[#2659](https://github.com/adap/flower/pull/2659), " +"[#2655](https://github.com/adap/flower/pull/2655))" msgstr "" -#: ../../source/ref-changelog.md:408 -msgid "v1.2.0 (2023-01-13)" +#: ../../source/ref-changelog.md:125 +msgid "Many Flower code examples received substantial updates." msgstr "" -#: ../../source/ref-changelog.md:414 -msgid "" -"`Adam Narozniak`, `Charles Beauville`, `Daniel J. Beutel`, `Edoardo`, `L." -" Jiang`, `Ragy`, `Taner Topal`, `dannymcy`" +#: ../../source/ref-changelog.md:127 ../../source/ref-changelog.md:220 +msgid "**Update Flower Baselines**" msgstr "" -#: ../../source/ref-changelog.md:418 +#: ../../source/ref-changelog.md:129 msgid "" -"**Introduce new Flower Baseline: FedAvg MNIST** " -"([#1497](https://github.com/adap/flower/pull/1497), " -"[#1552](https://github.com/adap/flower/pull/1552))" +"HFedXGBoost ([#2226](https://github.com/adap/flower/pull/2226), " +"[#2771](https://github.com/adap/flower/pull/2771))" msgstr "" -#: ../../source/ref-changelog.md:420 -msgid "" -"Over the coming weeks, we will be releasing a number of new reference " -"implementations useful especially to FL newcomers. They will typically " -"revisit well known papers from the literature, and be suitable for " -"integration in your own application or for experimentation, in order to " -"deepen your knowledge of FL in general. Today's release is the first in " -"this series. 
[Read more.](https://flower.ai/blog/2023-01-12-fl-starter-" -"pack-fedavg-mnist-cnn/)" +#: ../../source/ref-changelog.md:130 +msgid "FedVSSL ([#2412](https://github.com/adap/flower/pull/2412))" msgstr "" -#: ../../source/ref-changelog.md:422 -msgid "" -"**Improve GPU support in simulations** " -"([#1555](https://github.com/adap/flower/pull/1555))" +#: ../../source/ref-changelog.md:131 +msgid "FedNova ([#2179](https://github.com/adap/flower/pull/2179))" msgstr "" -#: ../../source/ref-changelog.md:424 +#: ../../source/ref-changelog.md:132 +msgid "HeteroFL ([#2439](https://github.com/adap/flower/pull/2439))" +msgstr "" + +#: ../../source/ref-changelog.md:133 +msgid "FedAvgM ([#2246](https://github.com/adap/flower/pull/2246))" +msgstr "" + +#: ../../source/ref-changelog.md:134 +msgid "FedPara ([#2722](https://github.com/adap/flower/pull/2722))" +msgstr "" + +#: ../../source/ref-changelog.md:136 msgid "" -"The Ray-based Virtual Client Engine (`start_simulation`) has been updated" -" to improve GPU support. The update includes some of the hard-earned " -"lessons from scaling simulations in GPU cluster environments. New " -"defaults make running GPU-based simulations substantially more robust." +"**Improve documentation** " +"([#2674](https://github.com/adap/flower/pull/2674), " +"[#2480](https://github.com/adap/flower/pull/2480), " +"[#2826](https://github.com/adap/flower/pull/2826), " +"[#2727](https://github.com/adap/flower/pull/2727), " +"[#2761](https://github.com/adap/flower/pull/2761), " +"[#2900](https://github.com/adap/flower/pull/2900))" msgstr "" -#: ../../source/ref-changelog.md:426 +#: ../../source/ref-changelog.md:138 msgid "" -"**Improve GPU support in Jupyter Notebook tutorials** " -"([#1527](https://github.com/adap/flower/pull/1527), " -"[#1558](https://github.com/adap/flower/pull/1558))" +"**Improved testing and development infrastructure** " +"([#2797](https://github.com/adap/flower/pull/2797), " +"[#2676](https://github.com/adap/flower/pull/2676), " +"[#2644](https://github.com/adap/flower/pull/2644), " +"[#2656](https://github.com/adap/flower/pull/2656), " +"[#2848](https://github.com/adap/flower/pull/2848), " +"[#2675](https://github.com/adap/flower/pull/2675), " +"[#2735](https://github.com/adap/flower/pull/2735), " +"[#2767](https://github.com/adap/flower/pull/2767), " +"[#2732](https://github.com/adap/flower/pull/2732), " +"[#2744](https://github.com/adap/flower/pull/2744), " +"[#2681](https://github.com/adap/flower/pull/2681), " +"[#2699](https://github.com/adap/flower/pull/2699), " +"[#2745](https://github.com/adap/flower/pull/2745), " +"[#2734](https://github.com/adap/flower/pull/2734), " +"[#2731](https://github.com/adap/flower/pull/2731), " +"[#2652](https://github.com/adap/flower/pull/2652), " +"[#2720](https://github.com/adap/flower/pull/2720), " +"[#2721](https://github.com/adap/flower/pull/2721), " +"[#2717](https://github.com/adap/flower/pull/2717), " +"[#2864](https://github.com/adap/flower/pull/2864), " +"[#2694](https://github.com/adap/flower/pull/2694), " +"[#2709](https://github.com/adap/flower/pull/2709), " +"[#2658](https://github.com/adap/flower/pull/2658), " +"[#2796](https://github.com/adap/flower/pull/2796), " +"[#2692](https://github.com/adap/flower/pull/2692), " +"[#2657](https://github.com/adap/flower/pull/2657), " +"[#2813](https://github.com/adap/flower/pull/2813), " +"[#2661](https://github.com/adap/flower/pull/2661), " +"[#2398](https://github.com/adap/flower/pull/2398))" msgstr "" -#: ../../source/ref-changelog.md:428 +#: 
../../source/ref-changelog.md:140 msgid "" -"Some users reported that Jupyter Notebooks have not always been easy to " -"use on GPU instances. We listened and made improvements to all of our " -"Jupyter notebooks! Check out the updated notebooks here:" +"The Flower testing and development infrastructure has received " +"substantial updates. This makes Flower 1.7 the most tested release ever." msgstr "" -#: ../../source/ref-changelog.md:430 +#: ../../source/ref-changelog.md:142 msgid "" -"[An Introduction to Federated Learning](https://flower.ai/docs/framework" -"/tutorial-get-started-with-flower-pytorch.html)" +"**Update dependencies** " +"([#2753](https://github.com/adap/flower/pull/2753), " +"[#2651](https://github.com/adap/flower/pull/2651), " +"[#2739](https://github.com/adap/flower/pull/2739), " +"[#2837](https://github.com/adap/flower/pull/2837), " +"[#2788](https://github.com/adap/flower/pull/2788), " +"[#2811](https://github.com/adap/flower/pull/2811), " +"[#2774](https://github.com/adap/flower/pull/2774), " +"[#2790](https://github.com/adap/flower/pull/2790), " +"[#2751](https://github.com/adap/flower/pull/2751), " +"[#2850](https://github.com/adap/flower/pull/2850), " +"[#2812](https://github.com/adap/flower/pull/2812), " +"[#2872](https://github.com/adap/flower/pull/2872), " +"[#2736](https://github.com/adap/flower/pull/2736), " +"[#2756](https://github.com/adap/flower/pull/2756), " +"[#2857](https://github.com/adap/flower/pull/2857), " +"[#2757](https://github.com/adap/flower/pull/2757), " +"[#2810](https://github.com/adap/flower/pull/2810), " +"[#2740](https://github.com/adap/flower/pull/2740), " +"[#2789](https://github.com/adap/flower/pull/2789))" msgstr "" -#: ../../source/ref-changelog.md:431 +#: ../../source/ref-changelog.md:144 msgid "" -"[Strategies in Federated Learning](https://flower.ai/docs/framework" -"/tutorial-use-a-federated-learning-strategy-pytorch.html)" +"**General improvements** " +"([#2803](https://github.com/adap/flower/pull/2803), " +"[#2847](https://github.com/adap/flower/pull/2847), " +"[#2877](https://github.com/adap/flower/pull/2877), " +"[#2690](https://github.com/adap/flower/pull/2690), " +"[#2889](https://github.com/adap/flower/pull/2889), " +"[#2874](https://github.com/adap/flower/pull/2874), " +"[#2819](https://github.com/adap/flower/pull/2819), " +"[#2689](https://github.com/adap/flower/pull/2689), " +"[#2457](https://github.com/adap/flower/pull/2457), " +"[#2870](https://github.com/adap/flower/pull/2870), " +"[#2669](https://github.com/adap/flower/pull/2669), " +"[#2876](https://github.com/adap/flower/pull/2876), " +"[#2885](https://github.com/adap/flower/pull/2885), " +"[#2858](https://github.com/adap/flower/pull/2858), " +"[#2867](https://github.com/adap/flower/pull/2867), " +"[#2351](https://github.com/adap/flower/pull/2351), " +"[#2886](https://github.com/adap/flower/pull/2886), " +"[#2860](https://github.com/adap/flower/pull/2860), " +"[#2828](https://github.com/adap/flower/pull/2828), " +"[#2869](https://github.com/adap/flower/pull/2869), " +"[#2875](https://github.com/adap/flower/pull/2875), " +"[#2733](https://github.com/adap/flower/pull/2733), " +"[#2488](https://github.com/adap/flower/pull/2488), " +"[#2646](https://github.com/adap/flower/pull/2646), " +"[#2879](https://github.com/adap/flower/pull/2879), " +"[#2821](https://github.com/adap/flower/pull/2821), " +"[#2855](https://github.com/adap/flower/pull/2855), " +"[#2800](https://github.com/adap/flower/pull/2800), " +"[#2807](https://github.com/adap/flower/pull/2807), " 
+"[#2801](https://github.com/adap/flower/pull/2801), "
+"[#2804](https://github.com/adap/flower/pull/2804), "
+"[#2851](https://github.com/adap/flower/pull/2851), "
+"[#2787](https://github.com/adap/flower/pull/2787), "
+"[#2852](https://github.com/adap/flower/pull/2852), "
+"[#2672](https://github.com/adap/flower/pull/2672), "
+"[#2759](https://github.com/adap/flower/pull/2759))"
 msgstr ""

-#: ../../source/ref-changelog.md:432
+#: ../../source/ref-changelog.md:148
 msgid ""
-"[Building a Strategy](https://flower.ai/docs/framework/tutorial-build-a"
-"-strategy-from-scratch-pytorch.html)"
+"**Deprecate** `start_numpy_client` "
+"([#2563](https://github.com/adap/flower/pull/2563), "
+"[#2718](https://github.com/adap/flower/pull/2718))"
 msgstr ""

-#: ../../source/ref-changelog.md:433
+#: ../../source/ref-changelog.md:150
 msgid ""
-"[Client and NumPyClient](https://flower.ai/docs/framework/tutorial-"
-"customize-the-client-pytorch.html)"
+"Until now, clients of type `NumPyClient` needed to be started via "
+"`start_numpy_client`. In our efforts to consolidate framework APIs, we "
+"have introduced changes, and now all client types should start via "
+"`start_client`. To continue using `NumPyClient` clients, you simply need "
+"to first call the `.to_client()` method and then pass the returned `Client` "
+"object to `start_client`. The examples and the documentation have been "
+"updated accordingly."
 msgstr ""

-#: ../../source/ref-changelog.md:435
+#: ../../source/ref-changelog.md:152
 msgid ""
-"**Introduce optional telemetry** "
-"([#1533](https://github.com/adap/flower/pull/1533), "
-"[#1544](https://github.com/adap/flower/pull/1544), "
-"[#1584](https://github.com/adap/flower/pull/1584))"
+"**Deprecate legacy DP wrappers** "
+"([#2749](https://github.com/adap/flower/pull/2749))"
 msgstr ""

-#: ../../source/ref-changelog.md:437
+#: ../../source/ref-changelog.md:154
 msgid ""
-"After a [request for "
-"feedback](https://github.com/adap/flower/issues/1534) from the community,"
-" the Flower open-source project introduces optional collection of "
-"*anonymous* usage metrics to make well-informed decisions to improve "
-"Flower. Doing this enables the Flower team to understand how Flower is "
-"used and what challenges users might face."
+"Legacy DP wrapper classes are deprecated, but still functional. This is "
+"in preparation for an all-new pluggable version of differential privacy "
+"support in Flower."
 msgstr ""

-#: ../../source/ref-changelog.md:439
+#: ../../source/ref-changelog.md:156
 msgid ""
-"**Flower is a friendly framework for collaborative AI and data science.**"
-" Staying true to this statement, Flower makes it easy to disable "
-"telemetry for users who do not want to share anonymous usage metrics. "
-"[Read more.](https://flower.ai/docs/telemetry.html)."
+"**Make optional arg** `--callable` **in** `flower-client` **a required "
+"positional arg** ([#2673](https://github.com/adap/flower/pull/2673))"
 msgstr ""

-#: ../../source/ref-changelog.md:441
+#: ../../source/ref-changelog.md:158
 msgid ""
-"**Introduce (experimental) Driver API** "
-"([#1520](https://github.com/adap/flower/pull/1520), "
-"[#1525](https://github.com/adap/flower/pull/1525), "
-"[#1545](https://github.com/adap/flower/pull/1545), "
-"[#1546](https://github.com/adap/flower/pull/1546), "
-"[#1550](https://github.com/adap/flower/pull/1550), "
-"[#1551](https://github.com/adap/flower/pull/1551), "
-"[#1567](https://github.com/adap/flower/pull/1567))"
+"**Rename** `certificates` **to** `root_certificates` **in** `Driver` "
+"([#2890](https://github.com/adap/flower/pull/2890))"
 msgstr ""

-#: ../../source/ref-changelog.md:443
+#: ../../source/ref-changelog.md:160
 msgid ""
-"Flower now has a new (experimental) Driver API which will enable fully "
-"programmable, async, and multi-tenant Federated Learning and Federated "
-"Analytics applications. Phew, that's a lot! Going forward, the Driver API"
-" will be the abstraction that many upcoming features will be built on - "
-"and you can start building those things now, too."
+"**Drop experimental** `Task` **fields** "
+"([#2866](https://github.com/adap/flower/pull/2866), "
+"[#2865](https://github.com/adap/flower/pull/2865))"
 msgstr ""

-#: ../../source/ref-changelog.md:445
+#: ../../source/ref-changelog.md:162
 msgid ""
-"The Driver API also enables a new execution mode in which the server runs"
-" indefinitely. Multiple individual workloads can run concurrently and "
-"start and stop their execution independent of the server. This is "
-"especially useful for users who want to deploy Flower in production."
+"Experimental fields `sa`, `legacy_server_message` and "
+"`legacy_client_message` were removed from the `Task` message. The removed "
+"fields are superseded by the new `RecordSet` abstraction."
 msgstr ""

-#: ../../source/ref-changelog.md:447
+#: ../../source/ref-changelog.md:164
 msgid ""
-"To learn more, check out the `mt-pytorch` code example. We look forward "
-"to you feedback!"
+"**Retire MXNet examples** "
+"([#2724](https://github.com/adap/flower/pull/2724))"
 msgstr ""

-#: ../../source/ref-changelog.md:449
+#: ../../source/ref-changelog.md:166
 msgid ""
-"Please note: *The Driver API is still experimental and will likely change"
-" significantly over time.*"
+"The development of the MXNet framework has ended and the project is now "
+"[archived on GitHub](https://github.com/apache/mxnet). Existing MXNet "
+"examples won't receive updates."
+msgstr ""
+
+#: ../../source/ref-changelog.md:168
+msgid "v1.6.0 (2023-11-28)"
 msgstr ""

-#: ../../source/ref-changelog.md:451
+#: ../../source/ref-changelog.md:174
 msgid ""
-"**Add new Federated Analytics with Pandas example** "
-"([#1469](https://github.com/adap/flower/pull/1469), "
-"[#1535](https://github.com/adap/flower/pull/1535))"
+"`Aashish Kolluri`, `Adam Narozniak`, `Alessio Mora`, `Barathwaja S`, "
+"`Charles Beauville`, `Daniel J. 
Beutel`, `Daniel Nata Nugraha`, `Gabriel "
+"Mota`, `Heng Pan`, `Ivan Agarský`, `JS.KIM`, `Javier`, `Marius Schlegel`,"
+" `Navin Chandra`, `Nic Lane`, `Peterpan828`, `Qinbin Li`, `Shaz-hash`, "
+"`Steve Laskaridis`, `Taner Topal`, `William Lindskog`, `Yan Gao`, "
+"`cnxdeveloper`, `k3nfalt` "
 msgstr ""

-#: ../../source/ref-changelog.md:453
+#: ../../source/ref-changelog.md:178
 msgid ""
-"A new code example (`quickstart-pandas`) demonstrates federated analytics"
-" with Pandas and Flower. You can find it here: [quickstart-"
-"pandas](https://github.com/adap/flower/tree/main/examples/quickstart-"
-"pandas)."
+"**Add experimental support for Python 3.12** "
+"([#2565](https://github.com/adap/flower/pull/2565))"
 msgstr ""

-#: ../../source/ref-changelog.md:455
+#: ../../source/ref-changelog.md:180
 msgid ""
-"**Add new strategies: Krum and MultiKrum** "
-"([#1481](https://github.com/adap/flower/pull/1481))"
+"**Add new XGBoost examples** "
+"([#2612](https://github.com/adap/flower/pull/2612), "
+"[#2554](https://github.com/adap/flower/pull/2554), "
+"[#2617](https://github.com/adap/flower/pull/2617), "
+"[#2618](https://github.com/adap/flower/pull/2618), "
+"[#2619](https://github.com/adap/flower/pull/2619), "
+"[#2567](https://github.com/adap/flower/pull/2567))"
 msgstr ""

-#: ../../source/ref-changelog.md:457
+#: ../../source/ref-changelog.md:182
 msgid ""
-"Edoardo, a computer science student at the Sapienza University of Rome, "
-"contributed a new `Krum` strategy that enables users to easily use Krum "
-"and MultiKrum in their workloads."
+"We have added a new `xgboost-quickstart` example alongside a new "
+"`xgboost-comprehensive` example that goes more in-depth."
 msgstr ""

-#: ../../source/ref-changelog.md:459
+#: ../../source/ref-changelog.md:184
 msgid ""
-"**Update C++ example to be compatible with Flower v1.2.0** "
-"([#1495](https://github.com/adap/flower/pull/1495))"
+"**Add Vertical FL example** "
+"([#2598](https://github.com/adap/flower/pull/2598))"
 msgstr ""

-#: ../../source/ref-changelog.md:461
+#: ../../source/ref-changelog.md:186
 msgid ""
-"The C++ code example has received a substantial update to make it "
-"compatible with the latest version of Flower."
+"We had many questions about Vertical Federated Learning using Flower, so "
+"we decided to add a simple example for it on the [Titanic "
+"dataset](https://www.kaggle.com/competitions/titanic/data) alongside a "
+"tutorial (in the README)."
msgstr "" -#: ../../source/ref-changelog.md:463 +#: ../../source/ref-changelog.md:188 msgid "" -"**General improvements** " -"([#1491](https://github.com/adap/flower/pull/1491), " -"[#1504](https://github.com/adap/flower/pull/1504), " -"[#1506](https://github.com/adap/flower/pull/1506), " -"[#1514](https://github.com/adap/flower/pull/1514), " -"[#1522](https://github.com/adap/flower/pull/1522), " -"[#1523](https://github.com/adap/flower/pull/1523), " -"[#1526](https://github.com/adap/flower/pull/1526), " -"[#1528](https://github.com/adap/flower/pull/1528), " -"[#1547](https://github.com/adap/flower/pull/1547), " -"[#1549](https://github.com/adap/flower/pull/1549), " -"[#1560](https://github.com/adap/flower/pull/1560), " -"[#1564](https://github.com/adap/flower/pull/1564), " -"[#1566](https://github.com/adap/flower/pull/1566))" +"**Support custom** `ClientManager` **in** `start_driver()` " +"([#2292](https://github.com/adap/flower/pull/2292))" msgstr "" -#: ../../source/ref-changelog.md:467 +#: ../../source/ref-changelog.md:190 msgid "" -"**Updated documentation** " -"([#1494](https://github.com/adap/flower/pull/1494), " -"[#1496](https://github.com/adap/flower/pull/1496), " -"[#1500](https://github.com/adap/flower/pull/1500), " -"[#1503](https://github.com/adap/flower/pull/1503), " -"[#1505](https://github.com/adap/flower/pull/1505), " -"[#1524](https://github.com/adap/flower/pull/1524), " -"[#1518](https://github.com/adap/flower/pull/1518), " -"[#1519](https://github.com/adap/flower/pull/1519), " -"[#1515](https://github.com/adap/flower/pull/1515))" +"**Update REST API to support create and delete nodes** " +"([#2283](https://github.com/adap/flower/pull/2283))" msgstr "" -#: ../../source/ref-changelog.md:471 +#: ../../source/ref-changelog.md:192 msgid "" -"One highlight is the new [first time contributor " -"guide](https://flower.ai/docs/first-time-contributors.html): if you've " -"never contributed on GitHub before, this is the perfect place to start!" +"**Update the Android SDK** " +"([#2187](https://github.com/adap/flower/pull/2187))" msgstr "" -#: ../../source/ref-changelog.md:477 -msgid "v1.1.0 (2022-10-31)" +#: ../../source/ref-changelog.md:194 +msgid "Add gRPC request-response capability to the Android SDK." msgstr "" -#: ../../source/ref-changelog.md:481 +#: ../../source/ref-changelog.md:196 msgid "" -"We would like to give our **special thanks** to all the contributors who " -"made the new version of Flower possible (in `git shortlog` order):" +"**Update the C++ SDK** " +"([#2537](https://github.com/adap/flower/pull/2537), " +"[#2528](https://github.com/adap/flower/pull/2528), " +"[#2523](https://github.com/adap/flower/pull/2523), " +"[#2522](https://github.com/adap/flower/pull/2522))" msgstr "" -#: ../../source/ref-changelog.md:483 -msgid "" -"`Akis Linardos`, `Christopher S`, `Daniel J. Beutel`, `George`, `Jan " -"Schlicht`, `Mohammad Fares`, `Pedro Porto Buarque de Gusmão`, `Philipp " -"Wiesner`, `Rob Luke`, `Taner Topal`, `VasundharaAgarwal`, " -"`danielnugraha`, `edogab33`" +#: ../../source/ref-changelog.md:198 +msgid "Add gRPC request-response capability to the C++ SDK." 
 msgstr ""

-#: ../../source/ref-changelog.md:487
+#: ../../source/ref-changelog.md:200
 msgid ""
-"**Introduce Differential Privacy wrappers (preview)** "
-"([#1357](https://github.com/adap/flower/pull/1357), "
-"[#1460](https://github.com/adap/flower/pull/1460))"
+"**Make HTTPS the new default** "
+"([#2591](https://github.com/adap/flower/pull/2591), "
+"[#2636](https://github.com/adap/flower/pull/2636))"
 msgstr ""

-#: ../../source/ref-changelog.md:489
+#: ../../source/ref-changelog.md:202
 msgid ""
-"The first (experimental) preview of pluggable Differential Privacy "
-"wrappers enables easy configuration and usage of differential privacy "
-"(DP). The pluggable DP wrappers enable framework-agnostic **and** "
-"strategy-agnostic usage of both client-side DP and server-side DP. Head "
-"over to the Flower docs, a new explainer goes into more detail."
+"Flower is moving to HTTPS by default. The new `flower-server` requires "
+"passing `--certificates`, but users can enable `--insecure` to use HTTP "
+"for prototyping. The same applies to `flower-client`: it can use either "
+"user-provided credentials or gRPC-bundled certificates to connect to an "
+"HTTPS-enabled server, or pass `--insecure` to opt out and use insecure "
+"HTTP connections."
 msgstr ""

-#: ../../source/ref-changelog.md:491
+#: ../../source/ref-changelog.md:204
 msgid ""
-"**New iOS CoreML code example** "
-"([#1289](https://github.com/adap/flower/pull/1289))"
+"For backward compatibility, `start_client()` and `start_numpy_client()` "
+"will still start in insecure mode by default. In a future release, "
+"insecure connections will require user opt-in by passing `insecure=True`."
 msgstr ""

-#: ../../source/ref-changelog.md:493
+#: ../../source/ref-changelog.md:206
 msgid ""
-"Flower goes iOS! A massive new code example shows how Flower clients can "
-"be built for iOS. The code example contains both Flower iOS SDK "
-"components that can be used for many tasks, and one task example running "
-"on CoreML."
+"**Unify client API** ([#2303](https://github.com/adap/flower/pull/2303), "
+"[#2390](https://github.com/adap/flower/pull/2390), "
+"[#2493](https://github.com/adap/flower/pull/2493))"
 msgstr ""

-#: ../../source/ref-changelog.md:495
+#: ../../source/ref-changelog.md:208
 msgid ""
-"**New FedMedian strategy** "
-"([#1461](https://github.com/adap/flower/pull/1461))"
+"Using the `client_fn`, Flower clients can interchangeably run as "
+"standalone processes (i.e. via `start_client`) or in simulation (i.e. via"
+" `start_simulation`) without requiring changes to how the client class is"
+" defined and instantiated. The `to_client()` function is introduced to "
+"convert a `NumPyClient` to a `Client`."
 msgstr ""

-#: ../../source/ref-changelog.md:497
+#: ../../source/ref-changelog.md:210
 msgid ""
-"The new `FedMedian` strategy implements Federated Median (FedMedian) by "
-"[Yin et al., 2018](https://arxiv.org/pdf/1803.01498v1.pdf)."
+"**Add new** `Bulyan` **strategy** " +"([#1817](https://github.com/adap/flower/pull/1817), " +"[#1891](https://github.com/adap/flower/pull/1891))" msgstr "" -#: ../../source/ref-changelog.md:499 +#: ../../source/ref-changelog.md:212 msgid "" -"**Log** `Client` **exceptions in Virtual Client Engine** " -"([#1493](https://github.com/adap/flower/pull/1493))" +"The new `Bulyan` strategy implements Bulyan by [El Mhamdi et al., " +"2018](https://arxiv.org/abs/1802.07927)" msgstr "" -#: ../../source/ref-changelog.md:501 +#: ../../source/ref-changelog.md:214 msgid "" -"All `Client` exceptions happening in the VCE are now logged by default " -"and not just exposed to the configured `Strategy` (via the `failures` " -"argument)." +"**Add new** `XGB Bagging` **strategy** " +"([#2611](https://github.com/adap/flower/pull/2611))" msgstr "" -#: ../../source/ref-changelog.md:503 +#: ../../source/ref-changelog.md:216 ../../source/ref-changelog.md:218 msgid "" -"**Improve Virtual Client Engine internals** " -"([#1401](https://github.com/adap/flower/pull/1401), " -"[#1453](https://github.com/adap/flower/pull/1453))" +"**Introduce `WorkloadState`** " +"([#2564](https://github.com/adap/flower/pull/2564), " +"[#2632](https://github.com/adap/flower/pull/2632))" msgstr "" -#: ../../source/ref-changelog.md:505 +#: ../../source/ref-changelog.md:222 msgid "" -"Some internals of the Virtual Client Engine have been revamped. The VCE " -"now uses Ray 2.0 under the hood, the value type of the `client_resources`" -" dictionary changed to `float` to allow fractions of resources to be " -"allocated." +"FedProx ([#2210](https://github.com/adap/flower/pull/2210), " +"[#2286](https://github.com/adap/flower/pull/2286), " +"[#2509](https://github.com/adap/flower/pull/2509))" msgstr "" -#: ../../source/ref-changelog.md:507 +#: ../../source/ref-changelog.md:224 msgid "" -"**Support optional** `Client`**/**`NumPyClient` **methods in Virtual " -"Client Engine**" +"Baselines Docs ([#2290](https://github.com/adap/flower/pull/2290), " +"[#2400](https://github.com/adap/flower/pull/2400))" msgstr "" -#: ../../source/ref-changelog.md:509 +#: ../../source/ref-changelog.md:226 msgid "" -"The Virtual Client Engine now has full support for optional `Client` (and" -" `NumPyClient`) methods." +"FedMLB ([#2340](https://github.com/adap/flower/pull/2340), " +"[#2507](https://github.com/adap/flower/pull/2507))" msgstr "" -#: ../../source/ref-changelog.md:511 +#: ../../source/ref-changelog.md:228 msgid "" -"**Provide type information to packages using** `flwr` " -"([#1377](https://github.com/adap/flower/pull/1377))" +"TAMUNA ([#2254](https://github.com/adap/flower/pull/2254), " +"[#2508](https://github.com/adap/flower/pull/2508))" msgstr "" -#: ../../source/ref-changelog.md:513 -msgid "" -"The package `flwr` is now bundled with a `py.typed` file indicating that " -"the package is typed. This enables typing support for projects or " -"packages that use `flwr` by enabling them to improve their code using " -"static type checkers like `mypy`." 
+#: ../../source/ref-changelog.md:230 +msgid "FedMeta [#2438](https://github.com/adap/flower/pull/2438)" msgstr "" -#: ../../source/ref-changelog.md:515 -msgid "" -"**Updated code example** " -"([#1344](https://github.com/adap/flower/pull/1344), " -"[#1347](https://github.com/adap/flower/pull/1347))" +#: ../../source/ref-changelog.md:232 +msgid "FjORD [#2431](https://github.com/adap/flower/pull/2431)" msgstr "" -#: ../../source/ref-changelog.md:517 -msgid "" -"The code examples covering scikit-learn and PyTorch Lightning have been " -"updated to work with the latest version of Flower." +#: ../../source/ref-changelog.md:234 +msgid "MOON [#2421](https://github.com/adap/flower/pull/2421)" msgstr "" -#: ../../source/ref-changelog.md:519 -msgid "" -"**Updated documentation** " -"([#1355](https://github.com/adap/flower/pull/1355), " -"[#1558](https://github.com/adap/flower/pull/1558), " -"[#1379](https://github.com/adap/flower/pull/1379), " -"[#1380](https://github.com/adap/flower/pull/1380), " -"[#1381](https://github.com/adap/flower/pull/1381), " -"[#1332](https://github.com/adap/flower/pull/1332), " -"[#1391](https://github.com/adap/flower/pull/1391), " -"[#1403](https://github.com/adap/flower/pull/1403), " -"[#1364](https://github.com/adap/flower/pull/1364), " -"[#1409](https://github.com/adap/flower/pull/1409), " -"[#1419](https://github.com/adap/flower/pull/1419), " -"[#1444](https://github.com/adap/flower/pull/1444), " -"[#1448](https://github.com/adap/flower/pull/1448), " -"[#1417](https://github.com/adap/flower/pull/1417), " -"[#1449](https://github.com/adap/flower/pull/1449), " -"[#1465](https://github.com/adap/flower/pull/1465), " -"[#1467](https://github.com/adap/flower/pull/1467))" +#: ../../source/ref-changelog.md:236 +msgid "DepthFL [#2295](https://github.com/adap/flower/pull/2295)" msgstr "" -#: ../../source/ref-changelog.md:521 -msgid "" -"There have been so many documentation updates that it doesn't even make " -"sense to list them individually." +#: ../../source/ref-changelog.md:238 +msgid "FedPer [#2266](https://github.com/adap/flower/pull/2266)" msgstr "" -#: ../../source/ref-changelog.md:523 -msgid "" -"**Restructured documentation** " -"([#1387](https://github.com/adap/flower/pull/1387))" +#: ../../source/ref-changelog.md:240 +msgid "FedWav2vec [#2551](https://github.com/adap/flower/pull/2551)" msgstr "" -#: ../../source/ref-changelog.md:525 -msgid "" -"The documentation has been restructured to make it easier to navigate. " -"This is just the first step in a larger effort to make the Flower " -"documentation the best documentation of any project ever. Stay tuned!" +#: ../../source/ref-changelog.md:242 +msgid "niid-Bench [#2428](https://github.com/adap/flower/pull/2428)" msgstr "" -#: ../../source/ref-changelog.md:527 +#: ../../source/ref-changelog.md:244 msgid "" -"**Open in Colab button** " -"([#1389](https://github.com/adap/flower/pull/1389))" +"FedBN ([#2608](https://github.com/adap/flower/pull/2608), " +"[#2615](https://github.com/adap/flower/pull/2615))" msgstr "" -#: ../../source/ref-changelog.md:529 +#: ../../source/ref-changelog.md:246 msgid "" -"The four parts of the Flower Federated Learning Tutorial now come with a " -"new `Open in Colab` button. No need to install anything on your local " -"machine, you can now use and learn about Flower in your browser, it's " -"only a single click away." 
+"**General updates to Flower Examples** " +"([#2384](https://github.com/adap/flower/pull/2384), " +"[#2425](https://github.com/adap/flower/pull/2425), " +"[#2526](https://github.com/adap/flower/pull/2526), " +"[#2302](https://github.com/adap/flower/pull/2302), " +"[#2545](https://github.com/adap/flower/pull/2545))" msgstr "" -#: ../../source/ref-changelog.md:531 +#: ../../source/ref-changelog.md:248 msgid "" -"**Improved tutorial** ([#1468](https://github.com/adap/flower/pull/1468)," -" [#1470](https://github.com/adap/flower/pull/1470), " -"[#1472](https://github.com/adap/flower/pull/1472), " -"[#1473](https://github.com/adap/flower/pull/1473), " -"[#1474](https://github.com/adap/flower/pull/1474), " -"[#1475](https://github.com/adap/flower/pull/1475))" +"**General updates to Flower Baselines** " +"([#2301](https://github.com/adap/flower/pull/2301), " +"[#2305](https://github.com/adap/flower/pull/2305), " +"[#2307](https://github.com/adap/flower/pull/2307), " +"[#2327](https://github.com/adap/flower/pull/2327), " +"[#2435](https://github.com/adap/flower/pull/2435), " +"[#2462](https://github.com/adap/flower/pull/2462), " +"[#2463](https://github.com/adap/flower/pull/2463), " +"[#2461](https://github.com/adap/flower/pull/2461), " +"[#2469](https://github.com/adap/flower/pull/2469), " +"[#2466](https://github.com/adap/flower/pull/2466), " +"[#2471](https://github.com/adap/flower/pull/2471), " +"[#2472](https://github.com/adap/flower/pull/2472), " +"[#2470](https://github.com/adap/flower/pull/2470))" msgstr "" -#: ../../source/ref-changelog.md:533 +#: ../../source/ref-changelog.md:250 msgid "" -"The Flower Federated Learning Tutorial has two brand-new parts covering " -"custom strategies (still WIP) and the distinction between `Client` and " -"`NumPyClient`. The existing parts one and two have also been improved " -"(many small changes and fixes)." 
+"**General updates to the simulation engine** " +"([#2331](https://github.com/adap/flower/pull/2331), " +"[#2447](https://github.com/adap/flower/pull/2447), " +"[#2448](https://github.com/adap/flower/pull/2448), " +"[#2294](https://github.com/adap/flower/pull/2294))" msgstr "" -#: ../../source/ref-changelog.md:539 -msgid "v1.0.0 (2022-07-28)" +#: ../../source/ref-changelog.md:252 +msgid "" +"**General updates to Flower SDKs** " +"([#2288](https://github.com/adap/flower/pull/2288), " +"[#2429](https://github.com/adap/flower/pull/2429), " +"[#2555](https://github.com/adap/flower/pull/2555), " +"[#2543](https://github.com/adap/flower/pull/2543), " +"[#2544](https://github.com/adap/flower/pull/2544), " +"[#2597](https://github.com/adap/flower/pull/2597), " +"[#2623](https://github.com/adap/flower/pull/2623))" msgstr "" -#: ../../source/ref-changelog.md:541 -msgid "Highlights" +#: ../../source/ref-changelog.md:254 +msgid "" +"**General improvements** " +"([#2309](https://github.com/adap/flower/pull/2309), " +"[#2310](https://github.com/adap/flower/pull/2310), " +"[#2313](https://github.com/adap/flower/pull/2313), " +"[#2316](https://github.com/adap/flower/pull/2316), " +"[#2317](https://github.com/adap/flower/pull/2317), " +"[#2349](https://github.com/adap/flower/pull/2349), " +"[#2360](https://github.com/adap/flower/pull/2360), " +"[#2402](https://github.com/adap/flower/pull/2402), " +"[#2446](https://github.com/adap/flower/pull/2446), " +"[#2561](https://github.com/adap/flower/pull/2561), " +"[#2273](https://github.com/adap/flower/pull/2273), " +"[#2267](https://github.com/adap/flower/pull/2267), " +"[#2274](https://github.com/adap/flower/pull/2274), " +"[#2275](https://github.com/adap/flower/pull/2275), " +"[#2432](https://github.com/adap/flower/pull/2432), " +"[#2251](https://github.com/adap/flower/pull/2251), " +"[#2321](https://github.com/adap/flower/pull/2321), " +"[#1936](https://github.com/adap/flower/pull/1936), " +"[#2408](https://github.com/adap/flower/pull/2408), " +"[#2413](https://github.com/adap/flower/pull/2413), " +"[#2401](https://github.com/adap/flower/pull/2401), " +"[#2531](https://github.com/adap/flower/pull/2531), " +"[#2534](https://github.com/adap/flower/pull/2534), " +"[#2535](https://github.com/adap/flower/pull/2535), " +"[#2521](https://github.com/adap/flower/pull/2521), " +"[#2553](https://github.com/adap/flower/pull/2553), " +"[#2596](https://github.com/adap/flower/pull/2596))" msgstr "" -#: ../../source/ref-changelog.md:543 -msgid "Stable **Virtual Client Engine** (accessible via `start_simulation`)" +#: ../../source/ref-changelog.md:256 ../../source/ref-changelog.md:346 +#: ../../source/ref-changelog.md:410 ../../source/ref-changelog.md:464 +#: ../../source/ref-changelog.md:531 +msgid "Flower received many improvements under the hood, too many to list here." 
msgstr "" -#: ../../source/ref-changelog.md:544 -msgid "All `Client`/`NumPyClient` methods are now optional" +#: ../../source/ref-changelog.md:260 +msgid "" +"**Remove support for Python 3.7** " +"([#2280](https://github.com/adap/flower/pull/2280), " +"[#2299](https://github.com/adap/flower/pull/2299), " +"[#2304](https://github.com/adap/flower/pull/2304), " +"[#2306](https://github.com/adap/flower/pull/2306), " +"[#2355](https://github.com/adap/flower/pull/2355), " +"[#2356](https://github.com/adap/flower/pull/2356))" msgstr "" -#: ../../source/ref-changelog.md:545 -msgid "Configurable `get_parameters`" +#: ../../source/ref-changelog.md:262 +msgid "" +"Python 3.7 support was deprecated in Flower 1.5, and this release removes" +" support. Flower now requires Python 3.8." msgstr "" -#: ../../source/ref-changelog.md:546 +#: ../../source/ref-changelog.md:264 msgid "" -"Tons of small API cleanups resulting in a more coherent developer " -"experience" +"**Remove experimental argument** `rest` **from** `start_client` " +"([#2324](https://github.com/adap/flower/pull/2324))" msgstr "" -#: ../../source/ref-changelog.md:550 +#: ../../source/ref-changelog.md:266 msgid "" -"We would like to give our **special thanks** to all the contributors who " -"made Flower 1.0 possible (in reverse [GitHub " -"Contributors](https://github.com/adap/flower/graphs/contributors) order):" +"The (still experimental) argument `rest` was removed from `start_client` " +"and `start_numpy_client`. Use `transport=\"rest\"` to opt into the " +"experimental REST API instead." +msgstr "" + +#: ../../source/ref-changelog.md:268 +msgid "v1.5.0 (2023-08-31)" msgstr "" -#: ../../source/ref-changelog.md:552 +#: ../../source/ref-changelog.md:274 msgid "" -"[@rtaiello](https://github.com/rtaiello), " -"[@g-pichler](https://github.com/g-pichler), [@rob-" -"luke](https://github.com/rob-luke), [@andreea-zaharia](https://github.com" -"/andreea-zaharia), [@kinshukdua](https://github.com/kinshukdua), " -"[@nfnt](https://github.com/nfnt), " -"[@tatiana-s](https://github.com/tatiana-s), " -"[@TParcollet](https://github.com/TParcollet), " -"[@vballoli](https://github.com/vballoli), " -"[@negedng](https://github.com/negedng), " -"[@RISHIKESHAVAN](https://github.com/RISHIKESHAVAN), " -"[@hei411](https://github.com/hei411), " -"[@SebastianSpeitel](https://github.com/SebastianSpeitel), " -"[@AmitChaulwar](https://github.com/AmitChaulwar), " -"[@Rubiel1](https://github.com/Rubiel1), [@FANTOME-PAN](https://github.com" -"/FANTOME-PAN), [@Rono-BC](https://github.com/Rono-BC), " -"[@lbhm](https://github.com/lbhm), " -"[@sishtiaq](https://github.com/sishtiaq), " -"[@remde](https://github.com/remde), [@Jueun-Park](https://github.com" -"/Jueun-Park), [@architjen](https://github.com/architjen), " -"[@PratikGarai](https://github.com/PratikGarai), " -"[@mrinaald](https://github.com/mrinaald), " -"[@zliel](https://github.com/zliel), " -"[@MeiruiJiang](https://github.com/MeiruiJiang), " -"[@sancarlim](https://github.com/sancarlim), " -"[@gubertoli](https://github.com/gubertoli), " -"[@Vingt100](https://github.com/Vingt100), " -"[@MakGulati](https://github.com/MakGulati), " -"[@cozek](https://github.com/cozek), " -"[@jafermarq](https://github.com/jafermarq), " -"[@sisco0](https://github.com/sisco0), " -"[@akhilmathurs](https://github.com/akhilmathurs), " -"[@CanTuerk](https://github.com/CanTuerk), " -"[@mariaboerner1987](https://github.com/mariaboerner1987), " -"[@pedropgusmao](https://github.com/pedropgusmao), " -"[@tanertopal](https://github.com/tanertopal), " 
-"[@danieljanes](https://github.com/danieljanes)." +"`Adam Narozniak`, `Anass Anhari`, `Charles Beauville`, `Dana-Farber`, " +"`Daniel J. Beutel`, `Daniel Nata Nugraha`, `Edoardo Gabrielli`, `Gustavo " +"Bertoli`, `Heng Pan`, `Javier`, `Mahdi`, `Steven Hé (Sīchàng)`, `Taner " +"Topal`, `achiverram28`, `danielnugraha`, `eunchung`, `ruthgal` " msgstr "" -#: ../../source/ref-changelog.md:556 +#: ../../source/ref-changelog.md:278 msgid "" -"**All arguments must be passed as keyword arguments** " -"([#1338](https://github.com/adap/flower/pull/1338))" +"**Introduce new simulation engine** " +"([#1969](https://github.com/adap/flower/pull/1969), " +"[#2221](https://github.com/adap/flower/pull/2221), " +"[#2248](https://github.com/adap/flower/pull/2248))" msgstr "" -#: ../../source/ref-changelog.md:558 +#: ../../source/ref-changelog.md:280 msgid "" -"Pass all arguments as keyword arguments, positional arguments are not " -"longer supported. Code that uses positional arguments (e.g., " -"`start_client(\"127.0.0.1:8080\", FlowerClient())`) must add the keyword " -"for each positional argument (e.g., " -"`start_client(server_address=\"127.0.0.1:8080\", " -"client=FlowerClient())`)." +"The new simulation engine has been rewritten from the ground up, yet it " +"remains fully backwards compatible. It offers much improved stability and" +" memory handling, especially when working with GPUs. Simulations " +"transparently adapt to different settings to scale simulation in CPU-" +"only, CPU+GPU, multi-GPU, or multi-node multi-GPU environments." msgstr "" -#: ../../source/ref-changelog.md:560 +#: ../../source/ref-changelog.md:282 msgid "" -"**Introduce configuration object** `ServerConfig` **in** `start_server` " -"**and** `start_simulation` " -"([#1317](https://github.com/adap/flower/pull/1317))" +"Comprehensive documentation includes a new [how-to run " +"simulations](https://flower.ai/docs/framework/how-to-run-" +"simulations.html) guide, new [simulation-" +"pytorch](https://flower.ai/docs/examples/simulation-pytorch.html) and " +"[simulation-tensorflow](https://flower.ai/docs/examples/simulation-" +"tensorflow.html) notebooks, and a new [YouTube tutorial " +"series](https://www.youtube.com/watch?v=cRebUIGB5RU&list=PLNG4feLHqCWlnj8a_E1A_n5zr2-8pafTB)." msgstr "" -#: ../../source/ref-changelog.md:562 +#: ../../source/ref-changelog.md:284 msgid "" -"Instead of a config dictionary `{\"num_rounds\": 3, \"round_timeout\": " -"600.0}`, `start_server` and `start_simulation` now expect a configuration" -" object of type `flwr.server.ServerConfig`. `ServerConfig` takes the same" -" arguments that as the previous config dict, but it makes writing type-" -"safe code easier and the default parameters values more transparent." 
+"**Restructure Flower Docs** " +"([#1824](https://github.com/adap/flower/pull/1824), " +"[#1865](https://github.com/adap/flower/pull/1865), " +"[#1884](https://github.com/adap/flower/pull/1884), " +"[#1887](https://github.com/adap/flower/pull/1887), " +"[#1919](https://github.com/adap/flower/pull/1919), " +"[#1922](https://github.com/adap/flower/pull/1922), " +"[#1920](https://github.com/adap/flower/pull/1920), " +"[#1923](https://github.com/adap/flower/pull/1923), " +"[#1924](https://github.com/adap/flower/pull/1924), " +"[#1962](https://github.com/adap/flower/pull/1962), " +"[#2006](https://github.com/adap/flower/pull/2006), " +"[#2133](https://github.com/adap/flower/pull/2133), " +"[#2203](https://github.com/adap/flower/pull/2203), " +"[#2215](https://github.com/adap/flower/pull/2215), " +"[#2122](https://github.com/adap/flower/pull/2122), " +"[#2223](https://github.com/adap/flower/pull/2223), " +"[#2219](https://github.com/adap/flower/pull/2219), " +"[#2232](https://github.com/adap/flower/pull/2232), " +"[#2233](https://github.com/adap/flower/pull/2233), " +"[#2234](https://github.com/adap/flower/pull/2234), " +"[#2235](https://github.com/adap/flower/pull/2235), " +"[#2237](https://github.com/adap/flower/pull/2237), " +"[#2238](https://github.com/adap/flower/pull/2238), " +"[#2242](https://github.com/adap/flower/pull/2242), " +"[#2231](https://github.com/adap/flower/pull/2231), " +"[#2243](https://github.com/adap/flower/pull/2243), " +"[#2227](https://github.com/adap/flower/pull/2227))" msgstr "" -#: ../../source/ref-changelog.md:564 +#: ../../source/ref-changelog.md:286 msgid "" -"**Rename built-in strategy parameters for clarity** " -"([#1334](https://github.com/adap/flower/pull/1334))" +"Much effort went into a completely restructured Flower docs experience. " +"The documentation on [flower.ai/docs](https://flower.ai/docs) is now " +"divided into Flower Framework, Flower Baselines, Flower Android SDK, " +"Flower iOS SDK, and code example projects." msgstr "" -#: ../../source/ref-changelog.md:566 +#: ../../source/ref-changelog.md:288 msgid "" -"The following built-in strategy parameters were renamed to improve " -"readability and consistency with other API's:" +"**Introduce Flower Swift SDK** " +"([#1858](https://github.com/adap/flower/pull/1858), " +"[#1897](https://github.com/adap/flower/pull/1897))" msgstr "" -#: ../../source/ref-changelog.md:568 -msgid "`fraction_eval` --> `fraction_evaluate`" +#: ../../source/ref-changelog.md:290 +msgid "" +"This is the first preview release of the Flower Swift SDK. Flower support" +" on iOS is improving, and alongside the Swift SDK and code example, there" +" is now also an iOS quickstart tutorial." msgstr "" -#: ../../source/ref-changelog.md:569 -msgid "`min_eval_clients` --> `min_evaluate_clients`" +#: ../../source/ref-changelog.md:292 +msgid "" +"**Introduce Flower Android SDK** " +"([#2131](https://github.com/adap/flower/pull/2131))" msgstr "" -#: ../../source/ref-changelog.md:570 -msgid "`eval_fn` --> `evaluate_fn`" +#: ../../source/ref-changelog.md:294 +msgid "" +"This is the first preview release of the Flower Kotlin SDK. Flower " +"support on Android is improving, and alongside the Kotlin SDK and code " +"example, there is now also an Android quickstart tutorial." 
msgstr "" -#: ../../source/ref-changelog.md:572 +#: ../../source/ref-changelog.md:296 msgid "" -"**Update default arguments of built-in strategies** " -"([#1278](https://github.com/adap/flower/pull/1278))" +"**Introduce new end-to-end testing infrastructure** " +"([#1842](https://github.com/adap/flower/pull/1842), " +"[#2071](https://github.com/adap/flower/pull/2071), " +"[#2072](https://github.com/adap/flower/pull/2072), " +"[#2068](https://github.com/adap/flower/pull/2068), " +"[#2067](https://github.com/adap/flower/pull/2067), " +"[#2069](https://github.com/adap/flower/pull/2069), " +"[#2073](https://github.com/adap/flower/pull/2073), " +"[#2070](https://github.com/adap/flower/pull/2070), " +"[#2074](https://github.com/adap/flower/pull/2074), " +"[#2082](https://github.com/adap/flower/pull/2082), " +"[#2084](https://github.com/adap/flower/pull/2084), " +"[#2093](https://github.com/adap/flower/pull/2093), " +"[#2109](https://github.com/adap/flower/pull/2109), " +"[#2095](https://github.com/adap/flower/pull/2095), " +"[#2140](https://github.com/adap/flower/pull/2140), " +"[#2137](https://github.com/adap/flower/pull/2137), " +"[#2165](https://github.com/adap/flower/pull/2165))" msgstr "" -#: ../../source/ref-changelog.md:574 +#: ../../source/ref-changelog.md:298 msgid "" -"All built-in strategies now use `fraction_fit=1.0` and " -"`fraction_evaluate=1.0`, which means they select *all* currently " -"available clients for training and evaluation. Projects that relied on " -"the previous default values can get the previous behaviour by " -"initializing the strategy in the following way:" +"A new testing infrastructure ensures that new changes stay compatible " +"with existing framework integrations or strategies." msgstr "" -#: ../../source/ref-changelog.md:576 -msgid "`strategy = FedAvg(fraction_fit=0.1, fraction_evaluate=0.1)`" +#: ../../source/ref-changelog.md:300 +msgid "**Deprecate Python 3.7**" msgstr "" -#: ../../source/ref-changelog.md:578 +#: ../../source/ref-changelog.md:302 msgid "" -"**Add** `server_round` **to** `Strategy.evaluate` " -"([#1334](https://github.com/adap/flower/pull/1334))" +"Since Python 3.7 reached its end of life (EOL) on 2023-06-27, support for" +" Python 3.7 is now deprecated and will be removed in an upcoming release." msgstr "" -#: ../../source/ref-changelog.md:580 +#: ../../source/ref-changelog.md:304 msgid "" -"The `Strategy` method `evaluate` now receives the current round of " -"federated learning/evaluation as the first parameter." +"**Add new** `FedTrimmedAvg` **strategy** " +"([#1769](https://github.com/adap/flower/pull/1769), " +"[#1853](https://github.com/adap/flower/pull/1853))" msgstr "" -#: ../../source/ref-changelog.md:582 +#: ../../source/ref-changelog.md:306 msgid "" -"**Add** `server_round` **and** `config` **parameters to** `evaluate_fn` " -"([#1334](https://github.com/adap/flower/pull/1334))" +"The new `FedTrimmedAvg` strategy implements Trimmed Mean by [Dong Yin, " +"2018](https://arxiv.org/abs/1803.01498)." msgstr "" -#: ../../source/ref-changelog.md:584 +#: ../../source/ref-changelog.md:308 msgid "" -"The `evaluate_fn` passed to built-in strategies like `FedAvg` now takes " -"three parameters: (1) The current round of federated learning/evaluation " -"(`server_round`), (2) the model parameters to evaluate (`parameters`), " -"and (3) a config dictionary (`config`)." 
+"**Introduce start_driver** " +"([#1697](https://github.com/adap/flower/pull/1697))" msgstr "" -#: ../../source/ref-changelog.md:586 +#: ../../source/ref-changelog.md:310 msgid "" -"**Rename** `rnd` **to** `server_round` " -"([#1321](https://github.com/adap/flower/pull/1321))" +"In addition to `start_server` and using the raw Driver API, there is a " +"new `start_driver` function that allows for running `start_server` " +"scripts as a Flower driver with only a single-line code change. Check out" +" the `mt-pytorch` code example to see a working example using " +"`start_driver`." msgstr "" -#: ../../source/ref-changelog.md:588 +#: ../../source/ref-changelog.md:312 msgid "" -"Several Flower methods and functions (`evaluate_fn`, `configure_fit`, " -"`aggregate_fit`, `configure_evaluate`, `aggregate_evaluate`) receive the " -"current round of federated learning/evaluation as their first parameter. " -"To improve reaability and avoid confusion with *random*, this parameter " -"has been renamed from `rnd` to `server_round`." +"**Add parameter aggregation to** `mt-pytorch` **code example** " +"([#1785](https://github.com/adap/flower/pull/1785))" msgstr "" -#: ../../source/ref-changelog.md:590 +#: ../../source/ref-changelog.md:314 msgid "" -"**Move** `flwr.dataset` **to** `flwr_baselines` " -"([#1273](https://github.com/adap/flower/pull/1273))" -msgstr "" - -#: ../../source/ref-changelog.md:592 -msgid "The experimental package `flwr.dataset` was migrated to Flower Baselines." +"The `mt-pytorch` example shows how to aggregate parameters when writing a" +" driver script. The included `driver.py` and `server.py` have been " +"aligned to demonstrate both the low-level way and the high-level way of " +"building server-side logic." msgstr "" -#: ../../source/ref-changelog.md:594 +#: ../../source/ref-changelog.md:316 msgid "" -"**Remove experimental strategies** " -"([#1280](https://github.com/adap/flower/pull/1280))" +"**Migrate experimental REST API to Starlette** " +"([2171](https://github.com/adap/flower/pull/2171))" msgstr "" -#: ../../source/ref-changelog.md:596 +#: ../../source/ref-changelog.md:318 msgid "" -"Remove unmaintained experimental strategies (`FastAndSlow`, `FedFSv0`, " -"`FedFSv1`)." +"The (experimental) REST API used to be implemented in " +"[FastAPI](https://fastapi.tiangolo.com/), but it has now been migrated to" +" use [Starlette](https://www.starlette.io/) directly." msgstr "" -#: ../../source/ref-changelog.md:598 +#: ../../source/ref-changelog.md:320 msgid "" -"**Rename** `Weights` **to** `NDArrays` " -"([#1258](https://github.com/adap/flower/pull/1258), " -"[#1259](https://github.com/adap/flower/pull/1259))" +"Please note: The REST request-response API is still experimental and will" +" likely change significantly over time." msgstr "" -#: ../../source/ref-changelog.md:600 +#: ../../source/ref-changelog.md:322 msgid "" -"`flwr.common.Weights` was renamed to `flwr.common.NDArrays` to better " -"capture what this type is all about." 
+"**Introduce experimental gRPC request-response API** " +"([#1867](https://github.com/adap/flower/pull/1867), " +"[#1901](https://github.com/adap/flower/pull/1901))" msgstr "" -#: ../../source/ref-changelog.md:602 +#: ../../source/ref-changelog.md:324 msgid "" -"**Remove antiquated** `force_final_distributed_eval` **from** " -"`start_server` ([#1258](https://github.com/adap/flower/pull/1258), " -"[#1259](https://github.com/adap/flower/pull/1259))" +"In addition to the existing gRPC API (based on bidirectional streaming) " +"and the experimental REST API, there is now a new gRPC API that uses a " +"request-response model to communicate with client nodes." msgstr "" -#: ../../source/ref-changelog.md:604 +#: ../../source/ref-changelog.md:326 msgid "" -"The `start_server` parameter `force_final_distributed_eval` has long been" -" a historic artefact, in this release it is finally gone for good." +"Please note: The gRPC request-response API is still experimental and will" +" likely change significantly over time." msgstr "" -#: ../../source/ref-changelog.md:606 +#: ../../source/ref-changelog.md:328 msgid "" -"**Make** `get_parameters` **configurable** " -"([#1242](https://github.com/adap/flower/pull/1242))" +"**Replace the experimental** `start_client(rest=True)` **with the new** " +"`start_client(transport=\"rest\")` " +"([#1880](https://github.com/adap/flower/pull/1880))" msgstr "" -#: ../../source/ref-changelog.md:608 +#: ../../source/ref-changelog.md:330 msgid "" -"The `get_parameters` method now accepts a configuration dictionary, just " -"like `get_properties`, `fit`, and `evaluate`." +"The (experimental) `start_client` argument `rest` was deprecated in " +"favour of a new argument `transport`. `start_client(transport=\"rest\")` " +"will yield the same behaviour as `start_client(rest=True)` did before. " +"All code should migrate to the new argument `transport`. The deprecated " +"argument `rest` will be removed in a future release." msgstr "" -#: ../../source/ref-changelog.md:610 +#: ../../source/ref-changelog.md:332 msgid "" -"**Replace** `num_rounds` **in** `start_simulation` **with new** `config` " -"**parameter** ([#1281](https://github.com/adap/flower/pull/1281))" +"**Add a new gRPC option** " +"([#2197](https://github.com/adap/flower/pull/2197))" msgstr "" -#: ../../source/ref-changelog.md:612 +#: ../../source/ref-changelog.md:334 msgid "" -"The `start_simulation` function now accepts a configuration dictionary " -"`config` instead of the `num_rounds` integer. This improves the " -"consistency between `start_simulation` and `start_server` and makes " -"transitioning between the two easier." +"We now start a gRPC server with the `grpc.keepalive_permit_without_calls`" +" option set to 0 by default. This prevents the clients from sending " +"keepalive pings when there is no outstanding stream." msgstr "" -#: ../../source/ref-changelog.md:616 +#: ../../source/ref-changelog.md:336 msgid "" -"**Support Python 3.10** " -"([#1320](https://github.com/adap/flower/pull/1320))" +"**Improve example notebooks** " +"([#2005](https://github.com/adap/flower/pull/2005))" msgstr "" -#: ../../source/ref-changelog.md:618 -msgid "" -"The previous Flower release introduced experimental support for Python " -"3.10, this release declares Python 3.10 support as stable." +#: ../../source/ref-changelog.md:338 +msgid "There's a new 30min Federated Learning PyTorch tutorial!" 
msgstr "" -#: ../../source/ref-changelog.md:620 +#: ../../source/ref-changelog.md:340 msgid "" -"**Make all** `Client` **and** `NumPyClient` **methods optional** " -"([#1260](https://github.com/adap/flower/pull/1260), " -"[#1277](https://github.com/adap/flower/pull/1277))" +"**Example updates** ([#1772](https://github.com/adap/flower/pull/1772), " +"[#1873](https://github.com/adap/flower/pull/1873), " +"[#1981](https://github.com/adap/flower/pull/1981), " +"[#1988](https://github.com/adap/flower/pull/1988), " +"[#1984](https://github.com/adap/flower/pull/1984), " +"[#1982](https://github.com/adap/flower/pull/1982), " +"[#2112](https://github.com/adap/flower/pull/2112), " +"[#2144](https://github.com/adap/flower/pull/2144), " +"[#2174](https://github.com/adap/flower/pull/2174), " +"[#2225](https://github.com/adap/flower/pull/2225), " +"[#2183](https://github.com/adap/flower/pull/2183))" msgstr "" -#: ../../source/ref-changelog.md:622 +#: ../../source/ref-changelog.md:342 msgid "" -"The `Client`/`NumPyClient` methods `get_properties`, `get_parameters`, " -"`fit`, and `evaluate` are all optional. This enables writing clients that" -" implement, for example, only `fit`, but no other method. No need to " -"implement `evaluate` when using centralized evaluation!" +"Many examples have received significant updates, including simplified " +"advanced-tensorflow and advanced-pytorch examples, improved macOS " +"compatibility of TensorFlow examples, and code examples for simulation. A" +" major upgrade is that all code examples now have a `requirements.txt` " +"(in addition to `pyproject.toml`)." msgstr "" -#: ../../source/ref-changelog.md:624 +#: ../../source/ref-changelog.md:344 msgid "" -"**Enable passing a** `Server` **instance to** `start_simulation` " -"([#1281](https://github.com/adap/flower/pull/1281))" +"**General improvements** " +"([#1872](https://github.com/adap/flower/pull/1872), " +"[#1866](https://github.com/adap/flower/pull/1866), " +"[#1884](https://github.com/adap/flower/pull/1884), " +"[#1837](https://github.com/adap/flower/pull/1837), " +"[#1477](https://github.com/adap/flower/pull/1477), " +"[#2171](https://github.com/adap/flower/pull/2171))" msgstr "" -#: ../../source/ref-changelog.md:626 -msgid "" -"Similar to `start_server`, `start_simulation` now accepts a full `Server`" -" instance. This enables users to heavily customize the execution of " -"eperiments and opens the door to running, for example, async FL using the" -" Virtual Client Engine." +#: ../../source/ref-changelog.md:352 +msgid "v1.4.0 (2023-04-21)" msgstr "" -#: ../../source/ref-changelog.md:628 +#: ../../source/ref-changelog.md:358 msgid "" -"**Update code examples** " -"([#1291](https://github.com/adap/flower/pull/1291), " -"[#1286](https://github.com/adap/flower/pull/1286), " -"[#1282](https://github.com/adap/flower/pull/1282))" +"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " +"`Chenyang Ma (Danny)`, `Daniel J. Beutel`, `Edoardo`, `Gautam Jajoo`, " +"`Iacob-Alexandru-Andrei`, `JDRanpariya`, `Jean Charle Yaacoub`, `Kunal " +"Sarkhel`, `L. 
Jiang`, `Lennart Behme`, `Max Kapsecker`, `Michał`, `Nic " +"Lane`, `Nikolaos Episkopos`, `Ragy`, `Saurav Maheshkar`, `Semo Yang`, " +"`Steve Laskaridis`, `Steven Hé (Sīchàng)`, `Taner Topal`" msgstr "" -#: ../../source/ref-changelog.md:630 +#: ../../source/ref-changelog.md:362 msgid "" -"Many code examples received small or even large maintenance updates, " -"among them are" +"**Introduce support for XGBoost (**`FedXgbNnAvg` **strategy and " +"example)** ([#1694](https://github.com/adap/flower/pull/1694), " +"[#1709](https://github.com/adap/flower/pull/1709), " +"[#1715](https://github.com/adap/flower/pull/1715), " +"[#1717](https://github.com/adap/flower/pull/1717), " +"[#1763](https://github.com/adap/flower/pull/1763), " +"[#1795](https://github.com/adap/flower/pull/1795))" msgstr "" -#: ../../source/ref-changelog.md:632 -msgid "`scikit-learn`" +#: ../../source/ref-changelog.md:364 +msgid "" +"XGBoost is a tree-based ensemble machine learning algorithm that uses " +"gradient boosting to improve model accuracy. We added a new `FedXgbNnAvg`" +" " +"[strategy](https://github.com/adap/flower/tree/main/src/py/flwr/server/strategy/fedxgb_nn_avg.py)," +" and a [code example](https://github.com/adap/flower/tree/main/examples" +"/xgboost-quickstart) that demonstrates the usage of this new strategy in " +"an XGBoost project." msgstr "" -#: ../../source/ref-changelog.md:633 -msgid "`simulation_pytorch`" +#: ../../source/ref-changelog.md:366 +msgid "" +"**Introduce iOS SDK (preview)** " +"([#1621](https://github.com/adap/flower/pull/1621), " +"[#1764](https://github.com/adap/flower/pull/1764))" msgstr "" -#: ../../source/ref-changelog.md:634 -msgid "`quickstart_pytorch`" +#: ../../source/ref-changelog.md:368 +msgid "" +"This is a major update for anyone wanting to implement Federated Learning" +" on iOS mobile devices. We now have a swift iOS SDK present under " +"[src/swift/flwr](https://github.com/adap/flower/tree/main/src/swift/flwr)" +" that will facilitate greatly the app creating process. To showcase its " +"use, the [iOS " +"example](https://github.com/adap/flower/tree/main/examples/ios) has also " +"been updated!" msgstr "" -#: ../../source/ref-changelog.md:635 -msgid "`quickstart_simulation`" +#: ../../source/ref-changelog.md:370 +msgid "" +"**Introduce new \"What is Federated Learning?\" tutorial** " +"([#1657](https://github.com/adap/flower/pull/1657), " +"[#1721](https://github.com/adap/flower/pull/1721))" msgstr "" -#: ../../source/ref-changelog.md:636 -msgid "`quickstart_tensorflow`" +#: ../../source/ref-changelog.md:372 +msgid "" +"A new [entry-level tutorial](https://flower.ai/docs/framework/tutorial-" +"what-is-federated-learning.html) in our documentation explains the basics" +" of Fedetated Learning. It enables anyone who's unfamiliar with Federated" +" Learning to start their journey with Flower. Forward it to anyone who's " +"interested in Federated Learning!" 
msgstr "" -#: ../../source/ref-changelog.md:637 -msgid "`advanced_tensorflow`" +#: ../../source/ref-changelog.md:374 +msgid "" +"**Introduce new Flower Baseline: FedProx MNIST** " +"([#1513](https://github.com/adap/flower/pull/1513), " +"[#1680](https://github.com/adap/flower/pull/1680), " +"[#1681](https://github.com/adap/flower/pull/1681), " +"[#1679](https://github.com/adap/flower/pull/1679))" msgstr "" -#: ../../source/ref-changelog.md:639 +#: ../../source/ref-changelog.md:376 msgid "" -"**Remove the obsolete simulation example** " -"([#1328](https://github.com/adap/flower/pull/1328))" +"This new baseline replicates the MNIST+CNN task from the paper [Federated" +" Optimization in Heterogeneous Networks (Li et al., " +"2018)](https://arxiv.org/abs/1812.06127). It uses the `FedProx` strategy," +" which aims at making convergence more robust in heterogeneous settings." msgstr "" -#: ../../source/ref-changelog.md:641 +#: ../../source/ref-changelog.md:378 msgid "" -"Removes the obsolete `simulation` example and renames " -"`quickstart_simulation` to `simulation_tensorflow` so it fits withs the " -"naming of `simulation_pytorch`" +"**Introduce new Flower Baseline: FedAvg FEMNIST** " +"([#1655](https://github.com/adap/flower/pull/1655))" msgstr "" -#: ../../source/ref-changelog.md:643 +#: ../../source/ref-changelog.md:380 msgid "" -"**Update documentation** " -"([#1223](https://github.com/adap/flower/pull/1223), " -"[#1209](https://github.com/adap/flower/pull/1209), " -"[#1251](https://github.com/adap/flower/pull/1251), " -"[#1257](https://github.com/adap/flower/pull/1257), " -"[#1267](https://github.com/adap/flower/pull/1267), " -"[#1268](https://github.com/adap/flower/pull/1268), " -"[#1300](https://github.com/adap/flower/pull/1300), " -"[#1304](https://github.com/adap/flower/pull/1304), " -"[#1305](https://github.com/adap/flower/pull/1305), " -"[#1307](https://github.com/adap/flower/pull/1307))" +"This new baseline replicates an experiment evaluating the performance of " +"the FedAvg algorithm on the FEMNIST dataset from the paper [LEAF: A " +"Benchmark for Federated Settings (Caldas et al., " +"2018)](https://arxiv.org/abs/1812.01097)." msgstr "" -#: ../../source/ref-changelog.md:645 +#: ../../source/ref-changelog.md:382 msgid "" -"One substantial documentation update fixes multiple smaller rendering " -"issues, makes titles more succinct to improve navigation, removes a " -"deprecated library, updates documentation dependencies, includes the " -"`flwr.common` module in the API reference, includes support for markdown-" -"based documentation, migrates the changelog from `.rst` to `.md`, and " -"fixes a number of smaller details!" +"**Introduce (experimental) REST API** " +"([#1594](https://github.com/adap/flower/pull/1594), " +"[#1690](https://github.com/adap/flower/pull/1690), " +"[#1695](https://github.com/adap/flower/pull/1695), " +"[#1712](https://github.com/adap/flower/pull/1712), " +"[#1802](https://github.com/adap/flower/pull/1802), " +"[#1770](https://github.com/adap/flower/pull/1770), " +"[#1733](https://github.com/adap/flower/pull/1733))" msgstr "" -#: ../../source/ref-changelog.md:647 ../../source/ref-changelog.md:702 -#: ../../source/ref-changelog.md:771 ../../source/ref-changelog.md:810 -msgid "**Minor updates**" +#: ../../source/ref-changelog.md:384 +msgid "" +"A new REST API has been introduced as an alternative to the gRPC-based " +"communication stack. In this initial version, the REST API only supports " +"anonymous clients." 
msgstr "" -#: ../../source/ref-changelog.md:649 +#: ../../source/ref-changelog.md:386 msgid "" -"Add round number to fit and evaluate log messages " -"([#1266](https://github.com/adap/flower/pull/1266))" +"Please note: The REST API is still experimental and will likely change " +"significantly over time." msgstr "" -#: ../../source/ref-changelog.md:650 +#: ../../source/ref-changelog.md:388 msgid "" -"Add secure gRPC connection to the `advanced_tensorflow` code example " -"([#847](https://github.com/adap/flower/pull/847))" +"**Improve the (experimental) Driver API** " +"([#1663](https://github.com/adap/flower/pull/1663), " +"[#1666](https://github.com/adap/flower/pull/1666), " +"[#1667](https://github.com/adap/flower/pull/1667), " +"[#1664](https://github.com/adap/flower/pull/1664), " +"[#1675](https://github.com/adap/flower/pull/1675), " +"[#1676](https://github.com/adap/flower/pull/1676), " +"[#1693](https://github.com/adap/flower/pull/1693), " +"[#1662](https://github.com/adap/flower/pull/1662), " +"[#1794](https://github.com/adap/flower/pull/1794))" msgstr "" -#: ../../source/ref-changelog.md:651 +#: ../../source/ref-changelog.md:390 msgid "" -"Update developer tooling " -"([#1231](https://github.com/adap/flower/pull/1231), " -"[#1276](https://github.com/adap/flower/pull/1276), " -"[#1301](https://github.com/adap/flower/pull/1301), " -"[#1310](https://github.com/adap/flower/pull/1310))" +"The Driver API is still an experimental feature, but this release " +"introduces some major upgrades. One of the main improvements is the " +"introduction of an SQLite database to store server state on disk (instead" +" of in-memory). Another improvement is that tasks (instructions or " +"results) that have been delivered will now be deleted. This greatly " +"improves the memory efficiency of a long-running Flower server." msgstr "" -#: ../../source/ref-changelog.md:652 +#: ../../source/ref-changelog.md:392 msgid "" -"Rename ProtoBuf messages to improve consistency " -"([#1214](https://github.com/adap/flower/pull/1214), " -"[#1258](https://github.com/adap/flower/pull/1258), " -"[#1259](https://github.com/adap/flower/pull/1259))" +"**Fix spilling issues related to Ray during simulations** " +"([#1698](https://github.com/adap/flower/pull/1698))" msgstr "" -#: ../../source/ref-changelog.md:654 -msgid "v0.19.0 (2022-05-18)" +#: ../../source/ref-changelog.md:394 +msgid "" +"While running long simulations, `ray` was sometimes spilling huge amounts" +" of data that would make the training unable to continue. This is now " +"fixed! 🎉" msgstr "" -#: ../../source/ref-changelog.md:658 +#: ../../source/ref-changelog.md:396 msgid "" -"**Flower Baselines (preview): FedOpt, FedBN, FedAvgM** " -"([#919](https://github.com/adap/flower/pull/919), " -"[#1127](https://github.com/adap/flower/pull/1127), " -"[#914](https://github.com/adap/flower/pull/914))" +"**Add new example using** `TabNet` **and Flower** " +"([#1725](https://github.com/adap/flower/pull/1725))" msgstr "" -#: ../../source/ref-changelog.md:660 +#: ../../source/ref-changelog.md:398 msgid "" -"The first preview release of Flower Baselines has arrived! We're " -"kickstarting Flower Baselines with implementations of FedOpt (FedYogi, " -"FedAdam, FedAdagrad), FedBN, and FedAvgM. Check the documentation on how " -"to use [Flower Baselines](https://flower.ai/docs/using-baselines.html). 
" -"With this first preview release we're also inviting the community to " -"[contribute their own baselines](https://flower.ai/docs/baselines/how-to-" -"contribute-baselines.html)." +"TabNet is a powerful and flexible framework for training machine learning" +" models on tabular data. We now have a federated example using Flower: " +"[quickstart-tabnet](https://github.com/adap/flower/tree/main/examples" +"/quickstart-tabnet)." msgstr "" -#: ../../source/ref-changelog.md:662 +#: ../../source/ref-changelog.md:400 msgid "" -"**C++ client SDK (preview) and code example** " -"([#1111](https://github.com/adap/flower/pull/1111))" +"**Add new how-to guide for monitoring simulations** " +"([#1649](https://github.com/adap/flower/pull/1649))" msgstr "" -#: ../../source/ref-changelog.md:664 +#: ../../source/ref-changelog.md:402 msgid "" -"Preview support for Flower clients written in C++. The C++ preview " -"includes a Flower client SDK and a quickstart code example that " -"demonstrates a simple C++ client using the SDK." +"We now have a documentation guide to help users monitor their performance" +" during simulations." msgstr "" -#: ../../source/ref-changelog.md:666 +#: ../../source/ref-changelog.md:404 msgid "" -"**Add experimental support for Python 3.10 and Python 3.11** " -"([#1135](https://github.com/adap/flower/pull/1135))" +"**Add training metrics to** `History` **object during simulations** " +"([#1696](https://github.com/adap/flower/pull/1696))" msgstr "" -#: ../../source/ref-changelog.md:668 +#: ../../source/ref-changelog.md:406 msgid "" -"Python 3.10 is the latest stable release of Python and Python 3.11 is due" -" to be released in October. This Flower release adds experimental support" -" for both Python versions." +"The `fit_metrics_aggregation_fn` can be used to aggregate training " +"metrics, but previous releases did not save the results in the `History` " +"object. This is now the case!" 
msgstr "" -#: ../../source/ref-changelog.md:670 +#: ../../source/ref-changelog.md:408 msgid "" -"**Aggregate custom metrics through user-provided functions** " -"([#1144](https://github.com/adap/flower/pull/1144))" +"**General improvements** " +"([#1659](https://github.com/adap/flower/pull/1659), " +"[#1646](https://github.com/adap/flower/pull/1646), " +"[#1647](https://github.com/adap/flower/pull/1647), " +"[#1471](https://github.com/adap/flower/pull/1471), " +"[#1648](https://github.com/adap/flower/pull/1648), " +"[#1651](https://github.com/adap/flower/pull/1651), " +"[#1652](https://github.com/adap/flower/pull/1652), " +"[#1653](https://github.com/adap/flower/pull/1653), " +"[#1659](https://github.com/adap/flower/pull/1659), " +"[#1665](https://github.com/adap/flower/pull/1665), " +"[#1670](https://github.com/adap/flower/pull/1670), " +"[#1672](https://github.com/adap/flower/pull/1672), " +"[#1677](https://github.com/adap/flower/pull/1677), " +"[#1684](https://github.com/adap/flower/pull/1684), " +"[#1683](https://github.com/adap/flower/pull/1683), " +"[#1686](https://github.com/adap/flower/pull/1686), " +"[#1682](https://github.com/adap/flower/pull/1682), " +"[#1685](https://github.com/adap/flower/pull/1685), " +"[#1692](https://github.com/adap/flower/pull/1692), " +"[#1705](https://github.com/adap/flower/pull/1705), " +"[#1708](https://github.com/adap/flower/pull/1708), " +"[#1711](https://github.com/adap/flower/pull/1711), " +"[#1713](https://github.com/adap/flower/pull/1713), " +"[#1714](https://github.com/adap/flower/pull/1714), " +"[#1718](https://github.com/adap/flower/pull/1718), " +"[#1716](https://github.com/adap/flower/pull/1716), " +"[#1723](https://github.com/adap/flower/pull/1723), " +"[#1735](https://github.com/adap/flower/pull/1735), " +"[#1678](https://github.com/adap/flower/pull/1678), " +"[#1750](https://github.com/adap/flower/pull/1750), " +"[#1753](https://github.com/adap/flower/pull/1753), " +"[#1736](https://github.com/adap/flower/pull/1736), " +"[#1766](https://github.com/adap/flower/pull/1766), " +"[#1760](https://github.com/adap/flower/pull/1760), " +"[#1775](https://github.com/adap/flower/pull/1775), " +"[#1776](https://github.com/adap/flower/pull/1776), " +"[#1777](https://github.com/adap/flower/pull/1777), " +"[#1779](https://github.com/adap/flower/pull/1779), " +"[#1784](https://github.com/adap/flower/pull/1784), " +"[#1773](https://github.com/adap/flower/pull/1773), " +"[#1755](https://github.com/adap/flower/pull/1755), " +"[#1789](https://github.com/adap/flower/pull/1789), " +"[#1788](https://github.com/adap/flower/pull/1788), " +"[#1798](https://github.com/adap/flower/pull/1798), " +"[#1799](https://github.com/adap/flower/pull/1799), " +"[#1739](https://github.com/adap/flower/pull/1739), " +"[#1800](https://github.com/adap/flower/pull/1800), " +"[#1804](https://github.com/adap/flower/pull/1804), " +"[#1805](https://github.com/adap/flower/pull/1805))" msgstr "" -#: ../../source/ref-changelog.md:672 -msgid "" -"Custom metrics (e.g., `accuracy`) can now be aggregated without having to" -" customize the strategy. Built-in strategies support two new arguments, " -"`fit_metrics_aggregation_fn` and `evaluate_metrics_aggregation_fn`, that " -"allow passing custom metric aggregation functions." 
+#: ../../source/ref-changelog.md:416 +msgid "v1.3.0 (2023-02-06)" msgstr "" -#: ../../source/ref-changelog.md:674 +#: ../../source/ref-changelog.md:422 msgid "" -"**User-configurable round timeout** " -"([#1162](https://github.com/adap/flower/pull/1162))" +"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " +"`Daniel J. Beutel`, `JDRanpariya`, `Lennart Behme`, `Taner Topal`" msgstr "" -#: ../../source/ref-changelog.md:676 +#: ../../source/ref-changelog.md:426 msgid "" -"A new configuration value allows the round timeout to be set for " -"`start_server` and `start_simulation`. If the `config` dictionary " -"contains a `round_timeout` key (with a `float` value in seconds), the " -"server will wait *at least* `round_timeout` seconds before it closes the " -"connection." +"**Add support for** `workload_id` **and** `group_id` **in Driver API** " +"([#1595](https://github.com/adap/flower/pull/1595))" msgstr "" -#: ../../source/ref-changelog.md:678 +#: ../../source/ref-changelog.md:428 msgid "" -"**Enable both federated evaluation and centralized evaluation to be used " -"at the same time in all built-in strategies** " -"([#1091](https://github.com/adap/flower/pull/1091))" +"The (experimental) Driver API now supports a `workload_id` that can be " +"used to identify which workload a task belongs to. It also supports a new" +" `group_id` that can be used, for example, to indicate the current " +"training round. Both the `workload_id` and `group_id` enable client nodes" +" to decide whether they want to handle a task or not." msgstr "" -#: ../../source/ref-changelog.md:680 +#: ../../source/ref-changelog.md:430 msgid "" -"Built-in strategies can now perform both federated evaluation (i.e., " -"client-side) and centralized evaluation (i.e., server-side) in the same " -"round. Federated evaluation can be disabled by setting `fraction_eval` to" -" `0.0`." +"**Make Driver API and Fleet API address configurable** " +"([#1637](https://github.com/adap/flower/pull/1637))" msgstr "" -#: ../../source/ref-changelog.md:682 +#: ../../source/ref-changelog.md:432 msgid "" -"**Two new Jupyter Notebook tutorials** " -"([#1141](https://github.com/adap/flower/pull/1141))" +"The (experimental) long-running Flower server (Driver API and Fleet API) " +"can now configure the server address of both Driver API (via `--driver-" +"api-address`) and Fleet API (via `--fleet-api-address`) when starting:" msgstr "" -#: ../../source/ref-changelog.md:684 +#: ../../source/ref-changelog.md:434 msgid "" -"Two Jupyter Notebook tutorials (compatible with Google Colab) explain " -"basic and intermediate Flower features:" +"`flower-server --driver-api-address \"0.0.0.0:8081\" --fleet-api-address " +"\"0.0.0.0:8086\"`" msgstr "" -#: ../../source/ref-changelog.md:686 -msgid "" -"*An Introduction to Federated Learning*: [Open in " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-1" -"-Intro-to-FL-PyTorch.ipynb)" +#: ../../source/ref-changelog.md:436 +msgid "Both IPv4 and IPv6 addresses are supported." 
msgstr "" -#: ../../source/ref-changelog.md:688 +#: ../../source/ref-changelog.md:438 msgid "" -"*Using Strategies in Federated Learning*: [Open in " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-2" -"-Strategies-in-FL-PyTorch.ipynb)" +"**Add new example of Federated Learning using fastai and Flower** " +"([#1598](https://github.com/adap/flower/pull/1598))" msgstr "" -#: ../../source/ref-changelog.md:690 +#: ../../source/ref-changelog.md:440 msgid "" -"**New FedAvgM strategy (Federated Averaging with Server Momentum)** " -"([#1076](https://github.com/adap/flower/pull/1076))" +"A new code example (`quickstart-fastai`) demonstrates federated learning " +"with [fastai](https://www.fast.ai/) and Flower. You can find it here: " +"[quickstart-fastai](https://github.com/adap/flower/tree/main/examples" +"/quickstart-fastai)." msgstr "" -#: ../../source/ref-changelog.md:692 +#: ../../source/ref-changelog.md:442 msgid "" -"The new `FedAvgM` strategy implements Federated Averaging with Server " -"Momentum \\[Hsu et al., 2019\\]." +"**Make Android example compatible with** `flwr >= 1.0.0` **and the latest" +" versions of Android** " +"([#1603](https://github.com/adap/flower/pull/1603))" msgstr "" -#: ../../source/ref-changelog.md:694 +#: ../../source/ref-changelog.md:444 msgid "" -"**New advanced PyTorch code example** " -"([#1007](https://github.com/adap/flower/pull/1007))" +"The Android code example has received a substantial update: the project " +"is compatible with Flower 1.0 (and later), the UI received a full " +"refresh, and the project is updated to be compatible with newer Android " +"tooling." msgstr "" -#: ../../source/ref-changelog.md:696 +#: ../../source/ref-changelog.md:446 msgid "" -"A new code example (`advanced_pytorch`) demonstrates advanced Flower " -"concepts with PyTorch." +"**Add new `FedProx` strategy** " +"([#1619](https://github.com/adap/flower/pull/1619))" msgstr "" -#: ../../source/ref-changelog.md:698 +#: ../../source/ref-changelog.md:448 msgid "" -"**New JAX code example** " -"([#906](https://github.com/adap/flower/pull/906), " -"[#1143](https://github.com/adap/flower/pull/1143))" +"This " +"[strategy](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedprox.py)" +" is almost identical to " +"[`FedAvg`](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedavg.py)," +" but helps users replicate what is described in this " +"[paper](https://arxiv.org/abs/1812.06127). It essentially adds a " +"parameter called `proximal_mu` to regularize the local models with " +"respect to the global models." msgstr "" -#: ../../source/ref-changelog.md:700 +#: ../../source/ref-changelog.md:450 msgid "" -"A new code example (`jax_from_centralized_to_federated`) shows federated " -"learning with JAX and Flower." +"**Add new metrics to telemetry events** " +"([#1640](https://github.com/adap/flower/pull/1640))" msgstr "" -#: ../../source/ref-changelog.md:704 +#: ../../source/ref-changelog.md:452 msgid "" -"New option to keep Ray running if Ray was already initialized in " -"`start_simulation` ([#1177](https://github.com/adap/flower/pull/1177))" +"An updated event structure allows, for example, the clustering of events " +"within the same workload." 
msgstr "" -#: ../../source/ref-changelog.md:705 +#: ../../source/ref-changelog.md:454 msgid "" -"Add support for custom `ClientManager` as a `start_simulation` parameter " -"([#1171](https://github.com/adap/flower/pull/1171))" +"**Add new custom strategy tutorial section** " +"[#1623](https://github.com/adap/flower/pull/1623)" msgstr "" -#: ../../source/ref-changelog.md:706 +#: ../../source/ref-changelog.md:456 msgid "" -"New documentation for [implementing " -"strategies](https://flower.ai/docs/framework/how-to-implement-" -"strategies.html) ([#1097](https://github.com/adap/flower/pull/1097), " -"[#1175](https://github.com/adap/flower/pull/1175))" +"The Flower tutorial now has a new section that covers implementing a " +"custom strategy from scratch: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" +"/tutorial-build-a-strategy-from-scratch-pytorch.ipynb)" msgstr "" -#: ../../source/ref-changelog.md:707 +#: ../../source/ref-changelog.md:458 msgid "" -"New mobile-friendly documentation theme " -"([#1174](https://github.com/adap/flower/pull/1174))" +"**Add new custom serialization tutorial section** " +"([#1622](https://github.com/adap/flower/pull/1622))" msgstr "" -#: ../../source/ref-changelog.md:708 +#: ../../source/ref-changelog.md:460 msgid "" -"Limit version range for (optional) `ray` dependency to include only " -"compatible releases (`>=1.9.2,<1.12.0`) " -"([#1205](https://github.com/adap/flower/pull/1205))" +"The Flower tutorial now has a new section that covers custom " +"serialization: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" +"/tutorial-customize-the-client-pytorch.ipynb)" msgstr "" -#: ../../source/ref-changelog.md:712 +#: ../../source/ref-changelog.md:462 msgid "" -"**Remove deprecated support for Python 3.6** " -"([#871](https://github.com/adap/flower/pull/871))" +"**General improvements** " +"([#1638](https://github.com/adap/flower/pull/1638), " +"[#1634](https://github.com/adap/flower/pull/1634), " +"[#1636](https://github.com/adap/flower/pull/1636), " +"[#1635](https://github.com/adap/flower/pull/1635), " +"[#1633](https://github.com/adap/flower/pull/1633), " +"[#1632](https://github.com/adap/flower/pull/1632), " +"[#1631](https://github.com/adap/flower/pull/1631), " +"[#1630](https://github.com/adap/flower/pull/1630), " +"[#1627](https://github.com/adap/flower/pull/1627), " +"[#1593](https://github.com/adap/flower/pull/1593), " +"[#1616](https://github.com/adap/flower/pull/1616), " +"[#1615](https://github.com/adap/flower/pull/1615), " +"[#1607](https://github.com/adap/flower/pull/1607), " +"[#1609](https://github.com/adap/flower/pull/1609), " +"[#1608](https://github.com/adap/flower/pull/1608), " +"[#1603](https://github.com/adap/flower/pull/1603), " +"[#1590](https://github.com/adap/flower/pull/1590), " +"[#1580](https://github.com/adap/flower/pull/1580), " +"[#1599](https://github.com/adap/flower/pull/1599), " +"[#1600](https://github.com/adap/flower/pull/1600), " +"[#1601](https://github.com/adap/flower/pull/1601), " +"[#1597](https://github.com/adap/flower/pull/1597), " +"[#1595](https://github.com/adap/flower/pull/1595), " +"[#1591](https://github.com/adap/flower/pull/1591), " +"[#1588](https://github.com/adap/flower/pull/1588), " +"[#1589](https://github.com/adap/flower/pull/1589), " +"[#1587](https://github.com/adap/flower/pull/1587), " +"[#1573](https://github.com/adap/flower/pull/1573), " +"[#1581](https://github.com/adap/flower/pull/1581), " 
+"[#1578](https://github.com/adap/flower/pull/1578), " +"[#1574](https://github.com/adap/flower/pull/1574), " +"[#1572](https://github.com/adap/flower/pull/1572), " +"[#1586](https://github.com/adap/flower/pull/1586))" msgstr "" -#: ../../source/ref-changelog.md:713 +#: ../../source/ref-changelog.md:466 msgid "" -"**Remove deprecated KerasClient** " -"([#857](https://github.com/adap/flower/pull/857))" +"**Updated documentation** " +"([#1629](https://github.com/adap/flower/pull/1629), " +"[#1628](https://github.com/adap/flower/pull/1628), " +"[#1620](https://github.com/adap/flower/pull/1620), " +"[#1618](https://github.com/adap/flower/pull/1618), " +"[#1617](https://github.com/adap/flower/pull/1617), " +"[#1613](https://github.com/adap/flower/pull/1613), " +"[#1614](https://github.com/adap/flower/pull/1614))" msgstr "" -#: ../../source/ref-changelog.md:714 +#: ../../source/ref-changelog.md:468 ../../source/ref-changelog.md:535 msgid "" -"**Remove deprecated no-op extra installs** " -"([#973](https://github.com/adap/flower/pull/973))" +"As usual, the documentation has improved quite a bit. It is another step " +"in our effort to make the Flower documentation the best documentation of " +"any project. Stay tuned and as always, feel free to provide feedback!" msgstr "" -#: ../../source/ref-changelog.md:715 +#: ../../source/ref-changelog.md:474 +msgid "v1.2.0 (2023-01-13)" +msgstr "" + +#: ../../source/ref-changelog.md:480 msgid "" -"**Remove deprecated proto fields from** `FitRes` **and** `EvaluateRes` " -"([#869](https://github.com/adap/flower/pull/869))" +"`Adam Narozniak`, `Charles Beauville`, `Daniel J. Beutel`, `Edoardo`, `L." +" Jiang`, `Ragy`, `Taner Topal`, `dannymcy`" msgstr "" -#: ../../source/ref-changelog.md:716 +#: ../../source/ref-changelog.md:484 msgid "" -"**Remove deprecated QffedAvg strategy (replaced by QFedAvg)** " -"([#1107](https://github.com/adap/flower/pull/1107))" +"**Introduce new Flower Baseline: FedAvg MNIST** " +"([#1497](https://github.com/adap/flower/pull/1497), " +"[#1552](https://github.com/adap/flower/pull/1552))" msgstr "" -#: ../../source/ref-changelog.md:717 +#: ../../source/ref-changelog.md:486 msgid "" -"**Remove deprecated DefaultStrategy strategy** " -"([#1142](https://github.com/adap/flower/pull/1142))" +"Over the coming weeks, we will be releasing a number of new reference " +"implementations useful especially to FL newcomers. They will typically " +"revisit well known papers from the literature, and be suitable for " +"integration in your own application or for experimentation, in order to " +"deepen your knowledge of FL in general. Today's release is the first in " +"this series. [Read more.](https://flower.ai/blog/2023-01-12-fl-starter-" +"pack-fedavg-mnist-cnn/)" msgstr "" -#: ../../source/ref-changelog.md:718 +#: ../../source/ref-changelog.md:488 msgid "" -"**Remove deprecated support for eval_fn accuracy return value** " -"([#1142](https://github.com/adap/flower/pull/1142))" +"**Improve GPU support in simulations** " +"([#1555](https://github.com/adap/flower/pull/1555))" msgstr "" -#: ../../source/ref-changelog.md:719 +#: ../../source/ref-changelog.md:490 msgid "" -"**Remove deprecated support for passing initial parameters as NumPy " -"ndarrays** ([#1142](https://github.com/adap/flower/pull/1142))" +"The Ray-based Virtual Client Engine (`start_simulation`) has been updated" +" to improve GPU support. The update includes some of the hard-earned " +"lessons from scaling simulations in GPU cluster environments. 
New " +"defaults make running GPU-based simulations substantially more robust." msgstr "" -#: ../../source/ref-changelog.md:721 -msgid "v0.18.0 (2022-02-28)" +#: ../../source/ref-changelog.md:492 +msgid "" +"**Improve GPU support in Jupyter Notebook tutorials** " +"([#1527](https://github.com/adap/flower/pull/1527), " +"[#1558](https://github.com/adap/flower/pull/1558))" msgstr "" -#: ../../source/ref-changelog.md:725 +#: ../../source/ref-changelog.md:494 msgid "" -"**Improved Virtual Client Engine compatibility with Jupyter Notebook / " -"Google Colab** ([#866](https://github.com/adap/flower/pull/866), " -"[#872](https://github.com/adap/flower/pull/872), " -"[#833](https://github.com/adap/flower/pull/833), " -"[#1036](https://github.com/adap/flower/pull/1036))" +"Some users reported that Jupyter Notebooks have not always been easy to " +"use on GPU instances. We listened and made improvements to all of our " +"Jupyter notebooks! Check out the updated notebooks here:" msgstr "" -#: ../../source/ref-changelog.md:727 +#: ../../source/ref-changelog.md:496 msgid "" -"Simulations (using the Virtual Client Engine through `start_simulation`) " -"now work more smoothly on Jupyter Notebooks (incl. Google Colab) after " -"installing Flower with the `simulation` extra (`pip install " -"flwr[simulation]`)." +"[An Introduction to Federated Learning](https://flower.ai/docs/framework" +"/tutorial-get-started-with-flower-pytorch.html)" msgstr "" -#: ../../source/ref-changelog.md:729 +#: ../../source/ref-changelog.md:497 msgid "" -"**New Jupyter Notebook code example** " -"([#833](https://github.com/adap/flower/pull/833))" +"[Strategies in Federated Learning](https://flower.ai/docs/framework" +"/tutorial-use-a-federated-learning-strategy-pytorch.html)" msgstr "" -#: ../../source/ref-changelog.md:731 +#: ../../source/ref-changelog.md:498 msgid "" -"A new code example (`quickstart_simulation`) demonstrates Flower " -"simulations using the Virtual Client Engine through Jupyter Notebook " -"(incl. Google Colab)." +"[Building a Strategy](https://flower.ai/docs/framework/tutorial-build-a" +"-strategy-from-scratch-pytorch.html)" msgstr "" -#: ../../source/ref-changelog.md:733 +#: ../../source/ref-changelog.md:499 msgid "" -"**Client properties (feature preview)** " -"([#795](https://github.com/adap/flower/pull/795))" +"[Client and NumPyClient](https://flower.ai/docs/framework/tutorial-" +"customize-the-client-pytorch.html)" msgstr "" -#: ../../source/ref-changelog.md:735 +#: ../../source/ref-changelog.md:501 msgid "" -"Clients can implement a new method `get_properties` to enable server-side" -" strategies to query client properties." +"**Introduce optional telemetry** " +"([#1533](https://github.com/adap/flower/pull/1533), " +"[#1544](https://github.com/adap/flower/pull/1544), " +"[#1584](https://github.com/adap/flower/pull/1584))" msgstr "" -#: ../../source/ref-changelog.md:737 +#: ../../source/ref-changelog.md:503 msgid "" -"**Experimental Android support with TFLite** " -"([#865](https://github.com/adap/flower/pull/865))" +"After a [request for " +"feedback](https://github.com/adap/flower/issues/1534) from the community," +" the Flower open-source project introduces optional collection of " +"*anonymous* usage metrics to make well-informed decisions to improve " +"Flower. Doing this enables the Flower team to understand how Flower is " +"used and what challenges users might face." 
msgstr "" -#: ../../source/ref-changelog.md:739 +#: ../../source/ref-changelog.md:505 msgid "" -"Android support has finally arrived in `main`! Flower is both client-" -"agnostic and framework-agnostic by design. One can integrate arbitrary " -"client platforms and with this release, using Flower on Android has " -"become a lot easier." +"**Flower is a friendly framework for collaborative AI and data science.**" +" Staying true to this statement, Flower makes it easy to disable " +"telemetry for users who do not want to share anonymous usage metrics. " +"[Read more.](https://flower.ai/docs/telemetry.html)." msgstr "" -#: ../../source/ref-changelog.md:741 +#: ../../source/ref-changelog.md:507 msgid "" -"The example uses TFLite on the client side, along with a new " -"`FedAvgAndroid` strategy. The Android client and `FedAvgAndroid` are " -"still experimental, but they are a first step towards a fully-fledged " -"Android SDK and a unified `FedAvg` implementation that integrated the new" -" functionality from `FedAvgAndroid`." +"**Introduce (experimental) Driver API** " +"([#1520](https://github.com/adap/flower/pull/1520), " +"[#1525](https://github.com/adap/flower/pull/1525), " +"[#1545](https://github.com/adap/flower/pull/1545), " +"[#1546](https://github.com/adap/flower/pull/1546), " +"[#1550](https://github.com/adap/flower/pull/1550), " +"[#1551](https://github.com/adap/flower/pull/1551), " +"[#1567](https://github.com/adap/flower/pull/1567))" msgstr "" -#: ../../source/ref-changelog.md:743 +#: ../../source/ref-changelog.md:509 msgid "" -"**Make gRPC keepalive time user-configurable and decrease default " -"keepalive time** ([#1069](https://github.com/adap/flower/pull/1069))" +"Flower now has a new (experimental) Driver API which will enable fully " +"programmable, async, and multi-tenant Federated Learning and Federated " +"Analytics applications. Phew, that's a lot! Going forward, the Driver API" +" will be the abstraction that many upcoming features will be built on - " +"and you can start building those things now, too." msgstr "" -#: ../../source/ref-changelog.md:745 +#: ../../source/ref-changelog.md:511 msgid "" -"The default gRPC keepalive time has been reduced to increase the " -"compatibility of Flower with more cloud environments (for example, " -"Microsoft Azure). Users can configure the keepalive time to customize the" -" gRPC stack based on specific requirements." +"The Driver API also enables a new execution mode in which the server runs" +" indefinitely. Multiple individual workloads can run concurrently and " +"start and stop their execution independent of the server. This is " +"especially useful for users who want to deploy Flower in production." msgstr "" -#: ../../source/ref-changelog.md:747 +#: ../../source/ref-changelog.md:513 msgid "" -"**New differential privacy example using Opacus and PyTorch** " -"([#805](https://github.com/adap/flower/pull/805))" +"To learn more, check out the `mt-pytorch` code example. We look forward " +"to you feedback!" msgstr "" -#: ../../source/ref-changelog.md:749 +#: ../../source/ref-changelog.md:515 msgid "" -"A new code example (`opacus`) demonstrates differentially-private " -"federated learning with Opacus, PyTorch, and Flower." 
+"Please note: *The Driver API is still experimental and will likely change" +" significantly over time.*" msgstr "" -#: ../../source/ref-changelog.md:751 +#: ../../source/ref-changelog.md:517 msgid "" -"**New Hugging Face Transformers code example** " -"([#863](https://github.com/adap/flower/pull/863))" +"**Add new Federated Analytics with Pandas example** " +"([#1469](https://github.com/adap/flower/pull/1469), " +"[#1535](https://github.com/adap/flower/pull/1535))" msgstr "" -#: ../../source/ref-changelog.md:753 +#: ../../source/ref-changelog.md:519 msgid "" -"A new code example (`quickstart_huggingface`) demonstrates usage of " -"Hugging Face Transformers with Flower." +"A new code example (`quickstart-pandas`) demonstrates federated analytics" +" with Pandas and Flower. You can find it here: [quickstart-" +"pandas](https://github.com/adap/flower/tree/main/examples/quickstart-" +"pandas)." msgstr "" -#: ../../source/ref-changelog.md:755 +#: ../../source/ref-changelog.md:521 msgid "" -"**New MLCube code example** " -"([#779](https://github.com/adap/flower/pull/779), " -"[#1034](https://github.com/adap/flower/pull/1034), " -"[#1065](https://github.com/adap/flower/pull/1065), " -"[#1090](https://github.com/adap/flower/pull/1090))" +"**Add new strategies: Krum and MultiKrum** " +"([#1481](https://github.com/adap/flower/pull/1481))" msgstr "" -#: ../../source/ref-changelog.md:757 +#: ../../source/ref-changelog.md:523 msgid "" -"A new code example (`quickstart_mlcube`) demonstrates usage of MLCube " -"with Flower." +"Edoardo, a computer science student at the Sapienza University of Rome, " +"contributed a new `Krum` strategy that enables users to easily use Krum " +"and MultiKrum in their workloads." msgstr "" -#: ../../source/ref-changelog.md:759 +#: ../../source/ref-changelog.md:525 msgid "" -"**SSL-enabled server and client** " -"([#842](https://github.com/adap/flower/pull/842), " -"[#844](https://github.com/adap/flower/pull/844), " -"[#845](https://github.com/adap/flower/pull/845), " -"[#847](https://github.com/adap/flower/pull/847), " -"[#993](https://github.com/adap/flower/pull/993), " -"[#994](https://github.com/adap/flower/pull/994))" +"**Update C++ example to be compatible with Flower v1.2.0** " +"([#1495](https://github.com/adap/flower/pull/1495))" msgstr "" -#: ../../source/ref-changelog.md:761 +#: ../../source/ref-changelog.md:527 msgid "" -"SSL enables secure encrypted connections between clients and servers. " -"This release open-sources the Flower secure gRPC implementation to make " -"encrypted communication channels accessible to all Flower users." +"The C++ code example has received a substantial update to make it " +"compatible with the latest version of Flower." 
msgstr "" -#: ../../source/ref-changelog.md:763 +#: ../../source/ref-changelog.md:529 msgid "" -"**Updated** `FedAdam` **and** `FedYogi` **strategies** " -"([#885](https://github.com/adap/flower/pull/885), " -"[#895](https://github.com/adap/flower/pull/895))" +"**General improvements** " +"([#1491](https://github.com/adap/flower/pull/1491), " +"[#1504](https://github.com/adap/flower/pull/1504), " +"[#1506](https://github.com/adap/flower/pull/1506), " +"[#1514](https://github.com/adap/flower/pull/1514), " +"[#1522](https://github.com/adap/flower/pull/1522), " +"[#1523](https://github.com/adap/flower/pull/1523), " +"[#1526](https://github.com/adap/flower/pull/1526), " +"[#1528](https://github.com/adap/flower/pull/1528), " +"[#1547](https://github.com/adap/flower/pull/1547), " +"[#1549](https://github.com/adap/flower/pull/1549), " +"[#1560](https://github.com/adap/flower/pull/1560), " +"[#1564](https://github.com/adap/flower/pull/1564), " +"[#1566](https://github.com/adap/flower/pull/1566))" msgstr "" -#: ../../source/ref-changelog.md:765 +#: ../../source/ref-changelog.md:533 msgid "" -"`FedAdam` and `FedAdam` match the latest version of the Adaptive " -"Federated Optimization paper." +"**Updated documentation** " +"([#1494](https://github.com/adap/flower/pull/1494), " +"[#1496](https://github.com/adap/flower/pull/1496), " +"[#1500](https://github.com/adap/flower/pull/1500), " +"[#1503](https://github.com/adap/flower/pull/1503), " +"[#1505](https://github.com/adap/flower/pull/1505), " +"[#1524](https://github.com/adap/flower/pull/1524), " +"[#1518](https://github.com/adap/flower/pull/1518), " +"[#1519](https://github.com/adap/flower/pull/1519), " +"[#1515](https://github.com/adap/flower/pull/1515))" msgstr "" -#: ../../source/ref-changelog.md:767 +#: ../../source/ref-changelog.md:537 msgid "" -"**Initialize** `start_simulation` **with a list of client IDs** " -"([#860](https://github.com/adap/flower/pull/860))" +"One highlight is the new [first time contributor " +"guide](https://flower.ai/docs/first-time-contributors.html): if you've " +"never contributed on GitHub before, this is the perfect place to start!" msgstr "" -#: ../../source/ref-changelog.md:769 -msgid "" -"`start_simulation` can now be called with a list of client IDs " -"(`clients_ids`, type: `List[str]`). Those IDs will be passed to the " -"`client_fn` whenever a client needs to be initialized, which can make it " -"easier to load data partitions that are not accessible through `int` " -"identifiers." +#: ../../source/ref-changelog.md:543 +msgid "v1.1.0 (2022-10-31)" msgstr "" -#: ../../source/ref-changelog.md:773 +#: ../../source/ref-changelog.md:547 msgid "" -"Update `num_examples` calculation in PyTorch code examples in " -"([#909](https://github.com/adap/flower/pull/909))" +"We would like to give our **special thanks** to all the contributors who " +"made the new version of Flower possible (in `git shortlog` order):" msgstr "" -#: ../../source/ref-changelog.md:774 +#: ../../source/ref-changelog.md:549 msgid "" -"Expose Flower version through `flwr.__version__` " -"([#952](https://github.com/adap/flower/pull/952))" +"`Akis Linardos`, `Christopher S`, `Daniel J. 
Beutel`, `George`, `Jan " +"Schlicht`, `Mohammad Fares`, `Pedro Porto Buarque de Gusmão`, `Philipp " +"Wiesner`, `Rob Luke`, `Taner Topal`, `VasundharaAgarwal`, " +"`danielnugraha`, `edogab33`" msgstr "" -#: ../../source/ref-changelog.md:775 +#: ../../source/ref-changelog.md:553 msgid "" -"`start_server` in `app.py` now returns a `History` object containing " -"metrics from training ([#974](https://github.com/adap/flower/pull/974))" +"**Introduce Differential Privacy wrappers (preview)** " +"([#1357](https://github.com/adap/flower/pull/1357), " +"[#1460](https://github.com/adap/flower/pull/1460))" msgstr "" -#: ../../source/ref-changelog.md:776 +#: ../../source/ref-changelog.md:555 msgid "" -"Make `max_workers` (used by `ThreadPoolExecutor`) configurable " -"([#978](https://github.com/adap/flower/pull/978))" +"The first (experimental) preview of pluggable Differential Privacy " +"wrappers enables easy configuration and usage of differential privacy " +"(DP). The pluggable DP wrappers enable framework-agnostic **and** " +"strategy-agnostic usage of both client-side DP and server-side DP. Head " +"over to the Flower docs, a new explainer goes into more detail." msgstr "" -#: ../../source/ref-changelog.md:777 +#: ../../source/ref-changelog.md:557 msgid "" -"Increase sleep time after server start to three seconds in all code " -"examples ([#1086](https://github.com/adap/flower/pull/1086))" +"**New iOS CoreML code example** " +"([#1289](https://github.com/adap/flower/pull/1289))" msgstr "" -#: ../../source/ref-changelog.md:778 +#: ../../source/ref-changelog.md:559 msgid "" -"Added a new FAQ section to the documentation " -"([#948](https://github.com/adap/flower/pull/948))" +"Flower goes iOS! A massive new code example shows how Flower clients can " +"be built for iOS. The code example contains both Flower iOS SDK " +"components that can be used for many tasks, and one task example running " +"on CoreML." msgstr "" -#: ../../source/ref-changelog.md:779 +#: ../../source/ref-changelog.md:561 msgid "" -"And many more under-the-hood changes, library updates, documentation " -"changes, and tooling improvements!" +"**New FedMedian strategy** " +"([#1461](https://github.com/adap/flower/pull/1461))" msgstr "" -#: ../../source/ref-changelog.md:783 +#: ../../source/ref-changelog.md:563 msgid "" -"**Removed** `flwr_example` **and** `flwr_experimental` **from release " -"build** ([#869](https://github.com/adap/flower/pull/869))" +"The new `FedMedian` strategy implements Federated Median (FedMedian) by " +"[Yin et al., 2018](https://arxiv.org/pdf/1803.01498v1.pdf)." msgstr "" -#: ../../source/ref-changelog.md:785 +#: ../../source/ref-changelog.md:565 msgid "" -"The packages `flwr_example` and `flwr_experimental` have been deprecated " -"since Flower 0.12.0 and they are not longer included in Flower release " -"builds. The associated extras (`baseline`, `examples-pytorch`, `examples-" -"tensorflow`, `http-logger`, `ops`) are now no-op and will be removed in " -"an upcoming release." +"**Log** `Client` **exceptions in Virtual Client Engine** " +"([#1493](https://github.com/adap/flower/pull/1493))" msgstr "" -#: ../../source/ref-changelog.md:787 -msgid "v0.17.0 (2021-09-24)" +#: ../../source/ref-changelog.md:567 +msgid "" +"All `Client` exceptions happening in the VCE are now logged by default " +"and not just exposed to the configured `Strategy` (via the `failures` " +"argument)." 
msgstr "" -#: ../../source/ref-changelog.md:791 +#: ../../source/ref-changelog.md:569 msgid "" -"**Experimental virtual client engine** " -"([#781](https://github.com/adap/flower/pull/781) " -"[#790](https://github.com/adap/flower/pull/790) " -"[#791](https://github.com/adap/flower/pull/791))" +"**Improve Virtual Client Engine internals** " +"([#1401](https://github.com/adap/flower/pull/1401), " +"[#1453](https://github.com/adap/flower/pull/1453))" msgstr "" -#: ../../source/ref-changelog.md:793 +#: ../../source/ref-changelog.md:571 msgid "" -"One of Flower's goals is to enable research at scale. This release " -"enables a first (experimental) peek at a major new feature, codenamed the" -" virtual client engine. Virtual clients enable simulations that scale to " -"a (very) large number of clients on a single machine or compute cluster. " -"The easiest way to test the new functionality is to look at the two new " -"code examples called `quickstart_simulation` and `simulation_pytorch`." +"Some internals of the Virtual Client Engine have been revamped. The VCE " +"now uses Ray 2.0 under the hood, the value type of the `client_resources`" +" dictionary changed to `float` to allow fractions of resources to be " +"allocated." msgstr "" -#: ../../source/ref-changelog.md:795 +#: ../../source/ref-changelog.md:573 msgid "" -"The feature is still experimental, so there's no stability guarantee for " -"the API. It's also not quite ready for prime time and comes with a few " -"known caveats. However, those who are curious are encouraged to try it " -"out and share their thoughts." +"**Support optional** `Client`**/**`NumPyClient` **methods in Virtual " +"Client Engine**" msgstr "" -#: ../../source/ref-changelog.md:797 +#: ../../source/ref-changelog.md:575 msgid "" -"**New built-in strategies** " -"([#828](https://github.com/adap/flower/pull/828) " -"[#822](https://github.com/adap/flower/pull/822))" +"The Virtual Client Engine now has full support for optional `Client` (and" +" `NumPyClient`) methods." msgstr "" -#: ../../source/ref-changelog.md:799 +#: ../../source/ref-changelog.md:577 msgid "" -"FedYogi - Federated learning strategy using Yogi on server-side. " -"Implementation based on https://arxiv.org/abs/2003.00295" +"**Provide type information to packages using** `flwr` " +"([#1377](https://github.com/adap/flower/pull/1377))" msgstr "" -#: ../../source/ref-changelog.md:800 +#: ../../source/ref-changelog.md:579 msgid "" -"FedAdam - Federated learning strategy using Adam on server-side. " -"Implementation based on https://arxiv.org/abs/2003.00295" +"The package `flwr` is now bundled with a `py.typed` file indicating that " +"the package is typed. This enables typing support for projects or " +"packages that use `flwr` by enabling them to improve their code using " +"static type checkers like `mypy`." msgstr "" -#: ../../source/ref-changelog.md:802 +#: ../../source/ref-changelog.md:581 msgid "" -"**New PyTorch Lightning code example** " -"([#617](https://github.com/adap/flower/pull/617))" +"**Updated code example** " +"([#1344](https://github.com/adap/flower/pull/1344), " +"[#1347](https://github.com/adap/flower/pull/1347))" msgstr "" -#: ../../source/ref-changelog.md:804 +#: ../../source/ref-changelog.md:583 msgid "" -"**New Variational Auto-Encoder code example** " -"([#752](https://github.com/adap/flower/pull/752))" +"The code examples covering scikit-learn and PyTorch Lightning have been " +"updated to work with the latest version of Flower." 
msgstr "" -#: ../../source/ref-changelog.md:806 +#: ../../source/ref-changelog.md:585 msgid "" -"**New scikit-learn code example** " -"([#748](https://github.com/adap/flower/pull/748))" +"**Updated documentation** " +"([#1355](https://github.com/adap/flower/pull/1355), " +"[#1558](https://github.com/adap/flower/pull/1558), " +"[#1379](https://github.com/adap/flower/pull/1379), " +"[#1380](https://github.com/adap/flower/pull/1380), " +"[#1381](https://github.com/adap/flower/pull/1381), " +"[#1332](https://github.com/adap/flower/pull/1332), " +"[#1391](https://github.com/adap/flower/pull/1391), " +"[#1403](https://github.com/adap/flower/pull/1403), " +"[#1364](https://github.com/adap/flower/pull/1364), " +"[#1409](https://github.com/adap/flower/pull/1409), " +"[#1419](https://github.com/adap/flower/pull/1419), " +"[#1444](https://github.com/adap/flower/pull/1444), " +"[#1448](https://github.com/adap/flower/pull/1448), " +"[#1417](https://github.com/adap/flower/pull/1417), " +"[#1449](https://github.com/adap/flower/pull/1449), " +"[#1465](https://github.com/adap/flower/pull/1465), " +"[#1467](https://github.com/adap/flower/pull/1467))" msgstr "" -#: ../../source/ref-changelog.md:808 +#: ../../source/ref-changelog.md:587 msgid "" -"**New experimental TensorBoard strategy** " -"([#789](https://github.com/adap/flower/pull/789))" +"There have been so many documentation updates that it doesn't even make " +"sense to list them individually." msgstr "" -#: ../../source/ref-changelog.md:812 +#: ../../source/ref-changelog.md:589 msgid "" -"Improved advanced TensorFlow code example " -"([#769](https://github.com/adap/flower/pull/769))" +"**Restructured documentation** " +"([#1387](https://github.com/adap/flower/pull/1387))" msgstr "" -#: ../../source/ref-changelog.md:813 +#: ../../source/ref-changelog.md:591 msgid "" -"Warning when `min_available_clients` is misconfigured " -"([#830](https://github.com/adap/flower/pull/830))" +"The documentation has been restructured to make it easier to navigate. " +"This is just the first step in a larger effort to make the Flower " +"documentation the best documentation of any project ever. Stay tuned!" msgstr "" -#: ../../source/ref-changelog.md:814 +#: ../../source/ref-changelog.md:593 msgid "" -"Improved gRPC server docs " -"([#841](https://github.com/adap/flower/pull/841))" +"**Open in Colab button** " +"([#1389](https://github.com/adap/flower/pull/1389))" msgstr "" -#: ../../source/ref-changelog.md:815 +#: ../../source/ref-changelog.md:595 msgid "" -"Improved error message in `NumPyClient` " -"([#851](https://github.com/adap/flower/pull/851))" +"The four parts of the Flower Federated Learning Tutorial now come with a " +"new `Open in Colab` button. No need to install anything on your local " +"machine, you can now use and learn about Flower in your browser, it's " +"only a single click away." 
msgstr "" -#: ../../source/ref-changelog.md:816 +#: ../../source/ref-changelog.md:597 msgid "" -"Improved PyTorch quickstart code example " -"([#852](https://github.com/adap/flower/pull/852))" +"**Improved tutorial** ([#1468](https://github.com/adap/flower/pull/1468)," +" [#1470](https://github.com/adap/flower/pull/1470), " +"[#1472](https://github.com/adap/flower/pull/1472), " +"[#1473](https://github.com/adap/flower/pull/1473), " +"[#1474](https://github.com/adap/flower/pull/1474), " +"[#1475](https://github.com/adap/flower/pull/1475))" msgstr "" -#: ../../source/ref-changelog.md:820 +#: ../../source/ref-changelog.md:599 msgid "" -"**Disabled final distributed evaluation** " -"([#800](https://github.com/adap/flower/pull/800))" +"The Flower Federated Learning Tutorial has two brand-new parts covering " +"custom strategies (still WIP) and the distinction between `Client` and " +"`NumPyClient`. The existing parts one and two have also been improved " +"(many small changes and fixes)." msgstr "" -#: ../../source/ref-changelog.md:822 -msgid "" -"Prior behaviour was to perform a final round of distributed evaluation on" -" all connected clients, which is often not required (e.g., when using " -"server-side evaluation). The prior behaviour can be enabled by passing " -"`force_final_distributed_eval=True` to `start_server`." +#: ../../source/ref-changelog.md:605 +msgid "v1.0.0 (2022-07-28)" msgstr "" -#: ../../source/ref-changelog.md:824 -msgid "" -"**Renamed q-FedAvg strategy** " -"([#802](https://github.com/adap/flower/pull/802))" +#: ../../source/ref-changelog.md:607 +msgid "Highlights" msgstr "" -#: ../../source/ref-changelog.md:826 -msgid "" -"The strategy named `QffedAvg` was renamed to `QFedAvg` to better reflect " -"the notation given in the original paper (q-FFL is the optimization " -"objective, q-FedAvg is the proposed solver). Note the original (now " -"deprecated) `QffedAvg` class is still available for compatibility reasons" -" (it will be removed in a future release)." +#: ../../source/ref-changelog.md:609 +msgid "Stable **Virtual Client Engine** (accessible via `start_simulation`)" msgstr "" -#: ../../source/ref-changelog.md:828 -msgid "" -"**Deprecated and renamed code example** `simulation_pytorch` **to** " -"`simulation_pytorch_legacy` " -"([#791](https://github.com/adap/flower/pull/791))" +#: ../../source/ref-changelog.md:610 +msgid "All `Client`/`NumPyClient` methods are now optional" msgstr "" -#: ../../source/ref-changelog.md:830 -msgid "" -"This example has been replaced by a new example. The new example is based" -" on the experimental virtual client engine, which will become the new " -"default way of doing most types of large-scale simulations in Flower. The" -" existing example was kept for reference purposes, but it might be " -"removed in the future." 
+#: ../../source/ref-changelog.md:611 +msgid "Configurable `get_parameters`" msgstr "" -#: ../../source/ref-changelog.md:832 -msgid "v0.16.0 (2021-05-11)" +#: ../../source/ref-changelog.md:612 +msgid "" +"Tons of small API cleanups resulting in a more coherent developer " +"experience" msgstr "" -#: ../../source/ref-changelog.md:836 +#: ../../source/ref-changelog.md:616 msgid "" -"**New built-in strategies** " -"([#549](https://github.com/adap/flower/pull/549))" +"We would like to give our **special thanks** to all the contributors who " +"made Flower 1.0 possible (in reverse [GitHub " +"Contributors](https://github.com/adap/flower/graphs/contributors) order):" msgstr "" -#: ../../source/ref-changelog.md:838 -msgid "(abstract) FedOpt" +#: ../../source/ref-changelog.md:618 +msgid "" +"[@rtaiello](https://github.com/rtaiello), " +"[@g-pichler](https://github.com/g-pichler), [@rob-" +"luke](https://github.com/rob-luke), [@andreea-zaharia](https://github.com" +"/andreea-zaharia), [@kinshukdua](https://github.com/kinshukdua), " +"[@nfnt](https://github.com/nfnt), " +"[@tatiana-s](https://github.com/tatiana-s), " +"[@TParcollet](https://github.com/TParcollet), " +"[@vballoli](https://github.com/vballoli), " +"[@negedng](https://github.com/negedng), " +"[@RISHIKESHAVAN](https://github.com/RISHIKESHAVAN), " +"[@hei411](https://github.com/hei411), " +"[@SebastianSpeitel](https://github.com/SebastianSpeitel), " +"[@AmitChaulwar](https://github.com/AmitChaulwar), " +"[@Rubiel1](https://github.com/Rubiel1), [@FANTOME-PAN](https://github.com" +"/FANTOME-PAN), [@Rono-BC](https://github.com/Rono-BC), " +"[@lbhm](https://github.com/lbhm), " +"[@sishtiaq](https://github.com/sishtiaq), " +"[@remde](https://github.com/remde), [@Jueun-Park](https://github.com" +"/Jueun-Park), [@architjen](https://github.com/architjen), " +"[@PratikGarai](https://github.com/PratikGarai), " +"[@mrinaald](https://github.com/mrinaald), " +"[@zliel](https://github.com/zliel), " +"[@MeiruiJiang](https://github.com/MeiruiJiang), " +"[@sancarlim](https://github.com/sancarlim), " +"[@gubertoli](https://github.com/gubertoli), " +"[@Vingt100](https://github.com/Vingt100), " +"[@MakGulati](https://github.com/MakGulati), " +"[@cozek](https://github.com/cozek), " +"[@jafermarq](https://github.com/jafermarq), " +"[@sisco0](https://github.com/sisco0), " +"[@akhilmathurs](https://github.com/akhilmathurs), " +"[@CanTuerk](https://github.com/CanTuerk), " +"[@mariaboerner1987](https://github.com/mariaboerner1987), " +"[@pedropgusmao](https://github.com/pedropgusmao), " +"[@tanertopal](https://github.com/tanertopal), " +"[@danieljanes](https://github.com/danieljanes)." msgstr "" -#: ../../source/ref-changelog.md:841 +#: ../../source/ref-changelog.md:622 msgid "" -"**Custom metrics for server and strategies** " -"([#717](https://github.com/adap/flower/pull/717))" +"**All arguments must be passed as keyword arguments** " +"([#1338](https://github.com/adap/flower/pull/1338))" msgstr "" -#: ../../source/ref-changelog.md:843 +#: ../../source/ref-changelog.md:624 msgid "" -"The Flower server is now fully task-agnostic, all remaining instances of " -"task-specific metrics (such as `accuracy`) have been replaced by custom " -"metrics dictionaries. Flower 0.15 introduced the capability to pass a " -"dictionary containing custom metrics from client to server. As of this " -"release, custom metrics replace task-specific metrics on the server." +"Pass all arguments as keyword arguments, positional arguments are not " +"longer supported. 
Code that uses positional arguments (e.g., " +"`start_client(\"127.0.0.1:8080\", FlowerClient())`) must add the keyword " +"for each positional argument (e.g., " +"`start_client(server_address=\"127.0.0.1:8080\", " +"client=FlowerClient())`)." msgstr "" -#: ../../source/ref-changelog.md:845 +#: ../../source/ref-changelog.md:626 msgid "" -"Custom metric dictionaries are now used in two user-facing APIs: they are" -" returned from Strategy methods `aggregate_fit`/`aggregate_evaluate` and " -"they enable evaluation functions passed to built-in strategies (via " -"`eval_fn`) to return more than two evaluation metrics. Strategies can " -"even return *aggregated* metrics dictionaries for the server to keep " -"track of." +"**Introduce configuration object** `ServerConfig` **in** `start_server` " +"**and** `start_simulation` " +"([#1317](https://github.com/adap/flower/pull/1317))" msgstr "" -#: ../../source/ref-changelog.md:847 +#: ../../source/ref-changelog.md:628 msgid "" -"Strategy implementations should migrate their `aggregate_fit` and " -"`aggregate_evaluate` methods to the new return type (e.g., by simply " -"returning an empty `{}`), server-side evaluation functions should migrate" -" from `return loss, accuracy` to `return loss, {\"accuracy\": accuracy}`." +"Instead of a config dictionary `{\"num_rounds\": 3, \"round_timeout\": " +"600.0}`, `start_server` and `start_simulation` now expect a configuration" +" object of type `flwr.server.ServerConfig`. `ServerConfig` takes the same" +" arguments as the previous config dict, but it makes writing type-" +"safe code easier and the default parameter values more transparent." msgstr "" -#: ../../source/ref-changelog.md:849 +#: ../../source/ref-changelog.md:630 msgid "" -"Flower 0.15-style return types are deprecated (but still supported), " -"compatibility will be removed in a future release." +"**Rename built-in strategy parameters for clarity** " +"([#1334](https://github.com/adap/flower/pull/1334))" msgstr "" -#: ../../source/ref-changelog.md:851 +#: ../../source/ref-changelog.md:632 msgid "" -"**Migration warnings for deprecated functionality** " -"([#690](https://github.com/adap/flower/pull/690))" +"The following built-in strategy parameters were renamed to improve " +"readability and consistency with other API's:" msgstr "" -#: ../../source/ref-changelog.md:853 -msgid "" -"Earlier versions of Flower were often migrated to new APIs, while " -"maintaining compatibility with legacy APIs. This release introduces " -"detailed warning messages if usage of deprecated APIs is detected. The " -"new warning messages often provide details on how to migrate to more " -"recent APIs, thus easing the transition from one release to another."
+#: ../../source/ref-changelog.md:634 +msgid "`fraction_eval` --> `fraction_evaluate`" msgstr "" -#: ../../source/ref-changelog.md:855 -msgid "" -"Improved docs and docstrings " -"([#691](https://github.com/adap/flower/pull/691) " -"[#692](https://github.com/adap/flower/pull/692) " -"[#713](https://github.com/adap/flower/pull/713))" +#: ../../source/ref-changelog.md:635 +msgid "`min_eval_clients` --> `min_evaluate_clients`" msgstr "" -#: ../../source/ref-changelog.md:857 -msgid "MXNet example and documentation" +#: ../../source/ref-changelog.md:636 +msgid "`eval_fn` --> `evaluate_fn`" msgstr "" -#: ../../source/ref-changelog.md:859 +#: ../../source/ref-changelog.md:638 msgid "" -"FedBN implementation in example PyTorch: From Centralized To Federated " -"([#696](https://github.com/adap/flower/pull/696) " -"[#702](https://github.com/adap/flower/pull/702) " -"[#705](https://github.com/adap/flower/pull/705))" +"**Update default arguments of built-in strategies** " +"([#1278](https://github.com/adap/flower/pull/1278))" msgstr "" -#: ../../source/ref-changelog.md:863 +#: ../../source/ref-changelog.md:640 msgid "" -"**Serialization-agnostic server** " -"([#721](https://github.com/adap/flower/pull/721))" +"All built-in strategies now use `fraction_fit=1.0` and " +"`fraction_evaluate=1.0`, which means they select *all* currently " +"available clients for training and evaluation. Projects that relied on " +"the previous default values can get the previous behaviour by " +"initializing the strategy in the following way:" msgstr "" -#: ../../source/ref-changelog.md:865 -msgid "" -"The Flower server is now fully serialization-agnostic. Prior usage of " -"class `Weights` (which represents parameters as deserialized NumPy " -"ndarrays) was replaced by class `Parameters` (e.g., in `Strategy`). " -"`Parameters` objects are fully serialization-agnostic and represents " -"parameters as byte arrays, the `tensor_type` attributes indicates how " -"these byte arrays should be interpreted (e.g., for " -"serialization/deserialization)." +#: ../../source/ref-changelog.md:642 +msgid "`strategy = FedAvg(fraction_fit=0.1, fraction_evaluate=0.1)`" msgstr "" -#: ../../source/ref-changelog.md:867 +#: ../../source/ref-changelog.md:644 msgid "" -"Built-in strategies implement this approach by handling serialization and" -" deserialization to/from `Weights` internally. Custom/3rd-party Strategy " -"implementations should update to the slightly changed Strategy method " -"definitions. Strategy authors can consult PR " -"[#721](https://github.com/adap/flower/pull/721) to see how strategies can" -" easily migrate to the new format." +"**Add** `server_round` **to** `Strategy.evaluate` " +"([#1334](https://github.com/adap/flower/pull/1334))" msgstr "" -#: ../../source/ref-changelog.md:869 +#: ../../source/ref-changelog.md:646 msgid "" -"Deprecated `flwr.server.Server.evaluate`, use " -"`flwr.server.Server.evaluate_round` instead " -"([#717](https://github.com/adap/flower/pull/717))" +"The `Strategy` method `evaluate` now receives the current round of " +"federated learning/evaluation as the first parameter." 
msgstr "" -#: ../../source/ref-changelog.md:871 -msgid "v0.15.0 (2021-03-12)" +#: ../../source/ref-changelog.md:648 +msgid "" +"**Add** `server_round` **and** `config` **parameters to** `evaluate_fn` " +"([#1334](https://github.com/adap/flower/pull/1334))" msgstr "" -#: ../../source/ref-changelog.md:875 +#: ../../source/ref-changelog.md:650 msgid "" -"**Server-side parameter initialization** " -"([#658](https://github.com/adap/flower/pull/658))" +"The `evaluate_fn` passed to built-in strategies like `FedAvg` now takes " +"three parameters: (1) The current round of federated learning/evaluation " +"(`server_round`), (2) the model parameters to evaluate (`parameters`), " +"and (3) a config dictionary (`config`)." msgstr "" -#: ../../source/ref-changelog.md:877 +#: ../../source/ref-changelog.md:652 msgid "" -"Model parameters can now be initialized on the server-side. Server-side " -"parameter initialization works via a new `Strategy` method called " -"`initialize_parameters`." +"**Rename** `rnd` **to** `server_round` " +"([#1321](https://github.com/adap/flower/pull/1321))" msgstr "" -#: ../../source/ref-changelog.md:879 +#: ../../source/ref-changelog.md:654 msgid "" -"Built-in strategies support a new constructor argument called " -"`initial_parameters` to set the initial parameters. Built-in strategies " -"will provide these initial parameters to the server on startup and then " -"delete them to free the memory afterwards." +"Several Flower methods and functions (`evaluate_fn`, `configure_fit`, " +"`aggregate_fit`, `configure_evaluate`, `aggregate_evaluate`) receive the " +"current round of federated learning/evaluation as their first parameter. " +"To improve reaability and avoid confusion with *random*, this parameter " +"has been renamed from `rnd` to `server_round`." msgstr "" -#: ../../source/ref-changelog.md:898 +#: ../../source/ref-changelog.md:656 msgid "" -"If no initial parameters are provided to the strategy, the server will " -"continue to use the current behaviour (namely, it will ask one of the " -"connected clients for its parameters and use these as the initial global " -"parameters)." +"**Move** `flwr.dataset` **to** `flwr_baselines` " +"([#1273](https://github.com/adap/flower/pull/1273))" msgstr "" -#: ../../source/ref-changelog.md:900 -msgid "Deprecations" +#: ../../source/ref-changelog.md:658 +msgid "The experimental package `flwr.dataset` was migrated to Flower Baselines." msgstr "" -#: ../../source/ref-changelog.md:902 +#: ../../source/ref-changelog.md:660 msgid "" -"Deprecate `flwr.server.strategy.DefaultStrategy` (migrate to " -"`flwr.server.strategy.FedAvg`, which is equivalent)" -msgstr "" - -#: ../../source/ref-changelog.md:904 -msgid "v0.14.0 (2021-02-18)" +"**Remove experimental strategies** " +"([#1280](https://github.com/adap/flower/pull/1280))" msgstr "" -#: ../../source/ref-changelog.md:908 +#: ../../source/ref-changelog.md:662 msgid "" -"**Generalized** `Client.fit` **and** `Client.evaluate` **return values** " -"([#610](https://github.com/adap/flower/pull/610) " -"[#572](https://github.com/adap/flower/pull/572) " -"[#633](https://github.com/adap/flower/pull/633))" +"Remove unmaintained experimental strategies (`FastAndSlow`, `FedFSv0`, " +"`FedFSv1`)." msgstr "" -#: ../../source/ref-changelog.md:910 +#: ../../source/ref-changelog.md:664 msgid "" -"Clients can now return an additional dictionary mapping `str` keys to " -"values of the following types: `bool`, `bytes`, `float`, `int`, `str`. 
" -"This means one can return almost arbitrary values from `fit`/`evaluate` " -"and make use of them on the server side!" +"**Rename** `Weights` **to** `NDArrays` " +"([#1258](https://github.com/adap/flower/pull/1258), " +"[#1259](https://github.com/adap/flower/pull/1259))" msgstr "" -#: ../../source/ref-changelog.md:912 +#: ../../source/ref-changelog.md:666 msgid "" -"This improvement also allowed for more consistent return types between " -"`fit` and `evaluate`: `evaluate` should now return a tuple `(float, int, " -"dict)` representing the loss, number of examples, and a dictionary " -"holding arbitrary problem-specific values like accuracy." +"`flwr.common.Weights` was renamed to `flwr.common.NDArrays` to better " +"capture what this type is all about." msgstr "" -#: ../../source/ref-changelog.md:914 +#: ../../source/ref-changelog.md:668 msgid "" -"In case you wondered: this feature is compatible with existing projects, " -"the additional dictionary return value is optional. New code should " -"however migrate to the new return types to be compatible with upcoming " -"Flower releases (`fit`: `List[np.ndarray], int, Dict[str, Scalar]`, " -"`evaluate`: `float, int, Dict[str, Scalar]`). See the example below for " -"details." +"**Remove antiquated** `force_final_distributed_eval` **from** " +"`start_server` ([#1258](https://github.com/adap/flower/pull/1258), " +"[#1259](https://github.com/adap/flower/pull/1259))" msgstr "" -#: ../../source/ref-changelog.md:916 +#: ../../source/ref-changelog.md:670 msgid "" -"*Code example:* note the additional dictionary return values in both " -"`FlwrClient.fit` and `FlwrClient.evaluate`:" +"The `start_server` parameter `force_final_distributed_eval` has long been" +" a historic artefact, in this release it is finally gone for good." msgstr "" -#: ../../source/ref-changelog.md:931 +#: ../../source/ref-changelog.md:672 msgid "" -"**Generalized** `config` **argument in** `Client.fit` **and** " -"`Client.evaluate` ([#595](https://github.com/adap/flower/pull/595))" +"**Make** `get_parameters` **configurable** " +"([#1242](https://github.com/adap/flower/pull/1242))" msgstr "" -#: ../../source/ref-changelog.md:933 +#: ../../source/ref-changelog.md:674 msgid "" -"The `config` argument used to be of type `Dict[str, str]`, which means " -"that dictionary values were expected to be strings. The new release " -"generalizes this to enable values of the following types: `bool`, " -"`bytes`, `float`, `int`, `str`." +"The `get_parameters` method now accepts a configuration dictionary, just " +"like `get_properties`, `fit`, and `evaluate`." msgstr "" -#: ../../source/ref-changelog.md:935 +#: ../../source/ref-changelog.md:676 msgid "" -"This means one can now pass almost arbitrary values to `fit`/`evaluate` " -"using the `config` dictionary. Yay, no more `str(epochs)` on the server-" -"side and `int(config[\"epochs\"])` on the client side!" +"**Replace** `num_rounds` **in** `start_simulation` **with new** `config` " +"**parameter** ([#1281](https://github.com/adap/flower/pull/1281))" msgstr "" -#: ../../source/ref-changelog.md:937 +#: ../../source/ref-changelog.md:678 msgid "" -"*Code example:* note that the `config` dictionary now contains non-`str` " -"values in both `Client.fit` and `Client.evaluate`:" -msgstr "" - -#: ../../source/ref-changelog.md:954 -msgid "v0.13.0 (2021-01-08)" +"The `start_simulation` function now accepts a configuration dictionary " +"`config` instead of the `num_rounds` integer. 
This improves the "
+"consistency between `start_simulation` and `start_server` and makes "
+"transitioning between the two easier."
 msgstr ""
 
-#: ../../source/ref-changelog.md:958
+#: ../../source/ref-changelog.md:682
 msgid ""
-"New example: PyTorch From Centralized To Federated "
-"([#549](https://github.com/adap/flower/pull/549))"
+"**Support Python 3.10** "
+"([#1320](https://github.com/adap/flower/pull/1320))"
 msgstr ""
 
-#: ../../source/ref-changelog.md:959
-msgid "Improved documentation"
+#: ../../source/ref-changelog.md:684
+msgid ""
+"The previous Flower release introduced experimental support for Python "
+"3.10; this release declares Python 3.10 support as stable."
 msgstr ""
 
-#: ../../source/ref-changelog.md:960
-msgid "New documentation theme ([#551](https://github.com/adap/flower/pull/551))"
+#: ../../source/ref-changelog.md:686
+msgid ""
+"**Make all** `Client` **and** `NumPyClient` **methods optional** "
+"([#1260](https://github.com/adap/flower/pull/1260), "
+"[#1277](https://github.com/adap/flower/pull/1277))"
 msgstr ""
 
-#: ../../source/ref-changelog.md:961
-msgid "New API reference ([#554](https://github.com/adap/flower/pull/554))"
+#: ../../source/ref-changelog.md:688
+msgid ""
+"The `Client`/`NumPyClient` methods `get_properties`, `get_parameters`, "
+"`fit`, and `evaluate` are all optional. This enables writing clients that"
+" implement, for example, only `fit`, but no other method. No need to "
+"implement `evaluate` when using centralized evaluation!"
 msgstr ""
 
-#: ../../source/ref-changelog.md:962
+#: ../../source/ref-changelog.md:690
 msgid ""
-"Updated examples documentation "
-"([#549](https://github.com/adap/flower/pull/549))"
+"**Enable passing a** `Server` **instance to** `start_simulation` "
+"([#1281](https://github.com/adap/flower/pull/1281))"
 msgstr ""
 
-#: ../../source/ref-changelog.md:963
+#: ../../source/ref-changelog.md:692
 msgid ""
-"Removed obsolete documentation "
-"([#548](https://github.com/adap/flower/pull/548))"
+"Similar to `start_server`, `start_simulation` now accepts a full `Server`"
+" instance. This enables users to heavily customize the execution of "
+"experiments and opens the door to running, for example, async FL using the"
+" Virtual Client Engine."
 msgstr ""
 
-#: ../../source/ref-changelog.md:965
-msgid "Bugfix:"
+#: ../../source/ref-changelog.md:694
+msgid ""
+"**Update code examples** "
+"([#1291](https://github.com/adap/flower/pull/1291), "
+"[#1286](https://github.com/adap/flower/pull/1286), "
+"[#1282](https://github.com/adap/flower/pull/1282))"
 msgstr ""
 
-#: ../../source/ref-changelog.md:967
+#: ../../source/ref-changelog.md:696
 msgid ""
-"`Server.fit` does not disconnect clients when finished, disconnecting the"
-" clients is now handled in `flwr.server.start_server` "
-"([#553](https://github.com/adap/flower/pull/553) "
-"[#540](https://github.com/adap/flower/issues/540))."
+"Many code examples received small or even large maintenance updates, " +"among them are" msgstr "" -#: ../../source/ref-changelog.md:969 -msgid "v0.12.0 (2020-12-07)" +#: ../../source/ref-changelog.md:698 +msgid "`scikit-learn`" msgstr "" -#: ../../source/ref-changelog.md:971 ../../source/ref-changelog.md:987 -msgid "Important changes:" +#: ../../source/ref-changelog.md:699 +msgid "`simulation_pytorch`" msgstr "" -#: ../../source/ref-changelog.md:973 -msgid "" -"Added an example for embedded devices " -"([#507](https://github.com/adap/flower/pull/507))" +#: ../../source/ref-changelog.md:700 +msgid "`quickstart_pytorch`" msgstr "" -#: ../../source/ref-changelog.md:974 -msgid "" -"Added a new NumPyClient (in addition to the existing KerasClient) " -"([#504](https://github.com/adap/flower/pull/504) " -"[#508](https://github.com/adap/flower/pull/508))" +#: ../../source/ref-changelog.md:701 +msgid "`quickstart_simulation`" msgstr "" -#: ../../source/ref-changelog.md:975 -msgid "" -"Deprecated `flwr_example` package and started to migrate examples into " -"the top-level `examples` directory " -"([#494](https://github.com/adap/flower/pull/494) " -"[#512](https://github.com/adap/flower/pull/512))" +#: ../../source/ref-changelog.md:702 +msgid "`quickstart_tensorflow`" msgstr "" -#: ../../source/ref-changelog.md:977 -msgid "v0.11.0 (2020-11-30)" +#: ../../source/ref-changelog.md:703 +msgid "`advanced_tensorflow`" msgstr "" -#: ../../source/ref-changelog.md:979 -msgid "Incompatible changes:" +#: ../../source/ref-changelog.md:705 +msgid "" +"**Remove the obsolete simulation example** " +"([#1328](https://github.com/adap/flower/pull/1328))" msgstr "" -#: ../../source/ref-changelog.md:981 +#: ../../source/ref-changelog.md:707 msgid "" -"Renamed strategy methods " -"([#486](https://github.com/adap/flower/pull/486)) to unify the naming of " -"Flower's public APIs. Other public methods/functions (e.g., every method " -"in `Client`, but also `Strategy.evaluate`) do not use the `on_` prefix, " -"which is why we're removing it from the four methods in Strategy. 
To "
-"migrate rename the following `Strategy` methods accordingly:"
+"Removes the obsolete `simulation` example and renames "
+"`quickstart_simulation` to `simulation_tensorflow` so it fits with the "
+"naming of `simulation_pytorch`"
 msgstr ""
 
-#: ../../source/ref-changelog.md:982
-msgid "`on_configure_evaluate` => `configure_evaluate`"
+#: ../../source/ref-changelog.md:709
+msgid ""
+"**Update documentation** "
+"([#1223](https://github.com/adap/flower/pull/1223), "
+"[#1209](https://github.com/adap/flower/pull/1209), "
+"[#1251](https://github.com/adap/flower/pull/1251), "
+"[#1257](https://github.com/adap/flower/pull/1257), "
+"[#1267](https://github.com/adap/flower/pull/1267), "
+"[#1268](https://github.com/adap/flower/pull/1268), "
+"[#1300](https://github.com/adap/flower/pull/1300), "
+"[#1304](https://github.com/adap/flower/pull/1304), "
+"[#1305](https://github.com/adap/flower/pull/1305), "
+"[#1307](https://github.com/adap/flower/pull/1307))"
 msgstr ""
 
-#: ../../source/ref-changelog.md:983
-msgid "`on_aggregate_evaluate` => `aggregate_evaluate`"
+#: ../../source/ref-changelog.md:711
+msgid ""
+"One substantial documentation update fixes multiple smaller rendering "
+"issues, makes titles more succinct to improve navigation, removes a "
+"deprecated library, updates documentation dependencies, includes the "
+"`flwr.common` module in the API reference, includes support for markdown-"
+"based documentation, migrates the changelog from `.rst` to `.md`, and "
+"fixes a number of smaller details!"
 msgstr ""
 
-#: ../../source/ref-changelog.md:984
-msgid "`on_configure_fit` => `configure_fit`"
+#: ../../source/ref-changelog.md:713 ../../source/ref-changelog.md:768
+#: ../../source/ref-changelog.md:837 ../../source/ref-changelog.md:876
+msgid "**Minor updates**"
 msgstr ""
 
-#: ../../source/ref-changelog.md:985
-msgid "`on_aggregate_fit` => `aggregate_fit`"
+#: ../../source/ref-changelog.md:715
+msgid ""
+"Add round number to fit and evaluate log messages "
+"([#1266](https://github.com/adap/flower/pull/1266))"
 msgstr ""
 
-#: ../../source/ref-changelog.md:989
+#: ../../source/ref-changelog.md:716
 msgid ""
-"Deprecated `DefaultStrategy` "
-"([#479](https://github.com/adap/flower/pull/479)). To migrate use "
-"`FedAvg` instead."
+"Add secure gRPC connection to the `advanced_tensorflow` code example "
+"([#847](https://github.com/adap/flower/pull/847))"
 msgstr ""
 
-#: ../../source/ref-changelog.md:990
+#: ../../source/ref-changelog.md:717
 msgid ""
-"Simplified examples and baselines "
-"([#484](https://github.com/adap/flower/pull/484))."
+"Update developer tooling "
+"([#1231](https://github.com/adap/flower/pull/1231), "
+"[#1276](https://github.com/adap/flower/pull/1276), "
+"[#1301](https://github.com/adap/flower/pull/1301), "
+"[#1310](https://github.com/adap/flower/pull/1310))"
 msgstr ""
 
-#: ../../source/ref-changelog.md:991
+#: ../../source/ref-changelog.md:718
 msgid ""
-"Removed presently unused `on_conclude_round` from strategy interface "
-"([#483](https://github.com/adap/flower/pull/483))."
+"Rename ProtoBuf messages to improve consistency "
+"([#1214](https://github.com/adap/flower/pull/1214), "
+"[#1258](https://github.com/adap/flower/pull/1258), "
+"[#1259](https://github.com/adap/flower/pull/1259))"
 msgstr ""
 
-#: ../../source/ref-changelog.md:992
-msgid ""
-"Set minimal Python version to 3.6.1 instead of 3.6.9 "
-"([#471](https://github.com/adap/flower/pull/471))."
+#: ../../source/ref-changelog.md:720 +msgid "v0.19.0 (2022-05-18)" msgstr "" -#: ../../source/ref-changelog.md:993 +#: ../../source/ref-changelog.md:724 msgid "" -"Improved `Strategy` docstrings " -"([#470](https://github.com/adap/flower/pull/470))." +"**Flower Baselines (preview): FedOpt, FedBN, FedAvgM** " +"([#919](https://github.com/adap/flower/pull/919), " +"[#1127](https://github.com/adap/flower/pull/1127), " +"[#914](https://github.com/adap/flower/pull/914))" msgstr "" -#: ../../source/ref-example-projects.rst:2 -msgid "Example projects" +#: ../../source/ref-changelog.md:726 +msgid "" +"The first preview release of Flower Baselines has arrived! We're " +"kickstarting Flower Baselines with implementations of FedOpt (FedYogi, " +"FedAdam, FedAdagrad), FedBN, and FedAvgM. Check the documentation on how " +"to use [Flower Baselines](https://flower.ai/docs/using-baselines.html). " +"With this first preview release we're also inviting the community to " +"[contribute their own baselines](https://flower.ai/docs/baselines/how-to-" +"contribute-baselines.html)." msgstr "" -#: ../../source/ref-example-projects.rst:4 +#: ../../source/ref-changelog.md:728 msgid "" -"Flower comes with a number of usage examples. The examples demonstrate " -"how Flower can be used to federate different kinds of existing machine " -"learning pipelines, usually leveraging popular machine learning " -"frameworks such as `PyTorch `_ or `TensorFlow " -"`_." +"**C++ client SDK (preview) and code example** " +"([#1111](https://github.com/adap/flower/pull/1111))" msgstr "" -#: ../../source/ref-example-projects.rst:10 +#: ../../source/ref-changelog.md:730 msgid "" -"The following examples are available as standalone projects. Quickstart " -"TensorFlow/Keras ---------------------------" +"Preview support for Flower clients written in C++. The C++ preview " +"includes a Flower client SDK and a quickstart code example that " +"demonstrates a simple C++ client using the SDK." msgstr "" -#: ../../source/ref-example-projects.rst:14 +#: ../../source/ref-changelog.md:732 msgid "" -"The TensorFlow/Keras quickstart example shows CIFAR-10 image " -"classification with MobileNetV2:" +"**Add experimental support for Python 3.10 and Python 3.11** " +"([#1135](https://github.com/adap/flower/pull/1135))" msgstr "" -#: ../../source/ref-example-projects.rst:17 +#: ../../source/ref-changelog.md:734 msgid "" -"`Quickstart TensorFlow (Code) " -"`_" +"Python 3.10 is the latest stable release of Python and Python 3.11 is due" +" to be released in October. This Flower release adds experimental support" +" for both Python versions." msgstr "" -#: ../../source/ref-example-projects.rst:18 -msgid ":doc:`Quickstart TensorFlow (Tutorial) `" +#: ../../source/ref-changelog.md:736 +msgid "" +"**Aggregate custom metrics through user-provided functions** " +"([#1144](https://github.com/adap/flower/pull/1144))" msgstr "" -#: ../../source/ref-example-projects.rst:19 +#: ../../source/ref-changelog.md:738 msgid "" -"`Quickstart TensorFlow (Blog Post) `_" +"Custom metrics (e.g., `accuracy`) can now be aggregated without having to" +" customize the strategy. Built-in strategies support two new arguments, " +"`fit_metrics_aggregation_fn` and `evaluate_metrics_aggregation_fn`, that " +"allow passing custom metric aggregation functions." 
msgstr "" -#: ../../source/ref-example-projects.rst:23 -#: ../../source/tutorial-quickstart-pytorch.rst:5 -msgid "Quickstart PyTorch" +#: ../../source/ref-changelog.md:740 +msgid "" +"**User-configurable round timeout** " +"([#1162](https://github.com/adap/flower/pull/1162))" msgstr "" -#: ../../source/ref-example-projects.rst:25 +#: ../../source/ref-changelog.md:742 msgid "" -"The PyTorch quickstart example shows CIFAR-10 image classification with a" -" simple Convolutional Neural Network:" +"A new configuration value allows the round timeout to be set for " +"`start_server` and `start_simulation`. If the `config` dictionary " +"contains a `round_timeout` key (with a `float` value in seconds), the " +"server will wait *at least* `round_timeout` seconds before it closes the " +"connection." msgstr "" -#: ../../source/ref-example-projects.rst:28 +#: ../../source/ref-changelog.md:744 msgid "" -"`Quickstart PyTorch (Code) " -"`_" +"**Enable both federated evaluation and centralized evaluation to be used " +"at the same time in all built-in strategies** " +"([#1091](https://github.com/adap/flower/pull/1091))" msgstr "" -#: ../../source/ref-example-projects.rst:29 -msgid ":doc:`Quickstart PyTorch (Tutorial) `" +#: ../../source/ref-changelog.md:746 +msgid "" +"Built-in strategies can now perform both federated evaluation (i.e., " +"client-side) and centralized evaluation (i.e., server-side) in the same " +"round. Federated evaluation can be disabled by setting `fraction_eval` to" +" `0.0`." msgstr "" -#: ../../source/ref-example-projects.rst:33 -msgid "PyTorch: From Centralized To Federated" +#: ../../source/ref-changelog.md:748 +msgid "" +"**Two new Jupyter Notebook tutorials** " +"([#1141](https://github.com/adap/flower/pull/1141))" msgstr "" -#: ../../source/ref-example-projects.rst:35 +#: ../../source/ref-changelog.md:750 msgid "" -"This example shows how a regular PyTorch project can be federated using " -"Flower:" +"Two Jupyter Notebook tutorials (compatible with Google Colab) explain " +"basic and intermediate Flower features:" msgstr "" -#: ../../source/ref-example-projects.rst:37 +#: ../../source/ref-changelog.md:752 msgid "" -"`PyTorch: From Centralized To Federated (Code) " -"`_" +"*An Introduction to Federated Learning*: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-1" +"-Intro-to-FL-PyTorch.ipynb)" msgstr "" -#: ../../source/ref-example-projects.rst:38 +#: ../../source/ref-changelog.md:754 msgid "" -":doc:`PyTorch: From Centralized To Federated (Tutorial) `" +"*Using Strategies in Federated Learning*: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-2" +"-Strategies-in-FL-PyTorch.ipynb)" msgstr "" -#: ../../source/ref-example-projects.rst:42 -msgid "Federated Learning on Raspberry Pi and Nvidia Jetson" +#: ../../source/ref-changelog.md:756 +msgid "" +"**New FedAvgM strategy (Federated Averaging with Server Momentum)** " +"([#1076](https://github.com/adap/flower/pull/1076))" msgstr "" -#: ../../source/ref-example-projects.rst:44 +#: ../../source/ref-changelog.md:758 msgid "" -"This example shows how Flower can be used to build a federated learning " -"system that run across Raspberry Pi and Nvidia Jetson:" +"The new `FedAvgM` strategy implements Federated Averaging with Server " +"Momentum \\[Hsu et al., 2019\\]." 
msgstr "" -#: ../../source/ref-example-projects.rst:46 +#: ../../source/ref-changelog.md:760 msgid "" -"`Federated Learning on Raspberry Pi and Nvidia Jetson (Code) " -"`_" +"**New advanced PyTorch code example** " +"([#1007](https://github.com/adap/flower/pull/1007))" msgstr "" -#: ../../source/ref-example-projects.rst:47 +#: ../../source/ref-changelog.md:762 msgid "" -"`Federated Learning on Raspberry Pi and Nvidia Jetson (Blog Post) " -"`_" +"A new code example (`advanced_pytorch`) demonstrates advanced Flower " +"concepts with PyTorch." msgstr "" -#: ../../source/ref-faq.rst:4 +#: ../../source/ref-changelog.md:764 msgid "" -"This page collects answers to commonly asked questions about Federated " -"Learning with Flower." +"**New JAX code example** " +"([#906](https://github.com/adap/flower/pull/906), " +"[#1143](https://github.com/adap/flower/pull/1143))" msgstr "" -#: ../../source/ref-faq.rst -msgid ":fa:`eye,mr-1` Can Flower run on Jupyter Notebooks / Google Colab?" +#: ../../source/ref-changelog.md:766 +msgid "" +"A new code example (`jax_from_centralized_to_federated`) shows federated " +"learning with JAX and Flower." msgstr "" -#: ../../source/ref-faq.rst:8 +#: ../../source/ref-changelog.md:770 msgid "" -"Yes, it can! Flower even comes with a few under-the-hood optimizations to" -" make it work even better on Colab. Here's a quickstart example:" +"New option to keep Ray running if Ray was already initialized in " +"`start_simulation` ([#1177](https://github.com/adap/flower/pull/1177))" msgstr "" -#: ../../source/ref-faq.rst:10 +#: ../../source/ref-changelog.md:771 msgid "" -"`Flower simulation PyTorch " -"`_" +"Add support for custom `ClientManager` as a `start_simulation` parameter " +"([#1171](https://github.com/adap/flower/pull/1171))" msgstr "" -#: ../../source/ref-faq.rst:11 +#: ../../source/ref-changelog.md:772 msgid "" -"`Flower simulation TensorFlow/Keras " -"`_" +"New documentation for [implementing " +"strategies](https://flower.ai/docs/framework/how-to-implement-" +"strategies.html) ([#1097](https://github.com/adap/flower/pull/1097), " +"[#1175](https://github.com/adap/flower/pull/1175))" msgstr "" -#: ../../source/ref-faq.rst -msgid ":fa:`eye,mr-1` How can I run Federated Learning on a Raspberry Pi?" +#: ../../source/ref-changelog.md:773 +msgid "" +"New mobile-friendly documentation theme " +"([#1174](https://github.com/adap/flower/pull/1174))" msgstr "" -#: ../../source/ref-faq.rst:15 +#: ../../source/ref-changelog.md:774 msgid "" -"Find the `blog post about federated learning on embedded device here " -"`_" -" and the corresponding `GitHub code example " -"`_." +"Limit version range for (optional) `ray` dependency to include only " +"compatible releases (`>=1.9.2,<1.12.0`) " +"([#1205](https://github.com/adap/flower/pull/1205))" msgstr "" -#: ../../source/ref-faq.rst -msgid ":fa:`eye,mr-1` Does Flower support federated learning on Android devices?" +#: ../../source/ref-changelog.md:778 +msgid "" +"**Remove deprecated support for Python 3.6** " +"([#871](https://github.com/adap/flower/pull/871))" msgstr "" -#: ../../source/ref-faq.rst:19 +#: ../../source/ref-changelog.md:779 msgid "" -"Yes, it does. 
Please take a look at our `blog post " -"`_ or check out the code examples:" +"**Remove deprecated KerasClient** " +"([#857](https://github.com/adap/flower/pull/857))" msgstr "" -#: ../../source/ref-faq.rst:21 +#: ../../source/ref-changelog.md:780 msgid "" -"`Android Kotlin example `_" +"**Remove deprecated no-op extra installs** " +"([#973](https://github.com/adap/flower/pull/973))" msgstr "" -#: ../../source/ref-faq.rst:22 -msgid "`Android Java example `_" +#: ../../source/ref-changelog.md:781 +msgid "" +"**Remove deprecated proto fields from** `FitRes` **and** `EvaluateRes` " +"([#869](https://github.com/adap/flower/pull/869))" msgstr "" -#: ../../source/ref-faq.rst -msgid ":fa:`eye,mr-1` Can I combine federated learning with blockchain?" +#: ../../source/ref-changelog.md:782 +msgid "" +"**Remove deprecated QffedAvg strategy (replaced by QFedAvg)** " +"([#1107](https://github.com/adap/flower/pull/1107))" msgstr "" -#: ../../source/ref-faq.rst:26 +#: ../../source/ref-changelog.md:783 msgid "" -"Yes, of course. A list of available examples using Flower within a " -"blockchain environment is available here:" +"**Remove deprecated DefaultStrategy strategy** " +"([#1142](https://github.com/adap/flower/pull/1142))" msgstr "" -#: ../../source/ref-faq.rst:28 +#: ../../source/ref-changelog.md:784 msgid "" -"`Flower meets Nevermined GitHub Repository `_." +"**Remove deprecated support for eval_fn accuracy return value** " +"([#1142](https://github.com/adap/flower/pull/1142))" msgstr "" -#: ../../source/ref-faq.rst:29 +#: ../../source/ref-changelog.md:785 msgid "" -"`Flower meets Nevermined YouTube video " -"`_." +"**Remove deprecated support for passing initial parameters as NumPy " +"ndarrays** ([#1142](https://github.com/adap/flower/pull/1142))" msgstr "" -#: ../../source/ref-faq.rst:30 -msgid "" -"`Flower meets KOSMoS `_." +#: ../../source/ref-changelog.md:787 +msgid "v0.18.0 (2022-02-28)" msgstr "" -#: ../../source/ref-faq.rst:31 +#: ../../source/ref-changelog.md:791 msgid "" -"`Flower meets Talan blog post `_ ." +"**Improved Virtual Client Engine compatibility with Jupyter Notebook / " +"Google Colab** ([#866](https://github.com/adap/flower/pull/866), " +"[#872](https://github.com/adap/flower/pull/872), " +"[#833](https://github.com/adap/flower/pull/833), " +"[#1036](https://github.com/adap/flower/pull/1036))" msgstr "" -#: ../../source/ref-faq.rst:32 +#: ../../source/ref-changelog.md:793 msgid "" -"`Flower meets Talan GitHub Repository " -"`_ ." +"Simulations (using the Virtual Client Engine through `start_simulation`) " +"now work more smoothly on Jupyter Notebooks (incl. Google Colab) after " +"installing Flower with the `simulation` extra (`pip install " +"flwr[simulation]`)." msgstr "" -#: ../../source/ref-telemetry.md:1 -msgid "Telemetry" +#: ../../source/ref-changelog.md:795 +msgid "" +"**New Jupyter Notebook code example** " +"([#833](https://github.com/adap/flower/pull/833))" msgstr "" -#: ../../source/ref-telemetry.md:3 +#: ../../source/ref-changelog.md:797 msgid "" -"The Flower open-source project collects **anonymous** usage metrics to " -"make well-informed decisions to improve Flower. Doing this enables the " -"Flower team to understand how Flower is used and what challenges users " -"might face." +"A new code example (`quickstart_simulation`) demonstrates Flower " +"simulations using the Virtual Client Engine through Jupyter Notebook " +"(incl. Google Colab)." 
msgstr "" -#: ../../source/ref-telemetry.md:5 +#: ../../source/ref-changelog.md:799 msgid "" -"**Flower is a friendly framework for collaborative AI and data science.**" -" Staying true to this statement, Flower makes it easy to disable " -"telemetry for users that do not want to share anonymous usage metrics." +"**Client properties (feature preview)** " +"([#795](https://github.com/adap/flower/pull/795))" msgstr "" -#: ../../source/ref-telemetry.md:7 -msgid "Principles" +#: ../../source/ref-changelog.md:801 +msgid "" +"Clients can implement a new method `get_properties` to enable server-side" +" strategies to query client properties." msgstr "" -#: ../../source/ref-telemetry.md:9 -msgid "We follow strong principles guarding anonymous usage metrics collection:" +#: ../../source/ref-changelog.md:803 +msgid "" +"**Experimental Android support with TFLite** " +"([#865](https://github.com/adap/flower/pull/865))" msgstr "" -#: ../../source/ref-telemetry.md:11 +#: ../../source/ref-changelog.md:805 msgid "" -"**Optional:** You will always be able to disable telemetry; read on to " -"learn “[How to opt-out](#how-to-opt-out)”." +"Android support has finally arrived in `main`! Flower is both client-" +"agnostic and framework-agnostic by design. One can integrate arbitrary " +"client platforms and with this release, using Flower on Android has " +"become a lot easier." msgstr "" -#: ../../source/ref-telemetry.md:12 +#: ../../source/ref-changelog.md:807 msgid "" -"**Anonymous:** The reported usage metrics are anonymous and do not " -"contain any personally identifiable information (PII). See “[Collected " -"metrics](#collected-metrics)” to understand what metrics are being " -"reported." +"The example uses TFLite on the client side, along with a new " +"`FedAvgAndroid` strategy. The Android client and `FedAvgAndroid` are " +"still experimental, but they are a first step towards a fully-fledged " +"Android SDK and a unified `FedAvg` implementation that integrated the new" +" functionality from `FedAvgAndroid`." msgstr "" -#: ../../source/ref-telemetry.md:13 +#: ../../source/ref-changelog.md:809 msgid "" -"**Transparent:** You can easily inspect what anonymous metrics are being " -"reported; see the section “[How to inspect what is being reported](#how-" -"to-inspect-what-is-being-reported)”" +"**Make gRPC keepalive time user-configurable and decrease default " +"keepalive time** ([#1069](https://github.com/adap/flower/pull/1069))" msgstr "" -#: ../../source/ref-telemetry.md:14 +#: ../../source/ref-changelog.md:811 msgid "" -"**Open for feedback:** You can always reach out to us if you have " -"feedback; see the section “[How to contact us](#how-to-contact-us)” for " -"details." +"The default gRPC keepalive time has been reduced to increase the " +"compatibility of Flower with more cloud environments (for example, " +"Microsoft Azure). Users can configure the keepalive time to customize the" +" gRPC stack based on specific requirements." msgstr "" -#: ../../source/ref-telemetry.md:16 -msgid "How to opt-out" +#: ../../source/ref-changelog.md:813 +msgid "" +"**New differential privacy example using Opacus and PyTorch** " +"([#805](https://github.com/adap/flower/pull/805))" msgstr "" -#: ../../source/ref-telemetry.md:18 +#: ../../source/ref-changelog.md:815 msgid "" -"When Flower starts, it will check for an environment variable called " -"`FLWR_TELEMETRY_ENABLED`. Telemetry can easily be disabled by setting " -"`FLWR_TELEMETRY_ENABLED=0`. 
Assuming you are starting a Flower server or " -"client, simply do so by prepending your command as in:" +"A new code example (`opacus`) demonstrates differentially-private " +"federated learning with Opacus, PyTorch, and Flower." msgstr "" -#: ../../source/ref-telemetry.md:24 +#: ../../source/ref-changelog.md:817 msgid "" -"Alternatively, you can export `FLWR_TELEMETRY_ENABLED=0` in, for example," -" `.bashrc` (or whatever configuration file applies to your environment) " -"to disable Flower telemetry permanently." +"**New Hugging Face Transformers code example** " +"([#863](https://github.com/adap/flower/pull/863))" msgstr "" -#: ../../source/ref-telemetry.md:26 -msgid "Collected metrics" +#: ../../source/ref-changelog.md:819 +msgid "" +"A new code example (`quickstart_huggingface`) demonstrates usage of " +"Hugging Face Transformers with Flower." msgstr "" -#: ../../source/ref-telemetry.md:28 -msgid "Flower telemetry collects the following metrics:" +#: ../../source/ref-changelog.md:821 +msgid "" +"**New MLCube code example** " +"([#779](https://github.com/adap/flower/pull/779), " +"[#1034](https://github.com/adap/flower/pull/1034), " +"[#1065](https://github.com/adap/flower/pull/1065), " +"[#1090](https://github.com/adap/flower/pull/1090))" msgstr "" -#: ../../source/ref-telemetry.md:30 +#: ../../source/ref-changelog.md:823 msgid "" -"**Flower version.** Understand which versions of Flower are currently " -"being used. This helps us to decide whether we should invest effort into " -"releasing a patch version for an older version of Flower or instead use " -"the bandwidth to build new features." +"A new code example (`quickstart_mlcube`) demonstrates usage of MLCube " +"with Flower." msgstr "" -#: ../../source/ref-telemetry.md:32 +#: ../../source/ref-changelog.md:825 msgid "" -"**Operating system.** Enables us to answer questions such as: *Should we " -"create more guides for Linux, macOS, or Windows?*" +"**SSL-enabled server and client** " +"([#842](https://github.com/adap/flower/pull/842), " +"[#844](https://github.com/adap/flower/pull/844), " +"[#845](https://github.com/adap/flower/pull/845), " +"[#847](https://github.com/adap/flower/pull/847), " +"[#993](https://github.com/adap/flower/pull/993), " +"[#994](https://github.com/adap/flower/pull/994))" msgstr "" -#: ../../source/ref-telemetry.md:34 +#: ../../source/ref-changelog.md:827 msgid "" -"**Python version.** Knowing the Python version helps us, for example, to " -"decide whether we should invest effort into supporting old versions of " -"Python or stop supporting them and start taking advantage of new Python " -"features." +"SSL enables secure encrypted connections between clients and servers. " +"This release open-sources the Flower secure gRPC implementation to make " +"encrypted communication channels accessible to all Flower users." msgstr "" -#: ../../source/ref-telemetry.md:36 +#: ../../source/ref-changelog.md:829 msgid "" -"**Hardware properties.** Understanding the hardware environment that " -"Flower is being used in helps to decide whether we should, for example, " -"put more effort into supporting low-resource environments." 
+"**Updated** `FedAdam` **and** `FedYogi` **strategies** "
+"([#885](https://github.com/adap/flower/pull/885), "
+"[#895](https://github.com/adap/flower/pull/895))"
 msgstr ""
 
-#: ../../source/ref-telemetry.md:38
+#: ../../source/ref-changelog.md:831
 msgid ""
-"**Execution mode.** Knowing what execution mode Flower starts in enables "
-"us to understand how heavily certain features are being used and better "
-"prioritize based on that."
+"`FedAdam` and `FedYogi` match the latest version of the Adaptive "
+"Federated Optimization paper."
 msgstr ""
 
-#: ../../source/ref-telemetry.md:40
+#: ../../source/ref-changelog.md:833
 msgid ""
-"**Cluster.** Flower telemetry assigns a random in-memory cluster ID each "
-"time a Flower workload starts. This allows us to understand which device "
-"types not only start Flower workloads but also successfully complete "
-"them."
+"**Initialize** `start_simulation` **with a list of client IDs** "
+"([#860](https://github.com/adap/flower/pull/860))"
 msgstr ""
 
-#: ../../source/ref-telemetry.md:42
+#: ../../source/ref-changelog.md:835
 msgid ""
-"**Source.** Flower telemetry tries to store a random source ID in "
-"`~/.flwr/source` the first time a telemetry event is generated. The "
-"source ID is important to identify whether an issue is recurring or "
-"whether an issue is triggered by multiple clusters running concurrently "
-"(which often happens in simulation). For example, if a device runs "
-"multiple workloads at the same time, and this results in an issue, then, "
-"in order to reproduce the issue, multiple workloads must be started at "
-"the same time."
+"`start_simulation` can now be called with a list of client IDs "
+"(`clients_ids`, type: `List[str]`). Those IDs will be passed to the "
+"`client_fn` whenever a client needs to be initialized, which can make it "
+"easier to load data partitions that are not accessible through `int` "
+"identifiers."
 msgstr ""
 
-#: ../../source/ref-telemetry.md:44
+#: ../../source/ref-changelog.md:839
 msgid ""
-"You may delete the source ID at any time. If you wish for all events "
-"logged under a specific source ID to be deleted, you can send a deletion "
-"request mentioning the source ID to `telemetry@flower.ai`. All events "
-"related to that source ID will then be permanently deleted."
+"Update `num_examples` calculation in PyTorch code examples in "
+"([#909](https://github.com/adap/flower/pull/909))"
 msgstr ""
 
-#: ../../source/ref-telemetry.md:46
+#: ../../source/ref-changelog.md:840
 msgid ""
-"We will not collect any personally identifiable information. If you think"
-" any of the metrics collected could be misused in any way, please [get in"
-" touch with us](#how-to-contact-us). We will update this page to reflect "
-"any changes to the metrics collected and publish changes in the "
-"changelog."
+"Expose Flower version through `flwr.__version__` "
+"([#952](https://github.com/adap/flower/pull/952))"
 msgstr ""
 
-#: ../../source/ref-telemetry.md:48
+#: ../../source/ref-changelog.md:841
 msgid ""
-"If you think other metrics would be helpful for us to better guide our "
-"decisions, please let us know! We will carefully review them; if we are "
-"confident that they do not compromise user privacy, we may add them."
+"`start_server` in `app.py` now returns a `History` object containing " +"metrics from training ([#974](https://github.com/adap/flower/pull/974))" msgstr "" -#: ../../source/ref-telemetry.md:50 -msgid "How to inspect what is being reported" +#: ../../source/ref-changelog.md:842 +msgid "" +"Make `max_workers` (used by `ThreadPoolExecutor`) configurable " +"([#978](https://github.com/adap/flower/pull/978))" msgstr "" -#: ../../source/ref-telemetry.md:52 +#: ../../source/ref-changelog.md:843 msgid "" -"We wanted to make it very easy for you to inspect what anonymous usage " -"metrics are reported. You can view all the reported telemetry information" -" by setting the environment variable `FLWR_TELEMETRY_LOGGING=1`. Logging " -"is disabled by default. You may use logging independently from " -"`FLWR_TELEMETRY_ENABLED` so that you can inspect the telemetry feature " -"without sending any metrics." +"Increase sleep time after server start to three seconds in all code " +"examples ([#1086](https://github.com/adap/flower/pull/1086))" msgstr "" -#: ../../source/ref-telemetry.md:58 +#: ../../source/ref-changelog.md:844 msgid "" -"The inspect Flower telemetry without sending any anonymous usage metrics," -" use both environment variables:" +"Added a new FAQ section to the documentation " +"([#948](https://github.com/adap/flower/pull/948))" msgstr "" -#: ../../source/ref-telemetry.md:64 -msgid "How to contact us" +#: ../../source/ref-changelog.md:845 +msgid "" +"And many more under-the-hood changes, library updates, documentation " +"changes, and tooling improvements!" msgstr "" -#: ../../source/ref-telemetry.md:66 +#: ../../source/ref-changelog.md:849 msgid "" -"We want to hear from you. If you have any feedback or ideas on how to " -"improve the way we handle anonymous usage metrics, reach out to us via " -"[Slack](https://flower.ai/join-slack/) (channel `#telemetry`) or email " -"(`telemetry@flower.ai`)." +"**Removed** `flwr_example` **and** `flwr_experimental` **from release " +"build** ([#869](https://github.com/adap/flower/pull/869))" msgstr "" -#: ../../source/tutorial-quickstart-android.rst:-1 +#: ../../source/ref-changelog.md:851 msgid "" -"Read this Federated Learning quickstart tutorial for creating an Android " -"app using Flower." +"The packages `flwr_example` and `flwr_experimental` have been deprecated " +"since Flower 0.12.0 and they are not longer included in Flower release " +"builds. The associated extras (`baseline`, `examples-pytorch`, `examples-" +"tensorflow`, `http-logger`, `ops`) are now no-op and will be removed in " +"an upcoming release." msgstr "" -#: ../../source/tutorial-quickstart-android.rst:5 -msgid "Quickstart Android" +#: ../../source/ref-changelog.md:853 +msgid "v0.17.0 (2021-09-24)" msgstr "" -#: ../../source/tutorial-quickstart-android.rst:10 +#: ../../source/ref-changelog.md:857 msgid "" -"Let's build a federated learning system using TFLite and Flower on " -"Android!" +"**Experimental virtual client engine** " +"([#781](https://github.com/adap/flower/pull/781) " +"[#790](https://github.com/adap/flower/pull/790) " +"[#791](https://github.com/adap/flower/pull/791))" msgstr "" -#: ../../source/tutorial-quickstart-android.rst:12 +#: ../../source/ref-changelog.md:859 msgid "" -"Please refer to the `full code example " -"`_ to learn " -"more." +"One of Flower's goals is to enable research at scale. This release " +"enables a first (experimental) peek at a major new feature, codenamed the" +" virtual client engine. 
Virtual clients enable simulations that scale to " +"a (very) large number of clients on a single machine or compute cluster. " +"The easiest way to test the new functionality is to look at the two new " +"code examples called `quickstart_simulation` and `simulation_pytorch`." msgstr "" -#: ../../source/tutorial-quickstart-fastai.rst:-1 +#: ../../source/ref-changelog.md:861 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with FastAI to train a vision model on CIFAR-10." -msgstr "" - -#: ../../source/tutorial-quickstart-fastai.rst:5 -msgid "Quickstart fastai" +"The feature is still experimental, so there's no stability guarantee for " +"the API. It's also not quite ready for prime time and comes with a few " +"known caveats. However, those who are curious are encouraged to try it " +"out and share their thoughts." msgstr "" -#: ../../source/tutorial-quickstart-fastai.rst:10 -msgid "Let's build a federated learning system using fastai and Flower!" +#: ../../source/ref-changelog.md:863 +msgid "" +"**New built-in strategies** " +"([#828](https://github.com/adap/flower/pull/828) " +"[#822](https://github.com/adap/flower/pull/822))" msgstr "" -#: ../../source/tutorial-quickstart-fastai.rst:12 +#: ../../source/ref-changelog.md:865 msgid "" -"Please refer to the `full code example " -"`_ " -"to learn more." +"FedYogi - Federated learning strategy using Yogi on server-side. " +"Implementation based on https://arxiv.org/abs/2003.00295" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:-1 +#: ../../source/ref-changelog.md:866 msgid "" -"Check out this Federating Learning quickstart tutorial for using Flower " -"with HuggingFace Transformers in order to fine-tune an LLM." +"FedAdam - Federated learning strategy using Adam on server-side. " +"Implementation based on https://arxiv.org/abs/2003.00295" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:5 -msgid "Quickstart 🤗 Transformers" +#: ../../source/ref-changelog.md:868 +msgid "" +"**New PyTorch Lightning code example** " +"([#617](https://github.com/adap/flower/pull/617))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:10 +#: ../../source/ref-changelog.md:870 msgid "" -"Let's build a federated learning system using Hugging Face Transformers " -"and Flower!" +"**New Variational Auto-Encoder code example** " +"([#752](https://github.com/adap/flower/pull/752))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:12 +#: ../../source/ref-changelog.md:872 msgid "" -"We will leverage Hugging Face to federate the training of language models" -" over multiple clients using Flower. More specifically, we will fine-tune" -" a pre-trained Transformer model (distilBERT) for sequence classification" -" over a dataset of IMDB ratings. The end goal is to detect if a movie " -"rating is positive or negative." +"**New scikit-learn code example** " +"([#748](https://github.com/adap/flower/pull/748))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:18 -msgid "Dependencies" +#: ../../source/ref-changelog.md:874 +msgid "" +"**New experimental TensorBoard strategy** " +"([#789](https://github.com/adap/flower/pull/789))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:20 +#: ../../source/ref-changelog.md:878 msgid "" -"To follow along this tutorial you will need to install the following " -"packages: :code:`datasets`, :code:`evaluate`, :code:`flwr`, " -":code:`torch`, and :code:`transformers`. 
This can be done using " -":code:`pip`:" +"Improved advanced TensorFlow code example " +"([#769](https://github.com/adap/flower/pull/769))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:30 -msgid "Standard Hugging Face workflow" +#: ../../source/ref-changelog.md:879 +msgid "" +"Warning when `min_available_clients` is misconfigured " +"([#830](https://github.com/adap/flower/pull/830))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:33 -msgid "Handling the data" +#: ../../source/ref-changelog.md:880 +msgid "" +"Improved gRPC server docs " +"([#841](https://github.com/adap/flower/pull/841))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:35 +#: ../../source/ref-changelog.md:881 msgid "" -"To fetch the IMDB dataset, we will use Hugging Face's :code:`datasets` " -"library. We then need to tokenize the data and create :code:`PyTorch` " -"dataloaders, this is all done in the :code:`load_data` function:" +"Improved error message in `NumPyClient` " +"([#851](https://github.com/adap/flower/pull/851))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:81 -msgid "Training and testing the model" +#: ../../source/ref-changelog.md:882 +msgid "" +"Improved PyTorch quickstart code example " +"([#852](https://github.com/adap/flower/pull/852))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:83 +#: ../../source/ref-changelog.md:886 msgid "" -"Once we have a way of creating our trainloader and testloader, we can " -"take care of the training and testing. This is very similar to any " -":code:`PyTorch` training or testing loop:" +"**Disabled final distributed evaluation** " +"([#800](https://github.com/adap/flower/pull/800))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:121 -msgid "Creating the model itself" +#: ../../source/ref-changelog.md:888 +msgid "" +"Prior behaviour was to perform a final round of distributed evaluation on" +" all connected clients, which is often not required (e.g., when using " +"server-side evaluation). The prior behaviour can be enabled by passing " +"`force_final_distributed_eval=True` to `start_server`." msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:123 +#: ../../source/ref-changelog.md:890 msgid "" -"To create the model itself, we will just load the pre-trained distillBERT" -" model using Hugging Face’s :code:`AutoModelForSequenceClassification` :" +"**Renamed q-FedAvg strategy** " +"([#802](https://github.com/adap/flower/pull/802))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:136 -msgid "Federating the example" +#: ../../source/ref-changelog.md:892 +msgid "" +"The strategy named `QffedAvg` was renamed to `QFedAvg` to better reflect " +"the notation given in the original paper (q-FFL is the optimization " +"objective, q-FedAvg is the proposed solver). Note the original (now " +"deprecated) `QffedAvg` class is still available for compatibility reasons" +" (it will be removed in a future release)." 
msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:139 -msgid "Creating the IMDBClient" +#: ../../source/ref-changelog.md:894 +msgid "" +"**Deprecated and renamed code example** `simulation_pytorch` **to** " +"`simulation_pytorch_legacy` " +"([#791](https://github.com/adap/flower/pull/791))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:141 +#: ../../source/ref-changelog.md:896 msgid "" -"To federate our example to multiple clients, we first need to write our " -"Flower client class (inheriting from :code:`flwr.client.NumPyClient`). " -"This is very easy, as our model is a standard :code:`PyTorch` model:" +"This example has been replaced by a new example. The new example is based" +" on the experimental virtual client engine, which will become the new " +"default way of doing most types of large-scale simulations in Flower. The" +" existing example was kept for reference purposes, but it might be " +"removed in the future." msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:169 -msgid "" -"The :code:`get_parameters` function lets the server get the client's " -"parameters. Inversely, the :code:`set_parameters` function allows the " -"server to send its parameters to the client. Finally, the :code:`fit` " -"function trains the model locally for the client, and the " -":code:`evaluate` function tests the model locally and returns the " -"relevant metrics." +#: ../../source/ref-changelog.md:898 +msgid "v0.16.0 (2021-05-11)" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:175 -msgid "Starting the server" +#: ../../source/ref-changelog.md:902 +msgid "" +"**New built-in strategies** " +"([#549](https://github.com/adap/flower/pull/549))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:177 -msgid "" -"Now that we have a way to instantiate clients, we need to create our " -"server in order to aggregate the results. Using Flower, this can be done " -"very easily by first choosing a strategy (here, we are using " -":code:`FedAvg`, which will define the global weights as the average of " -"all the clients' weights at each round) and then using the " -":code:`flwr.server.start_server` function:" +#: ../../source/ref-changelog.md:904 +msgid "(abstract) FedOpt" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:205 +#: ../../source/ref-changelog.md:907 msgid "" -"The :code:`weighted_average` function is there to provide a way to " -"aggregate the metrics distributed amongst the clients (basically this " -"allows us to display a nice average accuracy and loss for every round)." +"**Custom metrics for server and strategies** " +"([#717](https://github.com/adap/flower/pull/717))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:209 -msgid "Putting everything together" +#: ../../source/ref-changelog.md:909 +msgid "" +"The Flower server is now fully task-agnostic, all remaining instances of " +"task-specific metrics (such as `accuracy`) have been replaced by custom " +"metrics dictionaries. Flower 0.15 introduced the capability to pass a " +"dictionary containing custom metrics from client to server. As of this " +"release, custom metrics replace task-specific metrics on the server." 
msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:211 -msgid "We can now start client instances using:" +#: ../../source/ref-changelog.md:911 +msgid "" +"Custom metric dictionaries are now used in two user-facing APIs: they are" +" returned from Strategy methods `aggregate_fit`/`aggregate_evaluate` and " +"they enable evaluation functions passed to built-in strategies (via " +"`eval_fn`) to return more than two evaluation metrics. Strategies can " +"even return *aggregated* metrics dictionaries for the server to keep " +"track of." msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:221 +#: ../../source/ref-changelog.md:913 msgid "" -"And they will be able to connect to the server and start the federated " -"training." +"Strategy implementations should migrate their `aggregate_fit` and " +"`aggregate_evaluate` methods to the new return type (e.g., by simply " +"returning an empty `{}`), server-side evaluation functions should migrate" +" from `return loss, accuracy` to `return loss, {\"accuracy\": accuracy}`." msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:223 +#: ../../source/ref-changelog.md:915 msgid "" -"If you want to check out everything put together, you should check out " -"the `full code example `_ ." +"Flower 0.15-style return types are deprecated (but still supported), " +"compatibility will be removed in a future release." msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:226 +#: ../../source/ref-changelog.md:917 msgid "" -"Of course, this is a very basic example, and a lot can be added or " -"modified, it was just to showcase how simply we could federate a Hugging " -"Face workflow using Flower." +"**Migration warnings for deprecated functionality** " +"([#690](https://github.com/adap/flower/pull/690))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:229 +#: ../../source/ref-changelog.md:919 msgid "" -"Note that in this example we used :code:`PyTorch`, but we could have very" -" well used :code:`TensorFlow`." +"Earlier versions of Flower were often migrated to new APIs, while " +"maintaining compatibility with legacy APIs. This release introduces " +"detailed warning messages if usage of deprecated APIs is detected. The " +"new warning messages often provide details on how to migrate to more " +"recent APIs, thus easing the transition from one release to another." msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:-1 +#: ../../source/ref-changelog.md:921 msgid "" -"Read this Federated Learning quickstart tutorial for creating an iOS app " -"using Flower to train a neural network on MNIST." +"Improved docs and docstrings " +"([#691](https://github.com/adap/flower/pull/691) " +"[#692](https://github.com/adap/flower/pull/692) " +"[#713](https://github.com/adap/flower/pull/713))" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:5 -msgid "Quickstart iOS" +#: ../../source/ref-changelog.md:923 +msgid "MXNet example and documentation" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:10 +#: ../../source/ref-changelog.md:925 msgid "" -"In this tutorial we will learn how to train a Neural Network on MNIST " -"using Flower and CoreML on iOS devices." 
+"FedBN implementation in example PyTorch: From Centralized To Federated " +"([#696](https://github.com/adap/flower/pull/696) " +"[#702](https://github.com/adap/flower/pull/702) " +"[#705](https://github.com/adap/flower/pull/705))" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:12 +#: ../../source/ref-changelog.md:929 msgid "" -"First of all, for running the Flower Python server, it is recommended to " -"create a virtual environment and run everything within a :doc:`virtualenv" -" `. For the Flower client " -"implementation in iOS, it is recommended to use Xcode as our IDE." +"**Serialization-agnostic server** " +"([#721](https://github.com/adap/flower/pull/721))" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:15 +#: ../../source/ref-changelog.md:931 msgid "" -"Our example consists of one Python *server* and two iPhone *clients* that" -" all have the same model." +"The Flower server is now fully serialization-agnostic. Prior usage of " +"class `Weights` (which represents parameters as deserialized NumPy " +"ndarrays) was replaced by class `Parameters` (e.g., in `Strategy`). " +"`Parameters` objects are fully serialization-agnostic and represents " +"parameters as byte arrays, the `tensor_type` attributes indicates how " +"these byte arrays should be interpreted (e.g., for " +"serialization/deserialization)." msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:17 +#: ../../source/ref-changelog.md:933 msgid "" -"*Clients* are responsible for generating individual weight updates for " -"the model based on their local datasets. These updates are then sent to " -"the *server* which will aggregate them to produce a better model. " -"Finally, the *server* sends this improved version of the model back to " -"each *client*. A complete cycle of weight updates is called a *round*." +"Built-in strategies implement this approach by handling serialization and" +" deserialization to/from `Weights` internally. Custom/3rd-party Strategy " +"implementations should update to the slightly changed Strategy method " +"definitions. Strategy authors can consult PR " +"[#721](https://github.com/adap/flower/pull/721) to see how strategies can" +" easily migrate to the new format." msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:21 +#: ../../source/ref-changelog.md:935 msgid "" -"Now that we have a rough idea of what is going on, let's get started to " -"setup our Flower server environment. We first need to install Flower. 
You" -" can do this by using pip:" +"Deprecated `flwr.server.Server.evaluate`, use " +"`flwr.server.Server.evaluate_round` instead " +"([#717](https://github.com/adap/flower/pull/717))" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:27 -msgid "Or Poetry:" +#: ../../source/ref-changelog.md:937 +msgid "v0.15.0 (2021-03-12)" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:34 -#: ../../source/tutorial-quickstart-mxnet.rst:36 -#: ../../source/tutorial-quickstart-pytorch.rst:37 -#: ../../source/tutorial-quickstart-scikitlearn.rst:40 -#: ../../source/tutorial-quickstart-tensorflow.rst:29 -#: ../../source/tutorial-quickstart-xgboost.rst:55 -msgid "Flower Client" +#: ../../source/ref-changelog.md:941 +msgid "" +"**Server-side parameter initialization** " +"([#658](https://github.com/adap/flower/pull/658))" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:36 +#: ../../source/ref-changelog.md:943 msgid "" -"Now that we have all our dependencies installed, let's run a simple " -"distributed training using CoreML as our local training pipeline and " -"MNIST as our dataset. For simplicity reasons we will use the complete " -"Flower client with CoreML, that has been implemented and stored inside " -"the Swift SDK. The client implementation can be seen below:" +"Model parameters can now be initialized on the server-side. Server-side " +"parameter initialization works via a new `Strategy` method called " +"`initialize_parameters`." msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:72 +#: ../../source/ref-changelog.md:945 msgid "" -"Let's create a new application project in Xcode and add :code:`flwr` as a" -" dependency in your project. For our application, we will store the logic" -" of our app in :code:`FLiOSModel.swift` and the UI elements in " -":code:`ContentView.swift`. We will focus more on :code:`FLiOSModel.swift`" -" in this quickstart. Please refer to the `full code example " -"`_ to learn more " -"about the app." +"Built-in strategies support a new constructor argument called " +"`initial_parameters` to set the initial parameters. Built-in strategies " +"will provide these initial parameters to the server on startup and then " +"delete them to free the memory afterwards." msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:75 -msgid "Import Flower and CoreML related packages in :code:`FLiOSModel.swift`:" +#: ../../source/ref-changelog.md:964 +msgid "" +"If no initial parameters are provided to the strategy, the server will " +"continue to use the current behaviour (namely, it will ask one of the " +"connected clients for its parameters and use these as the initial global " +"parameters)." msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:83 -msgid "" -"Then add the mlmodel to the project simply by drag-and-drop, the mlmodel " -"will be bundled inside the application during deployment to your iOS " -"device. We need to pass the url to access mlmodel and run CoreML machine " -"learning processes, it can be retrieved by calling the function " -":code:`Bundle.main.url`. For the MNIST dataset, we need to preprocess it " -"into :code:`MLBatchProvider` object. The preprocessing is done inside " -":code:`DataLoader.swift`." 
+#: ../../source/ref-changelog.md:966 +msgid "Deprecations" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:99 +#: ../../source/ref-changelog.md:968 msgid "" -"Since CoreML does not allow the model parameters to be seen before " -"training, and accessing the model parameters during or after the training" -" can only be done by specifying the layer name, we need to know this " -"information beforehand, through looking at the model specification, which" -" are written as proto files. The implementation can be seen in " -":code:`MLModelInspect`." +"Deprecate `flwr.server.strategy.DefaultStrategy` (migrate to " +"`flwr.server.strategy.FedAvg`, which is equivalent)" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:102 -msgid "" -"After we have all of the necessary information, let's create our Flower " -"client." +#: ../../source/ref-changelog.md:970 +msgid "v0.14.0 (2021-02-18)" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:117 +#: ../../source/ref-changelog.md:974 msgid "" -"Then start the Flower gRPC client and start communicating to the server " -"by passing our Flower client to the function :code:`startFlwrGRPC`." +"**Generalized** `Client.fit` **and** `Client.evaluate` **return values** " +"([#610](https://github.com/adap/flower/pull/610) " +"[#572](https://github.com/adap/flower/pull/572) " +"[#633](https://github.com/adap/flower/pull/633))" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:124 +#: ../../source/ref-changelog.md:976 msgid "" -"That's it for the client. We only have to implement :code:`Client` or " -"call the provided :code:`MLFlwrClient` and call :code:`startFlwrGRPC()`. " -"The attribute :code:`hostname` and :code:`port` tells the client which " -"server to connect to. This can be done by entering the hostname and port " -"in the application before clicking the start button to start the " -"federated learning process." +"Clients can now return an additional dictionary mapping `str` keys to " +"values of the following types: `bool`, `bytes`, `float`, `int`, `str`. " +"This means one can return almost arbitrary values from `fit`/`evaluate` " +"and make use of them on the server side!" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:129 -#: ../../source/tutorial-quickstart-mxnet.rst:226 -#: ../../source/tutorial-quickstart-pytorch.rst:203 -#: ../../source/tutorial-quickstart-scikitlearn.rst:157 -#: ../../source/tutorial-quickstart-tensorflow.rst:98 -#: ../../source/tutorial-quickstart-xgboost.rst:309 -msgid "Flower Server" +#: ../../source/ref-changelog.md:978 +msgid "" +"This improvement also allowed for more consistent return types between " +"`fit` and `evaluate`: `evaluate` should now return a tuple `(float, int, " +"dict)` representing the loss, number of examples, and a dictionary " +"holding arbitrary problem-specific values like accuracy." msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:131 -#: ../../source/tutorial-quickstart-mxnet.rst:228 -#: ../../source/tutorial-quickstart-pytorch.rst:205 -#: ../../source/tutorial-quickstart-tensorflow.rst:100 +#: ../../source/ref-changelog.md:980 msgid "" -"For simple workloads we can start a Flower server and leave all the " -"configuration possibilities at their default values. In a file named " -":code:`server.py`, import Flower and start the server:" +"In case you wondered: this feature is compatible with existing projects, " +"the additional dictionary return value is optional. 
New code should " +"however migrate to the new return types to be compatible with upcoming " +"Flower releases (`fit`: `List[np.ndarray], int, Dict[str, Scalar]`, " +"`evaluate`: `float, int, Dict[str, Scalar]`). See the example below for " +"details." msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:142 -#: ../../source/tutorial-quickstart-mxnet.rst:239 -#: ../../source/tutorial-quickstart-pytorch.rst:216 -#: ../../source/tutorial-quickstart-scikitlearn.rst:215 -#: ../../source/tutorial-quickstart-tensorflow.rst:112 -msgid "Train the model, federated!" +#: ../../source/ref-changelog.md:982 +msgid "" +"*Code example:* note the additional dictionary return values in both " +"`FlwrClient.fit` and `FlwrClient.evaluate`:" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:144 -#: ../../source/tutorial-quickstart-pytorch.rst:218 -#: ../../source/tutorial-quickstart-tensorflow.rst:114 -#: ../../source/tutorial-quickstart-xgboost.rst:525 +#: ../../source/ref-changelog.md:997 msgid "" -"With both client and server ready, we can now run everything and see " -"federated learning in action. FL systems usually have a server and " -"multiple clients. We therefore have to start the server first:" -msgstr "" - -#: ../../source/tutorial-quickstart-ios.rst:152 -msgid "" -"Once the server is running we can start the clients in different " -"terminals. Build and run the client through your Xcode, one through Xcode" -" Simulator and the other by deploying it to your iPhone. To see more " -"about how to deploy your app to iPhone or Simulator visit `here " -"`_." +"**Generalized** `config` **argument in** `Client.fit` **and** " +"`Client.evaluate` ([#595](https://github.com/adap/flower/pull/595))" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:156 +#: ../../source/ref-changelog.md:999 msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system in your ios device. The full `source code " -"`_ for this " -"example can be found in :code:`examples/ios`." +"The `config` argument used to be of type `Dict[str, str]`, which means " +"that dictionary values were expected to be strings. The new release " +"generalizes this to enable values of the following types: `bool`, " +"`bytes`, `float`, `int`, `str`." msgstr "" -#: ../../source/tutorial-quickstart-jax.rst:-1 +#: ../../source/ref-changelog.md:1001 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with Jax to train a linear regression model on a scikit-learn dataset." -msgstr "" - -#: ../../source/tutorial-quickstart-jax.rst:5 -msgid "Quickstart JAX" +"This means one can now pass almost arbitrary values to `fit`/`evaluate` " +"using the `config` dictionary. Yay, no more `str(epochs)` on the server-" +"side and `int(config[\"epochs\"])` on the client side!" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:-1 +#: ../../source/ref-changelog.md:1003 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with MXNet to train a Sequential model on MNIST." +"*Code example:* note that the `config` dictionary now contains non-`str` " +"values in both `Client.fit` and `Client.evaluate`:" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:5 -msgid "Quickstart MXNet" +#: ../../source/ref-changelog.md:1020 +msgid "v0.13.0 (2021-01-08)" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:7 +#: ../../source/ref-changelog.md:1024 msgid "" -"MXNet is no longer maintained and has been moved into `Attic " -"`_. 
As a result, we would " -"encourage you to use other ML frameworks alongside Flower, for example, " -"PyTorch. This tutorial might be removed in future versions of Flower." +"New example: PyTorch From Centralized To Federated " +"([#549](https://github.com/adap/flower/pull/549))" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:12 -msgid "" -"In this tutorial, we will learn how to train a :code:`Sequential` model " -"on MNIST using Flower and MXNet." +#: ../../source/ref-changelog.md:1025 +msgid "Improved documentation" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:14 -#: ../../source/tutorial-quickstart-scikitlearn.rst:12 -msgid "" -"It is recommended to create a virtual environment and run everything " -"within this :doc:`virtualenv `." +#: ../../source/ref-changelog.md:1026 +msgid "New documentation theme ([#551](https://github.com/adap/flower/pull/551))" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:16 -#: ../../source/tutorial-quickstart-pytorch.rst:17 -#: ../../source/tutorial-quickstart-scikitlearn.rst:14 -msgid "" -"Our example consists of one *server* and two *clients* all having the " -"same model." +#: ../../source/ref-changelog.md:1027 +msgid "New API reference ([#554](https://github.com/adap/flower/pull/554))" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:18 -#: ../../source/tutorial-quickstart-scikitlearn.rst:16 +#: ../../source/ref-changelog.md:1028 msgid "" -"*Clients* are responsible for generating individual model parameter " -"updates for the model based on their local datasets. These updates are " -"then sent to the *server* which will aggregate them to produce an updated" -" global model. Finally, the *server* sends this improved version of the " -"model back to each *client*. A complete cycle of parameters updates is " -"called a *round*." +"Updated examples documentation " +"([#549](https://github.com/adap/flower/pull/549))" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:22 -#: ../../source/tutorial-quickstart-scikitlearn.rst:20 +#: ../../source/ref-changelog.md:1029 msgid "" -"Now that we have a rough idea of what is going on, let's get started. We " -"first need to install Flower. You can do this by running:" +"Removed obsolete documentation " +"([#548](https://github.com/adap/flower/pull/548))" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:28 -msgid "Since we want to use MXNet, let's go ahead and install it:" +#: ../../source/ref-changelog.md:1031 +msgid "Bugfix:" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:38 +#: ../../source/ref-changelog.md:1033 msgid "" -"Now that we have all our dependencies installed, let's run a simple " -"distributed training with two clients and one server. Our training " -"procedure and network architecture are based on MXNet´s `Hand-written " -"Digit Recognition tutorial " -"`_." +"`Server.fit` does not disconnect clients when finished, disconnecting the" +" clients is now handled in `flwr.server.start_server` " +"([#553](https://github.com/adap/flower/pull/553) " +"[#540](https://github.com/adap/flower/issues/540))." 
msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:40 -msgid "" -"In a file called :code:`client.py`, import Flower and MXNet related " -"packages:" +#: ../../source/ref-changelog.md:1035 +msgid "v0.12.0 (2020-12-07)" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:55 -msgid "In addition, define the device allocation in MXNet with:" +#: ../../source/ref-changelog.md:1037 ../../source/ref-changelog.md:1053 +msgid "Important changes:" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:61 +#: ../../source/ref-changelog.md:1039 msgid "" -"We use MXNet to load MNIST, a popular image classification dataset of " -"handwritten digits for machine learning. The MXNet utility " -":code:`mx.test_utils.get_mnist()` downloads the training and test data." +"Added an example for embedded devices " +"([#507](https://github.com/adap/flower/pull/507))" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:75 +#: ../../source/ref-changelog.md:1040 msgid "" -"Define the training and loss with MXNet. We train the model by looping " -"over the dataset, measure the corresponding loss, and optimize it." +"Added a new NumPyClient (in addition to the existing KerasClient) " +"([#504](https://github.com/adap/flower/pull/504) " +"[#508](https://github.com/adap/flower/pull/508))" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:113 +#: ../../source/ref-changelog.md:1041 msgid "" -"Next, we define the validation of our machine learning model. We loop " -"over the test set and measure both loss and accuracy on the test set." +"Deprecated `flwr_example` package and started to migrate examples into " +"the top-level `examples` directory " +"([#494](https://github.com/adap/flower/pull/494) " +"[#512](https://github.com/adap/flower/pull/512))" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:137 -msgid "" -"After defining the training and testing of a MXNet machine learning " -"model, we use these functions to implement a Flower client." +#: ../../source/ref-changelog.md:1043 +msgid "v0.11.0 (2020-11-30)" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:139 -msgid "Our Flower clients will use a simple :code:`Sequential` model:" +#: ../../source/ref-changelog.md:1045 +msgid "Incompatible changes:" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:158 +#: ../../source/ref-changelog.md:1047 msgid "" -"After loading the dataset with :code:`load_data()` we perform one forward" -" propagation to initialize the model and model parameters with " -":code:`model(init)`. Next, we implement a Flower client." +"Renamed strategy methods " +"([#486](https://github.com/adap/flower/pull/486)) to unify the naming of " +"Flower's public APIs. Other public methods/functions (e.g., every method " +"in `Client`, but also `Strategy.evaluate`) do not use the `on_` prefix, " +"which is why we're removing it from the four methods in Strategy. To " +"migrate rename the following `Strategy` methods accordingly:" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:160 -#: ../../source/tutorial-quickstart-pytorch.rst:144 -#: ../../source/tutorial-quickstart-tensorflow.rst:54 -msgid "" -"The Flower server interacts with clients through an interface called " -":code:`Client`. When the server selects a particular client for training," -" it sends training instructions over the network. The client receives " -"those instructions and calls one of the :code:`Client` methods to run " -"your code (i.e., to train the neural network we defined earlier)." 
+#: ../../source/ref-changelog.md:1048 +msgid "`on_configure_evaluate` => `configure_evaluate`" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:166 -msgid "" -"Flower provides a convenience class called :code:`NumPyClient` which " -"makes it easier to implement the :code:`Client` interface when your " -"workload uses MXNet. Implementing :code:`NumPyClient` usually means " -"defining the following methods (:code:`set_parameters` is optional " -"though):" +#: ../../source/ref-changelog.md:1049 +msgid "`on_aggregate_evaluate` => `aggregate_evaluate`" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:172 -#: ../../source/tutorial-quickstart-pytorch.rst:156 -#: ../../source/tutorial-quickstart-scikitlearn.rst:109 -msgid "return the model weight as a list of NumPy ndarrays" +#: ../../source/ref-changelog.md:1050 +msgid "`on_configure_fit` => `configure_fit`" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:173 -#: ../../source/tutorial-quickstart-pytorch.rst:157 -#: ../../source/tutorial-quickstart-scikitlearn.rst:111 -msgid ":code:`set_parameters` (optional)" +#: ../../source/ref-changelog.md:1051 +msgid "`on_aggregate_fit` => `aggregate_fit`" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:174 -#: ../../source/tutorial-quickstart-pytorch.rst:158 -#: ../../source/tutorial-quickstart-scikitlearn.rst:111 +#: ../../source/ref-changelog.md:1055 msgid "" -"update the local model weights with the parameters received from the " -"server" +"Deprecated `DefaultStrategy` " +"([#479](https://github.com/adap/flower/pull/479)). To migrate use " +"`FedAvg` instead." msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:176 -#: ../../source/tutorial-quickstart-pytorch.rst:160 -#: ../../source/tutorial-quickstart-scikitlearn.rst:114 -msgid "set the local model weights" +#: ../../source/ref-changelog.md:1056 +msgid "" +"Simplified examples and baselines " +"([#484](https://github.com/adap/flower/pull/484))." msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:177 -#: ../../source/tutorial-quickstart-pytorch.rst:161 -#: ../../source/tutorial-quickstart-scikitlearn.rst:115 -msgid "train the local model" +#: ../../source/ref-changelog.md:1057 +msgid "" +"Removed presently unused `on_conclude_round` from strategy interface " +"([#483](https://github.com/adap/flower/pull/483))." msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:178 -#: ../../source/tutorial-quickstart-pytorch.rst:162 -#: ../../source/tutorial-quickstart-scikitlearn.rst:116 -msgid "receive the updated local model weights" +#: ../../source/ref-changelog.md:1058 +msgid "" +"Set minimal Python version to 3.6.1 instead of 3.6.9 " +"([#471](https://github.com/adap/flower/pull/471))." msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:180 -#: ../../source/tutorial-quickstart-pytorch.rst:164 -#: ../../source/tutorial-quickstart-scikitlearn.rst:118 -msgid "test the local model" +#: ../../source/ref-changelog.md:1059 +msgid "" +"Improved `Strategy` docstrings " +"([#470](https://github.com/adap/flower/pull/470))." msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:182 -msgid "They can be implemented in the following way:" +#: ../../source/ref-example-projects.rst:2 +msgid "Example projects" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:212 +#: ../../source/ref-example-projects.rst:4 msgid "" -"We can now create an instance of our class :code:`MNISTClient` and add " -"one line to actually run this client:" +"Flower comes with a number of usage examples. 
The examples demonstrate " +"how Flower can be used to federate different kinds of existing machine " +"learning pipelines, usually leveraging popular machine learning " +"frameworks such as `PyTorch `_ or `TensorFlow " +"`_." msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:219 +#: ../../source/ref-example-projects.rst:10 msgid "" -"That's it for the client. We only have to implement :code:`Client` or " -":code:`NumPyClient` and call :code:`fl.client.start_client()` or " -":code:`fl.client.start_numpy_client()`. The string " -":code:`\"0.0.0.0:8080\"` tells the client which server to connect to. In " -"our case we can run the server and the client on the same machine, " -"therefore we use :code:`\"0.0.0.0:8080\"`. If we run a truly federated " -"workload with the server and clients running on different machines, all " -"that needs to change is the :code:`server_address` we pass to the client." +"The following examples are available as standalone projects. Quickstart " +"TensorFlow/Keras ---------------------------" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:241 +#: ../../source/ref-example-projects.rst:14 msgid "" -"With both client and server ready, we can now run everything and see " -"federated learning in action. Federated learning systems usually have a " -"server and multiple clients. We therefore have to start the server first:" +"The TensorFlow/Keras quickstart example shows CIFAR-10 image " +"classification with MobileNetV2:" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:249 -#: ../../source/tutorial-quickstart-pytorch.rst:226 -#: ../../source/tutorial-quickstart-scikitlearn.rst:224 -#: ../../source/tutorial-quickstart-tensorflow.rst:122 -#: ../../source/tutorial-quickstart-xgboost.rst:533 +#: ../../source/ref-example-projects.rst:17 msgid "" -"Once the server is running we can start the clients in different " -"terminals. Open a new terminal and start the first client:" +"`Quickstart TensorFlow (Code) " +"`_" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:256 -#: ../../source/tutorial-quickstart-pytorch.rst:233 -#: ../../source/tutorial-quickstart-scikitlearn.rst:231 -#: ../../source/tutorial-quickstart-tensorflow.rst:129 -#: ../../source/tutorial-quickstart-xgboost.rst:540 -msgid "Open another terminal and start the second client:" +#: ../../source/ref-example-projects.rst:18 +msgid ":doc:`Quickstart TensorFlow (Tutorial) `" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:262 -#: ../../source/tutorial-quickstart-pytorch.rst:239 -#: ../../source/tutorial-quickstart-scikitlearn.rst:237 -#: ../../source/tutorial-quickstart-xgboost.rst:546 +#: ../../source/ref-example-projects.rst:19 msgid "" -"Each client will have its own dataset. You should now see how the " -"training does in the very first terminal (the one that started the " -"server):" +"`Quickstart TensorFlow (Blog Post) `_" +msgstr "" + +#: ../../source/ref-example-projects.rst:23 +#: ../../source/tutorial-quickstart-pytorch.rst:5 +msgid "Quickstart PyTorch" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:294 +#: ../../source/ref-example-projects.rst:25 msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. The full `source code " -"`_ for this example can be found in :code:`examples" -"/quickstart-mxnet`." 
+"The PyTorch quickstart example shows CIFAR-10 image classification with a" +" simple Convolutional Neural Network:" msgstr "" -#: ../../source/tutorial-quickstart-pandas.rst:-1 +#: ../../source/ref-example-projects.rst:28 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with Pandas to perform Federated Analytics." +"`Quickstart PyTorch (Code) " +"`_" msgstr "" -#: ../../source/tutorial-quickstart-pandas.rst:5 -msgid "Quickstart Pandas" +#: ../../source/ref-example-projects.rst:29 +msgid ":doc:`Quickstart PyTorch (Tutorial) `" msgstr "" -#: ../../source/tutorial-quickstart-pandas.rst:10 -msgid "Let's build a federated analytics system using Pandas and Flower!" +#: ../../source/ref-example-projects.rst:33 +msgid "PyTorch: From Centralized To Federated" msgstr "" -#: ../../source/tutorial-quickstart-pandas.rst:12 +#: ../../source/ref-example-projects.rst:35 msgid "" -"Please refer to the `full code example " -"`_ " -"to learn more." +"This example shows how a regular PyTorch project can be federated using " +"Flower:" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:-1 +#: ../../source/ref-example-projects.rst:37 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with PyTorch to train a CNN model on MNIST." +"`PyTorch: From Centralized To Federated (Code) " +"`_" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:13 +#: ../../source/ref-example-projects.rst:38 msgid "" -"In this tutorial we will learn how to train a Convolutional Neural " -"Network on CIFAR10 using Flower and PyTorch." +":doc:`PyTorch: From Centralized To Federated (Tutorial) `" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:15 -#: ../../source/tutorial-quickstart-xgboost.rst:39 -msgid "" -"First of all, it is recommended to create a virtual environment and run " -"everything within a :doc:`virtualenv `." +#: ../../source/ref-example-projects.rst:42 +msgid "Federated Learning on Raspberry Pi and Nvidia Jetson" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:19 +#: ../../source/ref-example-projects.rst:44 msgid "" -"*Clients* are responsible for generating individual weight-updates for " -"the model based on their local datasets. These updates are then sent to " -"the *server* which will aggregate them to produce a better model. " -"Finally, the *server* sends this improved version of the model back to " -"each *client*. A complete cycle of weight updates is called a *round*." +"This example shows how Flower can be used to build a federated learning " +"system that run across Raspberry Pi and Nvidia Jetson:" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:23 +#: ../../source/ref-example-projects.rst:46 msgid "" -"Now that we have a rough idea of what is going on, let's get started. We " -"first need to install Flower. You can do this by running :" +"`Federated Learning on Raspberry Pi and Nvidia Jetson (Code) " +"`_" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:29 +#: ../../source/ref-example-projects.rst:47 msgid "" -"Since we want to use PyTorch to solve a computer vision task, let's go " -"ahead and install PyTorch and the **torchvision** library:" +"`Federated Learning on Raspberry Pi and Nvidia Jetson (Blog Post) " +"`_" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:39 +#: ../../source/ref-faq.rst:4 msgid "" -"Now that we have all our dependencies installed, let's run a simple " -"distributed training with two clients and one server. 
Our training " -"procedure and network architecture are based on PyTorch's `Deep Learning " -"with PyTorch " -"`_." +"This page collects answers to commonly asked questions about Federated " +"Learning with Flower." msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:41 -msgid "" -"In a file called :code:`client.py`, import Flower and PyTorch related " -"packages:" +#: ../../source/ref-faq.rst +msgid ":fa:`eye,mr-1` Can Flower run on Jupyter Notebooks / Google Colab?" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:56 -msgid "In addition, we define the device allocation in PyTorch with:" +#: ../../source/ref-faq.rst:8 +msgid "" +"Yes, it can! Flower even comes with a few under-the-hood optimizations to" +" make it work even better on Colab. Here's a quickstart example:" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:62 +#: ../../source/ref-faq.rst:10 msgid "" -"We use PyTorch to load CIFAR10, a popular colored image classification " -"dataset for machine learning. The PyTorch :code:`DataLoader()` downloads " -"the training and test data that are then normalized." +"`Flower simulation PyTorch " +"`_" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:78 +#: ../../source/ref-faq.rst:11 msgid "" -"Define the loss and optimizer with PyTorch. The training of the dataset " -"is done by looping over the dataset, measure the corresponding loss and " -"optimize it." +"`Flower simulation TensorFlow/Keras " +"`_" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:94 -msgid "" -"Define then the validation of the machine learning network. We loop over" -" the test set and measure the loss and accuracy of the test set." +#: ../../source/ref-faq.rst +msgid ":fa:`eye,mr-1` How can I run Federated Learning on a Raspberry Pi?" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:113 +#: ../../source/ref-faq.rst:15 msgid "" -"After defining the training and testing of a PyTorch machine learning " -"model, we use the functions for the Flower clients." +"Find the `blog post about federated learning on embedded device here " +"`_" +" and the corresponding `GitHub code example " +"`_." msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:115 -msgid "" -"The Flower clients will use a simple CNN adapted from 'PyTorch: A 60 " -"Minute Blitz':" +#: ../../source/ref-faq.rst +msgid ":fa:`eye,mr-1` Does Flower support federated learning on Android devices?" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:142 +#: ../../source/ref-faq.rst:19 msgid "" -"After loading the data set with :code:`load_data()` we define the Flower " -"interface." +"Yes, it does. Please take a look at our `blog post " +"`_ or check out the code examples:" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:150 +#: ../../source/ref-faq.rst:21 msgid "" -"Flower provides a convenience class called :code:`NumPyClient` which " -"makes it easier to implement the :code:`Client` interface when your " -"workload uses PyTorch. 
Implementing :code:`NumPyClient` usually means " -"defining the following methods (:code:`set_parameters` is optional " -"though):" +"`Android Kotlin example `_" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:166 -msgid "which can be implemented in the following way:" +#: ../../source/ref-faq.rst:22 +msgid "`Android Java example `_" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:189 -#: ../../source/tutorial-quickstart-tensorflow.rst:82 -msgid "" -"We can now create an instance of our class :code:`CifarClient` and add " -"one line to actually run this client:" +#: ../../source/ref-faq.rst +msgid ":fa:`eye,mr-1` Can I combine federated learning with blockchain?" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:196 -#: ../../source/tutorial-quickstart-tensorflow.rst:90 +#: ../../source/ref-faq.rst:26 msgid "" -"That's it for the client. We only have to implement :code:`Client` or " -":code:`NumPyClient` and call :code:`fl.client.start_client()`. If you " -"implement a client of type :code:`NumPyClient` you'll need to first call " -"its :code:`to_client()` method. The string :code:`\"[::]:8080\"` tells " -"the client which server to connect to. In our case we can run the server " -"and the client on the same machine, therefore we use " -":code:`\"[::]:8080\"`. If we run a truly federated workload with the " -"server and clients running on different machines, all that needs to " -"change is the :code:`server_address` we point the client at." +"Yes, of course. A list of available examples using Flower within a " +"blockchain environment is available here:" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:271 +#: ../../source/ref-faq.rst:28 msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. The full `source code " -"`_ for this example can be found in :code:`examples" -"/quickstart-pytorch`." +"`Flower meets Nevermined GitHub Repository `_." msgstr "" -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:-1 +#: ../../source/ref-faq.rst:29 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with PyTorch Lightning to train an Auto Encoder model on MNIST." +"`Flower meets Nevermined YouTube video " +"`_." msgstr "" -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:5 -msgid "Quickstart PyTorch Lightning" +#: ../../source/ref-faq.rst:30 +msgid "" +"`Flower meets KOSMoS `_." msgstr "" -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:10 +#: ../../source/ref-faq.rst:31 msgid "" -"Let's build a horizontal federated learning system using PyTorch " -"Lightning and Flower!" +"`Flower meets Talan blog post `_ ." msgstr "" -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:12 +#: ../../source/ref-faq.rst:32 msgid "" -"Please refer to the `full code example " -"`_ to learn more." +"`Flower meets Talan GitHub Repository " +"`_ ." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:-1 -msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with scikit-learn to train a linear regression model." +#: ../../source/ref-telemetry.md:1 +msgid "Telemetry" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:5 -msgid "Quickstart scikit-learn" +#: ../../source/ref-telemetry.md:3 +msgid "" +"The Flower open-source project collects **anonymous** usage metrics to " +"make well-informed decisions to improve Flower. 
Doing this enables the " +"Flower team to understand how Flower is used and what challenges users " +"might face." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:10 +#: ../../source/ref-telemetry.md:5 msgid "" -"In this tutorial, we will learn how to train a :code:`Logistic " -"Regression` model on MNIST using Flower and scikit-learn." +"**Flower is a friendly framework for collaborative AI and data science.**" +" Staying true to this statement, Flower makes it easy to disable " +"telemetry for users that do not want to share anonymous usage metrics." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:26 -msgid "Since we want to use scikit-learn, let's go ahead and install it:" +#: ../../source/ref-telemetry.md:7 +msgid "Principles" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:32 -msgid "Or simply install all dependencies using Poetry:" +#: ../../source/ref-telemetry.md:9 +msgid "We follow strong principles guarding anonymous usage metrics collection:" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:42 +#: ../../source/ref-telemetry.md:11 msgid "" -"Now that we have all our dependencies installed, let's run a simple " -"distributed training with two clients and one server. However, before " -"setting up the client and server, we will define all functionalities that" -" we need for our federated learning setup within :code:`utils.py`. The " -":code:`utils.py` contains different functions defining all the machine " -"learning basics:" +"**Optional:** You will always be able to disable telemetry; read on to " +"learn “[How to opt-out](#how-to-opt-out)”." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:45 -msgid ":code:`get_model_parameters()`" +#: ../../source/ref-telemetry.md:12 +msgid "" +"**Anonymous:** The reported usage metrics are anonymous and do not " +"contain any personally identifiable information (PII). See “[Collected " +"metrics](#collected-metrics)” to understand what metrics are being " +"reported." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:46 -msgid "Returns the parameters of a :code:`sklearn` LogisticRegression model" +#: ../../source/ref-telemetry.md:13 +msgid "" +"**Transparent:** You can easily inspect what anonymous metrics are being " +"reported; see the section “[How to inspect what is being reported](#how-" +"to-inspect-what-is-being-reported)”" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:47 -msgid ":code:`set_model_params()`" +#: ../../source/ref-telemetry.md:14 +msgid "" +"**Open for feedback:** You can always reach out to us if you have " +"feedback; see the section “[How to contact us](#how-to-contact-us)” for " +"details." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:48 -msgid "Sets the parameters of a :code:`sklean` LogisticRegression model" +#: ../../source/ref-telemetry.md:16 +msgid "How to opt-out" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:49 -msgid ":code:`set_initial_params()`" +#: ../../source/ref-telemetry.md:18 +msgid "" +"When Flower starts, it will check for an environment variable called " +"`FLWR_TELEMETRY_ENABLED`. Telemetry can easily be disabled by setting " +"`FLWR_TELEMETRY_ENABLED=0`. 
Assuming you are starting a Flower server or " +"client, simply do so by prepending your command as in:" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:50 -msgid "Initializes the model parameters that the Flower server will ask for" +#: ../../source/ref-telemetry.md:24 +msgid "" +"Alternatively, you can export `FLWR_TELEMETRY_ENABLED=0` in, for example," +" `.bashrc` (or whatever configuration file applies to your environment) " +"to disable Flower telemetry permanently." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:51 -msgid ":code:`load_mnist()`" +#: ../../source/ref-telemetry.md:26 +msgid "Collected metrics" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:52 -msgid "Loads the MNIST dataset using OpenML" +#: ../../source/ref-telemetry.md:28 +msgid "Flower telemetry collects the following metrics:" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:53 -msgid ":code:`shuffle()`" +#: ../../source/ref-telemetry.md:30 +msgid "" +"**Flower version.** Understand which versions of Flower are currently " +"being used. This helps us to decide whether we should invest effort into " +"releasing a patch version for an older version of Flower or instead use " +"the bandwidth to build new features." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:54 -msgid "Shuffles data and its label" +#: ../../source/ref-telemetry.md:32 +msgid "" +"**Operating system.** Enables us to answer questions such as: *Should we " +"create more guides for Linux, macOS, or Windows?*" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:56 -msgid ":code:`partition()`" +#: ../../source/ref-telemetry.md:34 +msgid "" +"**Python version.** Knowing the Python version helps us, for example, to " +"decide whether we should invest effort into supporting old versions of " +"Python or stop supporting them and start taking advantage of new Python " +"features." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:56 -msgid "Splits datasets into a number of partitions" +#: ../../source/ref-telemetry.md:36 +msgid "" +"**Hardware properties.** Understanding the hardware environment that " +"Flower is being used in helps to decide whether we should, for example, " +"put more effort into supporting low-resource environments." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:58 +#: ../../source/ref-telemetry.md:38 msgid "" -"Please check out :code:`utils.py` `here " -"`_ for more details. The pre-defined functions are used in" -" the :code:`client.py` and imported. The :code:`client.py` also requires " -"to import several packages such as Flower and scikit-learn:" +"**Execution mode.** Knowing what execution mode Flower starts in enables " +"us to understand how heavily certain features are being used and better " +"prioritize based on that." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:73 +#: ../../source/ref-telemetry.md:40 msgid "" -"We load the MNIST dataset from `OpenML " -"`_, a popular " -"image classification dataset of handwritten digits for machine learning. " -"The utility :code:`utils.load_mnist()` downloads the training and test " -"data. The training set is split afterwards into 10 partitions with " -":code:`utils.partition()`." +"**Cluster.** Flower telemetry assigns a random in-memory cluster ID each " +"time a Flower workload starts. This allows us to understand which device " +"types not only start Flower workloads but also successfully complete " +"them." 
msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:85 +#: ../../source/ref-telemetry.md:42 msgid "" -"Next, the logistic regression model is defined and initialized with " -":code:`utils.set_initial_params()`." +"**Source.** Flower telemetry tries to store a random source ID in " +"`~/.flwr/source` the first time a telemetry event is generated. The " +"source ID is important to identify whether an issue is recurring or " +"whether an issue is triggered by multiple clusters running concurrently " +"(which often happens in simulation). For example, if a device runs " +"multiple workloads at the same time, and this results in an issue, then, " +"in order to reproduce the issue, multiple workloads must be started at " +"the same time." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:97 +#: ../../source/ref-telemetry.md:44 msgid "" -"The Flower server interacts with clients through an interface called " -":code:`Client`. When the server selects a particular client for training," -" it sends training instructions over the network. The client receives " -"those instructions and calls one of the :code:`Client` methods to run " -"your code (i.e., to fit the logistic regression we defined earlier)." +"You may delete the source ID at any time. If you wish for all events " +"logged under a specific source ID to be deleted, you can send a deletion " +"request mentioning the source ID to `telemetry@flower.ai`. All events " +"related to that source ID will then be permanently deleted." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:103 +#: ../../source/ref-telemetry.md:46 msgid "" -"Flower provides a convenience class called :code:`NumPyClient` which " -"makes it easier to implement the :code:`Client` interface when your " -"workload uses scikit-learn. Implementing :code:`NumPyClient` usually " -"means defining the following methods (:code:`set_parameters` is optional " -"though):" +"We will not collect any personally identifiable information. If you think" +" any of the metrics collected could be misused in any way, please [get in" +" touch with us](#how-to-contact-us). We will update this page to reflect " +"any changes to the metrics collected and publish changes in the " +"changelog." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:112 -msgid "is directly imported with :code:`utils.set_model_params()`" +#: ../../source/ref-telemetry.md:48 +msgid "" +"If you think other metrics would be helpful for us to better guide our " +"decisions, please let us know! We will carefully review them; if we are " +"confident that they do not compromise user privacy, we may add them." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:120 -msgid "The methods can be implemented in the following way:" +#: ../../source/ref-telemetry.md:50 +msgid "How to inspect what is being reported" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:143 +#: ../../source/ref-telemetry.md:52 msgid "" -"We can now create an instance of our class :code:`MnistClient` and add " -"one line to actually run this client:" +"We wanted to make it very easy for you to inspect what anonymous usage " +"metrics are reported. You can view all the reported telemetry information" +" by setting the environment variable `FLWR_TELEMETRY_LOGGING=1`. Logging " +"is disabled by default. You may use logging independently from " +"`FLWR_TELEMETRY_ENABLED` so that you can inspect the telemetry feature " +"without sending any metrics." 
msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:150 +#: ../../source/ref-telemetry.md:58 msgid "" -"That's it for the client. We only have to implement :code:`Client` or " -":code:`NumPyClient` and call :code:`fl.client.start_client()`. If you " -"implement a client of type :code:`NumPyClient` you'll need to first call " -"its :code:`to_client()` method. The string :code:`\"0.0.0.0:8080\"` tells" -" the client which server to connect to. In our case we can run the server" -" and the client on the same machine, therefore we use " -":code:`\"0.0.0.0:8080\"`. If we run a truly federated workload with the " -"server and clients running on different machines, all that needs to " -"change is the :code:`server_address` we pass to the client." +"The inspect Flower telemetry without sending any anonymous usage metrics," +" use both environment variables:" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:159 -msgid "" -"The following Flower server is a little bit more advanced and returns an " -"evaluation function for the server-side evaluation. First, we import " -"again all required libraries such as Flower and scikit-learn." +#: ../../source/ref-telemetry.md:64 +msgid "How to contact us" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:162 -msgid ":code:`server.py`, import Flower and start the server:" +#: ../../source/ref-telemetry.md:66 +msgid "" +"We want to hear from you. If you have any feedback or ideas on how to " +"improve the way we handle anonymous usage metrics, reach out to us via " +"[Slack](https://flower.ai/join-slack/) (channel `#telemetry`) or email " +"(`telemetry@flower.ai`)." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:173 +#: ../../source/tutorial-quickstart-android.rst:-1 msgid "" -"The number of federated learning rounds is set in :code:`fit_round()` and" -" the evaluation is defined in :code:`get_evaluate_fn()`. The evaluation " -"function is called after each federated learning round and gives you " -"information about loss and accuracy." +"Read this Federated Learning quickstart tutorial for creating an Android " +"app using Flower." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:198 -msgid "" -"The :code:`main` contains the server-side parameter initialization " -":code:`utils.set_initial_params()` as well as the aggregation strategy " -":code:`fl.server.strategy:FedAvg()`. The strategy is the default one, " -"federated averaging (or FedAvg), with two clients and evaluation after " -"each federated learning round. The server can be started with the command" -" :code:`fl.server.start_server(server_address=\"0.0.0.0:8080\", " -"strategy=strategy, config=fl.server.ServerConfig(num_rounds=3))`." +#: ../../source/tutorial-quickstart-android.rst:5 +msgid "Quickstart Android" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:217 +#: ../../source/tutorial-quickstart-android.rst:10 msgid "" -"With both client and server ready, we can now run everything and see " -"federated learning in action. Federated learning systems usually have a " -"server and multiple clients. We, therefore, have to start the server " -"first:" +"Let's build a federated learning system using TFLite and Flower on " +"Android!" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:271 +#: ../../source/tutorial-quickstart-android.rst:12 msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. 
The full `source code " -"`_ for this example can be found in :code:`examples/sklearn-logreg-" -"mnist`." +"Please refer to the `full code example " +"`_ to learn " +"more." msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:-1 +#: ../../source/tutorial-quickstart-fastai.rst:-1 msgid "" "Check out this Federated Learning quickstart tutorial for using Flower " -"with TensorFlow to train a MobilNetV2 model on CIFAR-10." +"with FastAI to train a vision model on CIFAR-10." msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:5 -msgid "Quickstart TensorFlow" +#: ../../source/tutorial-quickstart-fastai.rst:5 +msgid "Quickstart fastai" msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:13 -msgid "Let's build a federated learning system in less than 20 lines of code!" +#: ../../source/tutorial-quickstart-fastai.rst:10 +msgid "Let's build a federated learning system using fastai and Flower!" msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:15 -msgid "Before Flower can be imported we have to install it:" +#: ../../source/tutorial-quickstart-fastai.rst:12 +msgid "" +"Please refer to the `full code example " +"`_ " +"to learn more." msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:21 +#: ../../source/tutorial-quickstart-huggingface.rst:-1 msgid "" -"Since we want to use the Keras API of TensorFlow (TF), we have to install" -" TF as well:" +"Check out this Federating Learning quickstart tutorial for using Flower " +"with HuggingFace Transformers in order to fine-tune an LLM." msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:31 -msgid "Next, in a file called :code:`client.py`, import Flower and TensorFlow:" +#: ../../source/tutorial-quickstart-huggingface.rst:5 +msgid "Quickstart 🤗 Transformers" msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:38 +#: ../../source/tutorial-quickstart-huggingface.rst:10 msgid "" -"We use the Keras utilities of TF to load CIFAR10, a popular colored image" -" classification dataset for machine learning. The call to " -":code:`tf.keras.datasets.cifar10.load_data()` downloads CIFAR10, caches " -"it locally, and then returns the entire training and test set as NumPy " -"ndarrays." +"Let's build a federated learning system using Hugging Face Transformers " +"and Flower!" msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:47 +#: ../../source/tutorial-quickstart-huggingface.rst:12 msgid "" -"Next, we need a model. For the purpose of this tutorial, we use " -"MobilNetV2 with 10 output classes:" +"We will leverage Hugging Face to federate the training of language models" +" over multiple clients using Flower. More specifically, we will fine-tune" +" a pre-trained Transformer model (distilBERT) for sequence classification" +" over a dataset of IMDB ratings. The end goal is to detect if a movie " +"rating is positive or negative." msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:60 -msgid "" -"Flower provides a convenience class called :code:`NumPyClient` which " -"makes it easier to implement the :code:`Client` interface when your " -"workload uses Keras. The :code:`NumPyClient` interface defines three " -"methods which can be implemented in the following way:" +#: ../../source/tutorial-quickstart-huggingface.rst:18 +msgid "Dependencies" msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:135 -msgid "Each client will have its own dataset." 
+#: ../../source/tutorial-quickstart-huggingface.rst:20 +msgid "" +"To follow along this tutorial you will need to install the following " +"packages: :code:`datasets`, :code:`evaluate`, :code:`flwr`, " +":code:`torch`, and :code:`transformers`. This can be done using " +":code:`pip`:" msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:137 -msgid "" -"You should now see how the training does in the very first terminal (the " -"one that started the server):" +#: ../../source/tutorial-quickstart-huggingface.rst:30 +msgid "Standard Hugging Face workflow" msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:169 -msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. The full `source code " -"`_ for this can be found in :code:`examples" -"/quickstart-tensorflow/client.py`." +#: ../../source/tutorial-quickstart-huggingface.rst:33 +msgid "Handling the data" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:-1 +#: ../../source/tutorial-quickstart-huggingface.rst:35 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with XGBoost to train classification models on trees." +"To fetch the IMDB dataset, we will use Hugging Face's :code:`datasets` " +"library. We then need to tokenize the data and create :code:`PyTorch` " +"dataloaders, this is all done in the :code:`load_data` function:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:5 -msgid "Quickstart XGBoost" +#: ../../source/tutorial-quickstart-huggingface.rst:81 +msgid "Training and testing the model" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:14 -msgid "Federated XGBoost" +#: ../../source/tutorial-quickstart-huggingface.rst:83 +msgid "" +"Once we have a way of creating our trainloader and testloader, we can " +"take care of the training and testing. This is very similar to any " +":code:`PyTorch` training or testing loop:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:16 -msgid "" -"EXtreme Gradient Boosting (**XGBoost**) is a robust and efficient " -"implementation of gradient-boosted decision tree (**GBDT**), that " -"maximises the computational boundaries for boosted tree methods. It's " -"primarily designed to enhance both the performance and computational " -"speed of machine learning models. In XGBoost, trees are constructed " -"concurrently, unlike the sequential approach taken by GBDT." +#: ../../source/tutorial-quickstart-huggingface.rst:121 +msgid "Creating the model itself" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:20 +#: ../../source/tutorial-quickstart-huggingface.rst:123 msgid "" -"Often, for tabular data on medium-sized datasets with fewer than 10k " -"training examples, XGBoost surpasses the results of deep learning " -"techniques." +"To create the model itself, we will just load the pre-trained distillBERT" +" model using Hugging Face’s :code:`AutoModelForSequenceClassification` :" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:23 -msgid "Why federated XGBoost?" +#: ../../source/tutorial-quickstart-huggingface.rst:136 +msgid "Federating the example" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:25 -msgid "" -"Indeed, as the demand for data privacy and decentralized learning grows, " -"there's an increasing requirement to implement federated XGBoost systems " -"for specialised applications, like survival analysis and financial fraud " -"detection." 
+#: ../../source/tutorial-quickstart-huggingface.rst:139 +msgid "Creating the IMDBClient" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:27 +#: ../../source/tutorial-quickstart-huggingface.rst:141 msgid "" -"Federated learning ensures that raw data remains on the local device, " -"making it an attractive approach for sensitive domains where data " -"security and privacy are paramount. Given the robustness and efficiency " -"of XGBoost, combining it with federated learning offers a promising " -"solution for these specific challenges." +"To federate our example to multiple clients, we first need to write our " +"Flower client class (inheriting from :code:`flwr.client.NumPyClient`). " +"This is very easy, as our model is a standard :code:`PyTorch` model:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:30 +#: ../../source/tutorial-quickstart-huggingface.rst:169 msgid "" -"In this tutorial we will learn how to train a federated XGBoost model on " -"HIGGS dataset using Flower and :code:`xgboost` package. We use a simple " -"example (`full code xgboost-quickstart " -"`_)" -" with two *clients* and one *server* to demonstrate how federated XGBoost" -" works, and then we dive into a more complex example (`full code xgboost-" -"comprehensive `_) to run various experiments." +"The :code:`get_parameters` function lets the server get the client's " +"parameters. Inversely, the :code:`set_parameters` function allows the " +"server to send its parameters to the client. Finally, the :code:`fit` " +"function trains the model locally for the client, and the " +":code:`evaluate` function tests the model locally and returns the " +"relevant metrics." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:37 -msgid "Environment Setup" +#: ../../source/tutorial-quickstart-huggingface.rst:175 +msgid "Starting the server" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:41 +#: ../../source/tutorial-quickstart-huggingface.rst:177 msgid "" -"We first need to install Flower and Flower Datasets. You can do this by " -"running :" +"Now that we have a way to instantiate clients, we need to create our " +"server in order to aggregate the results. Using Flower, this can be done " +"very easily by first choosing a strategy (here, we are using " +":code:`FedAvg`, which will define the global weights as the average of " +"all the clients' weights at each round) and then using the " +":code:`flwr.server.start_server` function:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:47 +#: ../../source/tutorial-quickstart-huggingface.rst:205 msgid "" -"Since we want to use :code:`xgboost` package to build up XGBoost trees, " -"let's go ahead and install :code:`xgboost`:" +"The :code:`weighted_average` function is there to provide a way to " +"aggregate the metrics distributed amongst the clients (basically this " +"allows us to display a nice average accuracy and loss for every round)." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:57 -msgid "" -"*Clients* are responsible for generating individual weight-updates for " -"the model based on their local datasets. Now that we have all our " -"dependencies installed, let's run a simple distributed training with two " -"clients and one server." 
+#: ../../source/tutorial-quickstart-huggingface.rst:209 +msgid "Putting everything together" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:60 -msgid "" -"In a file called :code:`client.py`, import xgboost, Flower, Flower " -"Datasets and other related functions:" +#: ../../source/tutorial-quickstart-huggingface.rst:211 +msgid "We can now start client instances using:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:87 -msgid "Dataset partition and hyper-parameter selection" +#: ../../source/tutorial-quickstart-huggingface.rst:221 +msgid "" +"And they will be able to connect to the server and start the federated " +"training." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:89 +#: ../../source/tutorial-quickstart-huggingface.rst:223 msgid "" -"Prior to local training, we require loading the HIGGS dataset from Flower" -" Datasets and conduct data partitioning for FL:" +"If you want to check out everything put together, you should check out " +"the `full code example `_ ." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:102 +#: ../../source/tutorial-quickstart-huggingface.rst:226 msgid "" -"In this example, we split the dataset into two partitions with uniform " -"distribution (:code:`IidPartitioner(num_partitions=2)`). Then, we load " -"the partition for the given client based on :code:`node_id`:" +"Of course, this is a very basic example, and a lot can be added or " +"modified, it was just to showcase how simply we could federate a Hugging " +"Face workflow using Flower." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:121 +#: ../../source/tutorial-quickstart-huggingface.rst:229 msgid "" -"After that, we do train/test splitting on the given partition (client's " -"local data), and transform data format for :code:`xgboost` package." +"Note that in this example we used :code:`PyTorch`, but we could have very" +" well used :code:`TensorFlow`." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:134 +#: ../../source/tutorial-quickstart-ios.rst:-1 msgid "" -"The functions of :code:`train_test_split` and " -":code:`transform_dataset_to_dmatrix` are defined as below:" +"Read this Federated Learning quickstart tutorial for creating an iOS app " +"using Flower to train a neural network on MNIST." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:158 -msgid "Finally, we define the hyper-parameters used for XGBoost training." +#: ../../source/tutorial-quickstart-ios.rst:5 +msgid "Quickstart iOS" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:174 +#: ../../source/tutorial-quickstart-ios.rst:10 msgid "" -"The :code:`num_local_round` represents the number of iterations for local" -" tree boost. We use CPU for the training in default. One can shift it to " -"GPU by setting :code:`tree_method` to :code:`gpu_hist`. We use AUC as " -"evaluation metric." -msgstr "" - -#: ../../source/tutorial-quickstart-xgboost.rst:181 -msgid "Flower client definition for XGBoost" +"In this tutorial we will learn how to train a Neural Network on MNIST " +"using Flower and CoreML on iOS devices." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:183 +#: ../../source/tutorial-quickstart-ios.rst:12 msgid "" -"After loading the dataset we define the Flower client. We follow the " -"general rule to define :code:`XgbClient` class inherited from " -":code:`fl.client.Client`." +"First of all, for running the Flower Python server, it is recommended to " +"create a virtual environment and run everything within a :doc:`virtualenv" +" `. 
For the Flower client "
+"implementation in iOS, it is recommended to use Xcode as our IDE."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:193
+#: ../../source/tutorial-quickstart-ios.rst:15
 msgid ""
-"The :code:`self.bst` is used to keep the Booster objects that remain "
-"consistent across rounds, allowing them to store predictions from trees "
-"integrated in earlier rounds and maintain other essential data structures"
-" for training."
+"Our example consists of one Python *server* and two iPhone *clients* that"
+" all have the same model."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:196
+#: ../../source/tutorial-quickstart-ios.rst:17
 msgid ""
-"Then, we override :code:`get_parameters`, :code:`fit` and "
-":code:`evaluate` methods insides :code:`XgbClient` class as follows."
+"*Clients* are responsible for generating individual weight updates for "
+"the model based on their local datasets. These updates are then sent to "
+"the *server* which will aggregate them to produce a better model. "
+"Finally, the *server* sends this improved version of the model back to "
+"each *client*. A complete cycle of weight updates is called a *round*."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:210
+#: ../../source/tutorial-quickstart-ios.rst:21
 msgid ""
-"Unlike neural network training, XGBoost trees are not started from a "
-"specified random weights. In this case, we do not use "
-":code:`get_parameters` and :code:`set_parameters` to initialise model "
-"parameters for XGBoost. As a result, let's return an empty tensor in "
-":code:`get_parameters` when it is called by the server at the first "
-"round."
+"Now that we have a rough idea of what is going on, let's get started and "
+"set up our Flower server environment. We first need to install Flower. "
+"You can do this by using pip:"
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:251
-msgid ""
-"In :code:`fit`, at the first round, we call :code:`xgb.train()` to build "
-"up the first set of trees. the returned Booster object and config are "
-"stored in :code:`self.bst` and :code:`self.config`, respectively. From "
-"the second round, we load the global model sent from server to "
-":code:`self.bst`, and then update model weights on local training data "
-"with function :code:`local_boost` as follows:"
+#: ../../source/tutorial-quickstart-ios.rst:27
+msgid "Or Poetry:"
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:269
-msgid ""
-"Given :code:`num_local_round`, we update trees by calling "
-":code:`self.bst.update` method. After training, the last "
-":code:`N=num_local_round` trees will be extracted to send to the server."
+#: ../../source/tutorial-quickstart-ios.rst:34
+#: ../../source/tutorial-quickstart-pytorch.rst:37
+#: ../../source/tutorial-quickstart-scikitlearn.rst:40
+#: ../../source/tutorial-quickstart-tensorflow.rst:29
+#: ../../source/tutorial-quickstart-xgboost.rst:55
+msgid "Flower Client"
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:291
+#: ../../source/tutorial-quickstart-ios.rst:36
 msgid ""
-"In :code:`evaluate`, we call :code:`self.bst.eval_set` function to "
-"conduct evaluation on valid set. The AUC value will be returned."
+"Now that we have all our dependencies installed, let's run a simple "
+"distributed training using CoreML as our local training pipeline and "
+"MNIST as our dataset. For simplicity, we will use the complete Flower "
+"client with CoreML that has been implemented and stored inside the Swift "
+"SDK. The client implementation can be seen below:"
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:294
+#: ../../source/tutorial-quickstart-ios.rst:72
 msgid ""
-"Now, we can create an instance of our class :code:`XgbClient` and add one"
-" line to actually run this client:"
+"Let's create a new application project in Xcode and add :code:`flwr` as a"
+" dependency in your project. For our application, we will store the logic"
+" of our app in :code:`FLiOSModel.swift` and the UI elements in "
+":code:`ContentView.swift`. We will focus more on :code:`FLiOSModel.swift`"
+" in this quickstart. Please refer to the `full code example "
+"`_ to learn more "
+"about the app."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:300
-msgid ""
-"That's it for the client. We only have to implement :code:`Client`and "
-"call :code:`fl.client.start_client()`. The string :code:`\"[::]:8080\"` "
-"tells the client which server to connect to. In our case we can run the "
-"server and the client on the same machine, therefore we use "
-":code:`\"[::]:8080\"`. If we run a truly federated workload with the "
-"server and clients running on different machines, all that needs to "
-"change is the :code:`server_address` we point the client at."
+#: ../../source/tutorial-quickstart-ios.rst:75
+msgid "Import Flower and CoreML related packages in :code:`FLiOSModel.swift`:"
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:311
+#: ../../source/tutorial-quickstart-ios.rst:83
 msgid ""
-"These updates are then sent to the *server* which will aggregate them to "
-"produce a better model. Finally, the *server* sends this improved version"
-" of the model back to each *client* to finish a complete FL round."
+"Then add the mlmodel to the project simply by drag-and-drop; the mlmodel "
+"will be bundled inside the application during deployment to your iOS "
+"device. We need to pass the URL to access the mlmodel and run CoreML "
+"machine learning processes; it can be retrieved by calling the function "
+":code:`Bundle.main.url`. For the MNIST dataset, we need to preprocess it "
+"into an :code:`MLBatchProvider` object. The preprocessing is done inside "
+":code:`DataLoader.swift`."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:314
+#: ../../source/tutorial-quickstart-ios.rst:99
 msgid ""
-"In a file named :code:`server.py`, import Flower and FedXgbBagging from "
-":code:`flwr.server.strategy`."
+"Since CoreML does not allow the model parameters to be seen before "
+"training, and accessing the model parameters during or after the training"
+" can only be done by specifying the layer name, we need to know this "
+"information beforehand, by looking at the model specification, which is "
+"written as proto files. The implementation can be seen in "
+":code:`MLModelInspect`."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:316
-msgid "We first define a strategy for XGBoost bagging aggregation."
+#: ../../source/tutorial-quickstart-ios.rst:102
+msgid ""
+"After we have all of the necessary information, let's create our Flower "
+"client."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:339
+#: ../../source/tutorial-quickstart-ios.rst:117
 msgid ""
-"We use two clients for this example. An "
-":code:`evaluate_metrics_aggregation` function is defined to collect and "
-"wighted average the AUC values from clients."
+"Then start the Flower gRPC client and start communicating with the "
+"server by passing our Flower client to the function :code:`startFlwrGRPC`."
msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:342 -msgid "Then, we start the server:" +#: ../../source/tutorial-quickstart-ios.rst:124 +msgid "" +"That's it for the client. We only have to implement :code:`Client` or " +"call the provided :code:`MLFlwrClient` and call :code:`startFlwrGRPC()`. " +"The attribute :code:`hostname` and :code:`port` tells the client which " +"server to connect to. This can be done by entering the hostname and port " +"in the application before clicking the start button to start the " +"federated learning process." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:354 -msgid "Tree-based bagging aggregation" +#: ../../source/tutorial-quickstart-ios.rst:129 +#: ../../source/tutorial-quickstart-pytorch.rst:203 +#: ../../source/tutorial-quickstart-scikitlearn.rst:167 +#: ../../source/tutorial-quickstart-tensorflow.rst:98 +#: ../../source/tutorial-quickstart-xgboost.rst:309 +msgid "Flower Server" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:356 +#: ../../source/tutorial-quickstart-ios.rst:131 +#: ../../source/tutorial-quickstart-pytorch.rst:205 +#: ../../source/tutorial-quickstart-tensorflow.rst:100 msgid "" -"You must be curious about how bagging aggregation works. Let's look into " -"the details." +"For simple workloads we can start a Flower server and leave all the " +"configuration possibilities at their default values. In a file named " +":code:`server.py`, import Flower and start the server:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:358 -msgid "" -"In file :code:`flwr.server.strategy.fedxgb_bagging.py`, we define " -":code:`FedXgbBagging` inherited from :code:`flwr.server.strategy.FedAvg`." -" Then, we override the :code:`aggregate_fit`, :code:`aggregate_evaluate` " -"and :code:`evaluate` methods as follows:" +#: ../../source/tutorial-quickstart-ios.rst:142 +#: ../../source/tutorial-quickstart-pytorch.rst:216 +#: ../../source/tutorial-quickstart-scikitlearn.rst:230 +#: ../../source/tutorial-quickstart-tensorflow.rst:112 +msgid "Train the model, federated!" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:454 +#: ../../source/tutorial-quickstart-ios.rst:144 +#: ../../source/tutorial-quickstart-pytorch.rst:218 +#: ../../source/tutorial-quickstart-tensorflow.rst:114 +#: ../../source/tutorial-quickstart-xgboost.rst:525 msgid "" -"In :code:`aggregate_fit`, we sequentially aggregate the clients' XGBoost " -"trees by calling :code:`aggregate()` function:" +"With both client and server ready, we can now run everything and see " +"federated learning in action. FL systems usually have a server and " +"multiple clients. We therefore have to start the server first:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:513 +#: ../../source/tutorial-quickstart-ios.rst:152 msgid "" -"In this function, we first fetch the number of trees and the number of " -"parallel trees for the current and previous model by calling " -":code:`_get_tree_nums`. Then, the fetched information will be aggregated." -" After that, the trees (containing model weights) are aggregated to " -"generate a new tree model." -msgstr "" - -#: ../../source/tutorial-quickstart-xgboost.rst:518 -msgid "" -"After traversal of all clients' models, a new global model is generated, " -"followed by the serialisation, and sending back to each client." -msgstr "" - -#: ../../source/tutorial-quickstart-xgboost.rst:523 -msgid "Launch Federated XGBoost!" +"Once the server is running we can start the clients in different " +"terminals. 
Build and run the client through Xcode, one through the Xcode"
+" Simulator and the other by deploying it to your iPhone. To see more "
+"about how to deploy your app to iPhone or Simulator, visit `here "
+"`_."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:585
+#: ../../source/tutorial-quickstart-ios.rst:156
 msgid ""
 "Congratulations! You've successfully built and run your first federated "
-"XGBoost system. The AUC values can be checked in "
-":code:`metrics_distributed`. One can see that the average AUC increases "
-"over FL rounds."
+"learning system on your iOS device. The full `source code "
+"`_ for this "
+"example can be found in :code:`examples/ios`."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:590
+#: ../../source/tutorial-quickstart-jax.rst:-1
 msgid ""
-"The full `source code `_ for this example can be found in :code:`examples"
-"/xgboost-quickstart`."
+"Check out this Federated Learning quickstart tutorial for using Flower "
+"with JAX to train a linear regression model on a scikit-learn dataset."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:594
-msgid "Comprehensive Federated XGBoost"
+#: ../../source/tutorial-quickstart-jax.rst:5
+msgid "Quickstart JAX"
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:596
+#: ../../source/tutorial-quickstart-pandas.rst:-1
 msgid ""
-"Now that you have known how federated XGBoost work with Flower, it's time"
-" to run some more comprehensive experiments by customising the "
-"experimental settings. In the xgboost-comprehensive example (`full code "
-"`_), we provide more options to define various experimental"
-" setups, including aggregation strategies, data partitioning and "
-"centralised/distributed evaluation. We also support :doc:`Flower "
-"simulation ` making it easy to simulate large "
-"client cohorts in a resource-aware manner. Let's take a look!"
+"Check out this Federated Learning quickstart tutorial for using Flower "
+"with Pandas to perform Federated Analytics."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:603
-msgid "Cyclic training"
+#: ../../source/tutorial-quickstart-pandas.rst:5
+msgid "Quickstart Pandas"
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:605
-msgid ""
-"In addition to bagging aggregation, we offer a cyclic training scheme, "
-"which performs FL in a client-by-client fashion. Instead of aggregating "
-"multiple clients, there is only one single client participating in the "
-"training per round in the cyclic training scenario. The trained local "
-"XGBoost trees will be passed to the next client as an initialised model "
-"for next round's boosting."
+#: ../../source/tutorial-quickstart-pandas.rst:10
+msgid "Let's build a federated analytics system using Pandas and Flower!"
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:609
+#: ../../source/tutorial-quickstart-pandas.rst:12
 msgid ""
-"To do this, we first customise a :code:`ClientManager` in "
-":code:`server_utils.py`:"
+"Please refer to the `full code example "
+"`_ "
+"to learn more."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:649
+#: ../../source/tutorial-quickstart-pytorch.rst:-1
 msgid ""
-"The customised :code:`ClientManager` samples all available clients in "
-"each FL round based on the order of connection to the server. Then, we "
-"define a new strategy :code:`FedXgbCyclic` in "
-":code:`flwr.server.strategy.fedxgb_cyclic.py`, in order to sequentially "
-"select only one client in given round and pass the received model to next"
-" client."
+"Check out this Federated Learning quickstart tutorial for using Flower " +"with PyTorch to train a CNN model on MNIST." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:690 +#: ../../source/tutorial-quickstart-pytorch.rst:13 msgid "" -"Unlike the original :code:`FedAvg`, we don't perform aggregation here. " -"Instead, we just make a copy of the received client model as global model" -" by overriding :code:`aggregate_fit`." +"In this tutorial we will learn how to train a Convolutional Neural " +"Network on CIFAR10 using Flower and PyTorch." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:693 +#: ../../source/tutorial-quickstart-pytorch.rst:15 +#: ../../source/tutorial-quickstart-xgboost.rst:39 msgid "" -"Also, the customised :code:`configure_fit` and :code:`configure_evaluate`" -" methods ensure the clients to be sequentially selected given FL round:" +"First of all, it is recommended to create a virtual environment and run " +"everything within a :doc:`virtualenv `." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:757 -msgid "Customised data partitioning" +#: ../../source/tutorial-quickstart-pytorch.rst:17 +#: ../../source/tutorial-quickstart-scikitlearn.rst:14 +msgid "" +"Our example consists of one *server* and two *clients* all having the " +"same model." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:759 +#: ../../source/tutorial-quickstart-pytorch.rst:19 msgid "" -"In :code:`dataset.py`, we have a function :code:`instantiate_partitioner`" -" to instantiate the data partitioner based on the given " -":code:`num_partitions` and :code:`partitioner_type`. Currently, we " -"provide four supported partitioner type to simulate the uniformity/non-" -"uniformity in data quantity (uniform, linear, square, exponential)." +"*Clients* are responsible for generating individual weight-updates for " +"the model based on their local datasets. These updates are then sent to " +"the *server* which will aggregate them to produce a better model. " +"Finally, the *server* sends this improved version of the model back to " +"each *client*. A complete cycle of weight updates is called a *round*." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:790 -msgid "Customised centralised/distributed evaluation" +#: ../../source/tutorial-quickstart-pytorch.rst:23 +msgid "" +"Now that we have a rough idea of what is going on, let's get started. We " +"first need to install Flower. You can do this by running :" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:792 +#: ../../source/tutorial-quickstart-pytorch.rst:29 msgid "" -"To facilitate centralised evaluation, we define a function in " -":code:`server_utils.py`:" +"Since we want to use PyTorch to solve a computer vision task, let's go " +"ahead and install PyTorch and the **torchvision** library:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:824 +#: ../../source/tutorial-quickstart-pytorch.rst:39 msgid "" -"This function returns a evaluation function which instantiates a " -":code:`Booster` object and loads the global model weights to it. The " -"evaluation is conducted by calling :code:`eval_set()` method, and the " -"tested AUC value is reported." +"Now that we have all our dependencies installed, let's run a simple " +"distributed training with two clients and one server. Our training " +"procedure and network architecture are based on PyTorch's `Deep Learning " +"with PyTorch " +"`_." 
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:827
+#: ../../source/tutorial-quickstart-pytorch.rst:41
 msgid ""
-"As for distributed evaluation on the clients, it's same as the quick-"
-"start example by overriding the :code:`evaluate()` method insides the "
-":code:`XgbClient` class in :code:`client_utils.py`."
+"In a file called :code:`client.py`, import Flower and PyTorch related "
+"packages:"
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:831
-msgid "Flower simulation"
+#: ../../source/tutorial-quickstart-pytorch.rst:56
+msgid "In addition, we define the device allocation in PyTorch with:"
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:832
+#: ../../source/tutorial-quickstart-pytorch.rst:62
 msgid ""
-"We also provide an example code (:code:`sim.py`) to use the simulation "
-"capabilities of Flower to simulate federated XGBoost training on either a"
-" single machine or a cluster of machines."
+"We use PyTorch to load CIFAR10, a popular colored image classification "
+"dataset for machine learning. The PyTorch :code:`DataLoader()` downloads "
+"the training and test data that are then normalized."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:866
+#: ../../source/tutorial-quickstart-pytorch.rst:78
 msgid ""
-"After importing all required packages, we define a :code:`main()` "
-"function to perform the simulation process:"
+"Define the loss and optimizer with PyTorch. The training of the dataset "
+"is done by looping over the dataset, measuring the corresponding loss, "
+"and optimizing it."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:921
+#: ../../source/tutorial-quickstart-pytorch.rst:94
 msgid ""
-"We first load the dataset and perform data partitioning, and the pre-"
-"processed data is stored in a :code:`list`. After the simulation begins, "
-"the clients won't need to pre-process their partitions again."
+"Then define the validation of the machine learning network. We loop over"
+" the test set and measure its loss and accuracy."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:924
-msgid "Then, we define the strategies and other hyper-parameters:"
+#: ../../source/tutorial-quickstart-pytorch.rst:113
+msgid ""
+"After defining the training and testing of a PyTorch machine learning "
+"model, we use these functions in the Flower clients."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:975
+#: ../../source/tutorial-quickstart-pytorch.rst:115
 msgid ""
-"After that, we start the simulation by calling "
-":code:`fl.simulation.start_simulation`:"
+"The Flower clients will use a simple CNN adapted from 'PyTorch: A 60 "
+"Minute Blitz':"
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:995
+#: ../../source/tutorial-quickstart-pytorch.rst:142
 msgid ""
-"One of key parameters for :code:`start_simulation` is :code:`client_fn` "
-"which returns a function to construct a client. We define it as follows:"
+"After loading the dataset with :code:`load_data()`, we define the Flower "
+"interface."
 msgstr ""
 
-#: ../../source/tutorial-quickstart-xgboost.rst:1038
-msgid "Arguments parser"
+#: ../../source/tutorial-quickstart-pytorch.rst:144
+#: ../../source/tutorial-quickstart-tensorflow.rst:54
+msgid ""
+"The Flower server interacts with clients through an interface called "
+":code:`Client`. When the server selects a particular client for training,"
+" it sends training instructions over the network. 
The client receives " +"those instructions and calls one of the :code:`Client` methods to run " +"your code (i.e., to train the neural network we defined earlier)." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1040 +#: ../../source/tutorial-quickstart-pytorch.rst:150 msgid "" -"In :code:`utils.py`, we define the arguments parsers for clients, server " -"and simulation, allowing users to specify different experimental " -"settings. Let's first see the sever side:" +"Flower provides a convenience class called :code:`NumPyClient` which " +"makes it easier to implement the :code:`Client` interface when your " +"workload uses PyTorch. Implementing :code:`NumPyClient` usually means " +"defining the following methods (:code:`set_parameters` is optional " +"though):" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1086 -msgid "" -"This allows user to specify training strategies / the number of total " -"clients / FL rounds / participating clients / clients for evaluation, and" -" evaluation fashion. Note that with :code:`--centralised-eval`, the sever" -" will do centralised evaluation and all functionalities for client " -"evaluation will be disabled." +#: ../../source/tutorial-quickstart-pytorch.rst:156 +#: ../../source/tutorial-quickstart-scikitlearn.rst:119 +msgid "return the model weight as a list of NumPy ndarrays" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1090 -msgid "Then, the argument parser on client side:" +#: ../../source/tutorial-quickstart-pytorch.rst:157 +#: ../../source/tutorial-quickstart-scikitlearn.rst:121 +msgid ":code:`set_parameters` (optional)" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1144 +#: ../../source/tutorial-quickstart-pytorch.rst:158 +#: ../../source/tutorial-quickstart-scikitlearn.rst:121 msgid "" -"This defines various options for client data partitioning. Besides, " -"clients also have an option to conduct evaluation on centralised test set" -" by setting :code:`--centralised-eval`, as well as an option to perform " -"scaled learning rate based on the number of clients by setting :code" -":`--scaled-lr`." +"update the local model weights with the parameters received from the " +"server" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1148 -msgid "We also have an argument parser for simulation:" +#: ../../source/tutorial-quickstart-pytorch.rst:160 +#: ../../source/tutorial-quickstart-scikitlearn.rst:124 +msgid "set the local model weights" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1226 -msgid "This integrates all arguments for both client and server sides." 
+#: ../../source/tutorial-quickstart-pytorch.rst:161 +#: ../../source/tutorial-quickstart-scikitlearn.rst:125 +msgid "train the local model" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1229 -msgid "Example commands" +#: ../../source/tutorial-quickstart-pytorch.rst:162 +#: ../../source/tutorial-quickstart-scikitlearn.rst:126 +msgid "receive the updated local model weights" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1231 -msgid "" -"To run a centralised evaluated experiment with bagging strategy on 5 " -"clients with exponential distribution for 50 rounds, we first start the " -"server as below:" +#: ../../source/tutorial-quickstart-pytorch.rst:164 +#: ../../source/tutorial-quickstart-scikitlearn.rst:128 +msgid "test the local model" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1238 -msgid "Then, on each client terminal, we start the clients:" -msgstr "" +#: ../../source/tutorial-quickstart-pytorch.rst:166 +msgid "which can be implemented in the following way:" +msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1244 -msgid "To run the same experiment with Flower simulation:" +#: ../../source/tutorial-quickstart-pytorch.rst:189 +#: ../../source/tutorial-quickstart-tensorflow.rst:82 +msgid "" +"We can now create an instance of our class :code:`CifarClient` and add " +"one line to actually run this client:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1250 +#: ../../source/tutorial-quickstart-pytorch.rst:196 +#: ../../source/tutorial-quickstart-tensorflow.rst:90 msgid "" -"The full `code `_ for this comprehensive example can be found in" -" :code:`examples/xgboost-comprehensive`." +"That's it for the client. We only have to implement :code:`Client` or " +":code:`NumPyClient` and call :code:`fl.client.start_client()`. If you " +"implement a client of type :code:`NumPyClient` you'll need to first call " +"its :code:`to_client()` method. The string :code:`\"[::]:8080\"` tells " +"the client which server to connect to. In our case we can run the server " +"and the client on the same machine, therefore we use " +":code:`\"[::]:8080\"`. If we run a truly federated workload with the " +"server and clients running on different machines, all that needs to " +"change is the :code:`server_address` we point the client at." msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:9 -msgid "Build a strategy from scratch" +#: ../../source/tutorial-quickstart-pytorch.rst:226 +#: ../../source/tutorial-quickstart-scikitlearn.rst:239 +#: ../../source/tutorial-quickstart-tensorflow.rst:122 +#: ../../source/tutorial-quickstart-xgboost.rst:533 +msgid "" +"Once the server is running we can start the clients in different " +"terminals. Open a new terminal and start the first client:" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:11 +#: ../../source/tutorial-quickstart-pytorch.rst:233 +#: ../../source/tutorial-quickstart-scikitlearn.rst:246 +#: ../../source/tutorial-quickstart-tensorflow.rst:129 +#: ../../source/tutorial-quickstart-xgboost.rst:540 +msgid "Open another terminal and start the second client:" +msgstr "" + +#: ../../source/tutorial-quickstart-pytorch.rst:239 +#: ../../source/tutorial-quickstart-scikitlearn.rst:252 +#: ../../source/tutorial-quickstart-xgboost.rst:546 msgid "" -"Welcome to the third part of the Flower federated learning tutorial. 
In " -"previous parts of this tutorial, we introduced federated learning with " -"PyTorch and Flower (`part 1 `__) and we learned how strategies " -"can be used to customize the execution on both the server and the clients" -" (`part 2 `__)." +"Each client will have its own dataset. You should now see how the " +"training does in the very first terminal (the one that started the " +"server):" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:13 +#: ../../source/tutorial-quickstart-pytorch.rst:271 msgid "" -"In this notebook, we'll continue to customize the federated learning " -"system we built previously by creating a custom version of FedAvg (again," -" using `Flower `__ and `PyTorch " -"`__)." +"Congratulations! You've successfully built and run your first federated " +"learning system. The full `source code " +"`_ for this example can be found in :code:`examples" +"/quickstart-pytorch`." msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:15 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:16 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:15 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:15 +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:-1 msgid "" -"`Star Flower on GitHub `__ ⭐️ and join " -"the Flower community on Slack to connect, ask questions, and get help: " -"`Join Slack `__ 🌼 We'd love to hear from " -"you in the ``#introductions`` channel! And if anything is unclear, head " -"over to the ``#questions`` channel." +"Check out this Federated Learning quickstart tutorial for using Flower " +"with PyTorch Lightning to train an Auto Encoder model on MNIST." msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:17 -msgid "Let's build a new ``Strategy`` from scratch!" +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:5 +msgid "Quickstart PyTorch Lightning" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:29 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:29 -msgid "Preparation" +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:10 +msgid "" +"Let's build a horizontal federated learning system using PyTorch " +"Lightning and Flower!" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:31 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:32 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:31 +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:12 msgid "" -"Before we begin with the actual code, let's make sure that we have " -"everything we need." +"Please refer to the `full code example " +"`_ to learn more." msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:43 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:44 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:43 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:43 -msgid "Installing dependencies" +#: ../../source/tutorial-quickstart-scikitlearn.rst:-1 +msgid "" +"Check out this Federated Learning quickstart tutorial for using Flower " +"with scikit-learn to train a linear regression model." 
msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:45 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:46 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:45 -msgid "First, we install the necessary packages:" +#: ../../source/tutorial-quickstart-scikitlearn.rst:5 +msgid "Quickstart scikit-learn" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:65 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:66 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:65 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:65 +#: ../../source/tutorial-quickstart-scikitlearn.rst:10 msgid "" -"Now that we have all dependencies installed, we can import everything we " -"need for this tutorial:" +"In this tutorial, we will learn how to train a :code:`Logistic " +"Regression` model on MNIST using Flower and scikit-learn." msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:101 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:102 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:101 +#: ../../source/tutorial-quickstart-scikitlearn.rst:12 msgid "" -"It is possible to switch to a runtime that has GPU acceleration enabled " -"(on Google Colab: ``Runtime > Change runtime type > Hardware acclerator: " -"GPU > Save``). Note, however, that Google Colab is not always able to " -"offer GPU acceleration. If you see an error related to GPU availability " -"in one of the following sections, consider switching back to CPU-based " -"execution by setting ``DEVICE = torch.device(\"cpu\")``. If the runtime " -"has GPU acceleration enabled, you should see the output ``Training on " -"cuda``, otherwise it'll say ``Training on cpu``." +"It is recommended to create a virtual environment and run everything " +"within this :doc:`virtualenv `." msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:114 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:115 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:114 -msgid "Data loading" +#: ../../source/tutorial-quickstart-scikitlearn.rst:16 +msgid "" +"*Clients* are responsible for generating individual model parameter " +"updates for the model based on their local datasets. These updates are " +"then sent to the *server* which will aggregate them to produce an updated" +" global model. Finally, the *server* sends this improved version of the " +"model back to each *client*. A complete cycle of parameters updates is " +"called a *round*." msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:116 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:116 +#: ../../source/tutorial-quickstart-scikitlearn.rst:20 msgid "" -"Let's now load the CIFAR-10 training and test set, partition them into " -"ten smaller datasets (each split into training and validation set), and " -"wrap everything in their own ``DataLoader``. We introduce a new parameter" -" ``num_clients`` which allows us to call ``load_datasets`` with different" -" numbers of clients." +"Now that we have a rough idea of what is going on, let's get started. We " +"first need to install Flower. 
You can do this by running:" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:167 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:168 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:167 -msgid "Model training/evaluation" +#: ../../source/tutorial-quickstart-scikitlearn.rst:26 +msgid "Since we want to use scikit-learn, let's go ahead and install it:" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:169 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:170 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:169 +#: ../../source/tutorial-quickstart-scikitlearn.rst:32 +msgid "Or simply install all dependencies using Poetry:" +msgstr "" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:42 msgid "" -"Let's continue with the usual model definition (including " -"``set_parameters`` and ``get_parameters``), training and test functions:" +"Now that we have all our dependencies installed, let's run a simple " +"distributed training with two clients and one server. However, before " +"setting up the client and server, we will define all functionalities that" +" we need for our federated learning setup within :code:`utils.py`. The " +":code:`utils.py` contains different functions defining all the machine " +"learning basics:" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:258 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:258 -msgid "Flower client" +#: ../../source/tutorial-quickstart-scikitlearn.rst:45 +msgid ":code:`get_model_parameters()`" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:260 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:260 -msgid "" -"To implement the Flower client, we (again) create a subclass of " -"``flwr.client.NumPyClient`` and implement the three methods " -"``get_parameters``, ``fit``, and ``evaluate``. Here, we also pass the " -"``cid`` to the client and use it log additional details:" +#: ../../source/tutorial-quickstart-scikitlearn.rst:46 +msgid "Returns the parameters of a :code:`sklearn` LogisticRegression model" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:308 -msgid "Let's test what we have so far before we continue:" +#: ../../source/tutorial-quickstart-scikitlearn.rst:47 +msgid ":code:`set_model_params()`" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:339 -msgid "Build a Strategy from scratch" +#: ../../source/tutorial-quickstart-scikitlearn.rst:48 +msgid "Sets the parameters of a :code:`sklearn` LogisticRegression model" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:341 -msgid "" -"Let’s overwrite the ``configure_fit`` method such that it passes a higher" -" learning rate (potentially also other hyperparameters) to the optimizer " -"of a fraction of the clients. We will keep the sampling of the clients as" -" it is in ``FedAvg`` and then change the configuration dictionary (one of" -" the ``FitIns`` attributes)." 
+#: ../../source/tutorial-quickstart-scikitlearn.rst:50 +msgid ":code:`set_initial_params()`" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:507 -msgid "" -"The only thing left is to use the newly created custom Strategy " -"``FedCustom`` when starting the experiment:" +#: ../../source/tutorial-quickstart-scikitlearn.rst:50 +msgid "Initializes the model parameters that the Flower server will ask for" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:534 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:932 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:697 -msgid "Recap" +#: ../../source/tutorial-quickstart-scikitlearn.rst:52 +msgid "" +"Please check out :code:`utils.py` `here " +"`_ for more details. The pre-defined functions are used in" +" the :code:`client.py` and imported. The :code:`client.py` also requires " +"to import several packages such as Flower and scikit-learn:" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:536 +#: ../../source/tutorial-quickstart-scikitlearn.rst:67 msgid "" -"In this notebook, we’ve seen how to implement a custom strategy. A custom" -" strategy enables granular control over client node configuration, result" -" aggregation, and more. To define a custom strategy, you only have to " -"overwrite the abstract methods of the (abstract) base class ``Strategy``." -" To make custom strategies even more powerful, you can pass custom " -"functions to the constructor of your new class (``__init__``) and then " -"call these functions whenever needed." +"Prior to local training, we need to load the MNIST dataset, a popular " +"image classification dataset of handwritten digits for machine learning, " +"and partition the dataset for FL. This can be conveniently achieved using" +" `Flower Datasets `_. The " +":code:`FederatedDataset.load_partition()` method loads the partitioned " +"training set for each partition ID defined in the :code:`--partition-id` " +"argument." msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:550 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:948 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:729 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:715 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:369 +#: ../../source/tutorial-quickstart-scikitlearn.rst:95 msgid "" -"Before you continue, make sure to join the Flower community on Slack: " -"`Join Slack `__" +"Next, the logistic regression model is defined and initialized with " +":code:`utils.set_initial_params()`." msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:552 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:950 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:731 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:717 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:371 +#: ../../source/tutorial-quickstart-scikitlearn.rst:107 msgid "" -"There's a dedicated ``#questions`` channel if you need help, but we'd " -"also love to hear who you are in ``#introductions``!" +"The Flower server interacts with clients through an interface called " +":code:`Client`. When the server selects a particular client for training," +" it sends training instructions over the network. 
The client receives " +"those instructions and calls one of the :code:`Client` methods to run " +"your code (i.e., to fit the logistic regression we defined earlier)." msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:554 +#: ../../source/tutorial-quickstart-scikitlearn.rst:113 msgid "" -"The `Flower Federated Learning Tutorial - Part 4 " -"`__ introduces ``Client``, the flexible API underlying " -"``NumPyClient``." +"Flower provides a convenience class called :code:`NumPyClient` which " +"makes it easier to implement the :code:`Client` interface when your " +"workload uses scikit-learn. Implementing :code:`NumPyClient` usually " +"means defining the following methods (:code:`set_parameters` is optional " +"though):" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:9 -msgid "Customize the client" +#: ../../source/tutorial-quickstart-scikitlearn.rst:122 +msgid "is directly imported with :code:`utils.set_model_params()`" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:11 -msgid "" -"Welcome to the fourth part of the Flower federated learning tutorial. In " -"the previous parts of this tutorial, we introduced federated learning " -"with PyTorch and Flower (`part 1 `__), we learned how " -"strategies can be used to customize the execution on both the server and " -"the clients (`part 2 `__), and we built our own " -"custom strategy from scratch (`part 3 `__)." +#: ../../source/tutorial-quickstart-scikitlearn.rst:130 +msgid "The methods can be implemented in the following way:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:14 +#: ../../source/tutorial-quickstart-scikitlearn.rst:153 msgid "" -"In this notebook, we revisit ``NumPyClient`` and introduce a new " -"baseclass for building clients, simply named ``Client``. In previous " -"parts of this tutorial, we've based our client on ``NumPyClient``, a " -"convenience class which makes it easy to work with machine learning " -"libraries that have good NumPy interoperability. With ``Client``, we gain" -" a lot of flexibility that we didn't have before, but we'll also have to " -"do a few things the we didn't have to do before." +"We can now create an instance of our class :code:`MnistClient` and add " +"one line to actually run this client:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:18 +#: ../../source/tutorial-quickstart-scikitlearn.rst:160 msgid "" -"Let's go deeper and see what it takes to move from ``NumPyClient`` to " -"``Client``!" -msgstr "" - -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:30 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:29 -msgid "Step 0: Preparation" +"That's it for the client. We only have to implement :code:`Client` or " +":code:`NumPyClient` and call :code:`fl.client.start_client()`. If you " +"implement a client of type :code:`NumPyClient` you'll need to first call " +"its :code:`to_client()` method. The string :code:`\"0.0.0.0:8080\"` tells" +" the client which server to connect to. In our case we can run the server" +" and the client on the same machine, therefore we use " +":code:`\"0.0.0.0:8080\"`. If we run a truly federated workload with the " +"server and clients running on different machines, all that needs to " +"change is the :code:`server_address` we pass to the client." 
msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:117 +#: ../../source/tutorial-quickstart-scikitlearn.rst:169 msgid "" -"Let's now load the CIFAR-10 training and test set, partition them into " -"ten smaller datasets (each split into training and validation set), and " -"wrap everything in their own ``DataLoader``." +"The following Flower server is a little bit more advanced and returns an " +"evaluation function for the server-side evaluation. First, we import " +"again all required libraries such as Flower and scikit-learn." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:259 -msgid "Step 1: Revisiting NumPyClient" +#: ../../source/tutorial-quickstart-scikitlearn.rst:172 +msgid ":code:`server.py`, import Flower and start the server:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:261 +#: ../../source/tutorial-quickstart-scikitlearn.rst:185 msgid "" -"So far, we've implemented our client by subclassing " -"``flwr.client.NumPyClient``. The three methods we implemented are " -"``get_parameters``, ``fit``, and ``evaluate``. Finally, we wrap the " -"creation of instances of this class in a function called ``client_fn``:" +"The number of federated learning rounds is set in :code:`fit_round()` and" +" the evaluation is defined in :code:`get_evaluate_fn()`. The evaluation " +"function is called after each federated learning round and gives you " +"information about loss and accuracy. Note that we also make use of Flower" +" Datasets here to load the test split of the MNIST dataset for server-" +"side evaluation." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:309 +#: ../../source/tutorial-quickstart-scikitlearn.rst:213 msgid "" -"We've seen this before, there's nothing new so far. The only *tiny* " -"difference compared to the previous notebook is naming, we've changed " -"``FlowerClient`` to ``FlowerNumPyClient`` and ``client_fn`` to " -"``numpyclient_fn``. Let's run it to see the output we get:" +"The :code:`main` contains the server-side parameter initialization " +":code:`utils.set_initial_params()` as well as the aggregation strategy " +":code:`fl.server.strategy:FedAvg()`. The strategy is the default one, " +"federated averaging (or FedAvg), with two clients and evaluation after " +"each federated learning round. The server can be started with the command" +" :code:`fl.server.start_server(server_address=\"0.0.0.0:8080\", " +"strategy=strategy, config=fl.server.ServerConfig(num_rounds=3))`." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:339 +#: ../../source/tutorial-quickstart-scikitlearn.rst:232 msgid "" -"This works as expected, two clients are training for three rounds of " -"federated learning." +"With both client and server ready, we can now run everything and see " +"federated learning in action. Federated learning systems usually have a " +"server and multiple clients. We, therefore, have to start the server " +"first:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:341 +#: ../../source/tutorial-quickstart-scikitlearn.rst:286 msgid "" -"Let's dive a little bit deeper and discuss how Flower executes this " -"simulation. Whenever a client is selected to do some work, " -"``start_simulation`` calls the function ``numpyclient_fn`` to create an " -"instance of our ``FlowerNumPyClient`` (along with loading the model and " -"the data)." +"Congratulations! 
You've successfully built and run your first federated " +"learning system. The full `source code " +"`_ for this example can be found in :code:`examples/sklearn-logreg-" +"mnist`." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:343 +#: ../../source/tutorial-quickstart-tensorflow.rst:-1 msgid "" -"But here's the perhaps surprising part: Flower doesn't actually use the " -"``FlowerNumPyClient`` object directly. Instead, it wraps the object to " -"makes it look like a subclass of ``flwr.client.Client``, not " -"``flwr.client.NumPyClient``. In fact, the Flower core framework doesn't " -"know how to handle ``NumPyClient``'s, it only knows how to handle " -"``Client``'s. ``NumPyClient`` is just a convenience abstraction built on " -"top of ``Client``." +"Check out this Federated Learning quickstart tutorial for using Flower " +"with TensorFlow to train a MobilNetV2 model on CIFAR-10." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:345 -msgid "" -"Instead of building on top of ``NumPyClient``, we can directly build on " -"top of ``Client``." +#: ../../source/tutorial-quickstart-tensorflow.rst:5 +msgid "Quickstart TensorFlow" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:357 -msgid "Step 2: Moving from ``NumPyClient`` to ``Client``" +#: ../../source/tutorial-quickstart-tensorflow.rst:13 +msgid "Let's build a federated learning system in less than 20 lines of code!" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:359 -msgid "" -"Let's try to do the same thing using ``Client`` instead of " -"``NumPyClient``." +#: ../../source/tutorial-quickstart-tensorflow.rst:15 +msgid "Before Flower can be imported we have to install it:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:465 +#: ../../source/tutorial-quickstart-tensorflow.rst:21 msgid "" -"Before we discuss the code in more detail, let's try to run it! Gotta " -"make sure our new ``Client``-based client works, right?" +"Since we want to use the Keras API of TensorFlow (TF), we have to install" +" TF as well:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:490 -msgid "" -"That's it, we're now using ``Client``. It probably looks similar to what " -"we've done with ``NumPyClient``. So what's the difference?" +#: ../../source/tutorial-quickstart-tensorflow.rst:31 +msgid "Next, in a file called :code:`client.py`, import Flower and TensorFlow:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:492 +#: ../../source/tutorial-quickstart-tensorflow.rst:38 msgid "" -"First of all, it's more code. But why? The difference comes from the fact" -" that ``Client`` expects us to take care of parameter serialization and " -"deserialization. For Flower to be able to send parameters over the " -"network, it eventually needs to turn these parameters into ``bytes``. " -"Turning parameters (e.g., NumPy ``ndarray``'s) into raw bytes is called " -"serialization. Turning raw bytes into something more useful (like NumPy " -"``ndarray``'s) is called deserialization. Flower needs to do both: it " -"needs to serialize parameters on the server-side and send them to the " -"client, the client needs to deserialize them to use them for local " -"training, and then serialize the updated parameters again to send them " -"back to the server, which (finally!) deserializes them again in order to " -"aggregate them with the updates received from other clients." 
+"We use the Keras utilities of TF to load CIFAR10, a popular colored image" +" classification dataset for machine learning. The call to " +":code:`tf.keras.datasets.cifar10.load_data()` downloads CIFAR10, caches " +"it locally, and then returns the entire training and test set as NumPy " +"ndarrays." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:495 +#: ../../source/tutorial-quickstart-tensorflow.rst:47 msgid "" -"The only *real* difference between Client and NumPyClient is that " -"NumPyClient takes care of serialization and deserialization for you. It " -"can do so because it expects you to return parameters as NumPy ndarray's," -" and it knows how to handle these. This makes working with machine " -"learning libraries that have good NumPy support (most of them) a breeze." +"Next, we need a model. For the purpose of this tutorial, we use " +"MobilNetV2 with 10 output classes:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:497 +#: ../../source/tutorial-quickstart-tensorflow.rst:60 msgid "" -"In terms of API, there's one major difference: all methods in Client take" -" exactly one argument (e.g., ``FitIns`` in ``Client.fit``) and return " -"exactly one value (e.g., ``FitRes`` in ``Client.fit``). The methods in " -"``NumPyClient`` on the other hand have multiple arguments (e.g., " -"``parameters`` and ``config`` in ``NumPyClient.fit``) and multiple return" -" values (e.g., ``parameters``, ``num_example``, and ``metrics`` in " -"``NumPyClient.fit``) if there are multiple things to handle. These " -"``*Ins`` and ``*Res`` objects in ``Client`` wrap all the individual " -"values you're used to from ``NumPyClient``." +"Flower provides a convenience class called :code:`NumPyClient` which " +"makes it easier to implement the :code:`Client` interface when your " +"workload uses Keras. The :code:`NumPyClient` interface defines three " +"methods which can be implemented in the following way:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:510 -msgid "Step 3: Custom serialization" +#: ../../source/tutorial-quickstart-tensorflow.rst:135 +msgid "Each client will have its own dataset." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:512 +#: ../../source/tutorial-quickstart-tensorflow.rst:137 msgid "" -"Here we will explore how to implement custom serialization with a simple " -"example." +"You should now see how the training does in the very first terminal (the " +"one that started the server):" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:514 +#: ../../source/tutorial-quickstart-tensorflow.rst:169 msgid "" -"But first what is serialization? Serialization is just the process of " -"converting an object into raw bytes, and equally as important, " -"deserialization is the process of converting raw bytes back into an " -"object. This is very useful for network communication. Indeed, without " -"serialization, you could not just a Python object through the internet." +"Congratulations! You've successfully built and run your first federated " +"learning system. The full `source code " +"`_ for this can be found in :code:`examples" +"/quickstart-tensorflow/client.py`." 
msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:516 +#: ../../source/tutorial-quickstart-xgboost.rst:-1 msgid "" -"Federated Learning relies heavily on internet communication for training " -"by sending Python objects back and forth between the clients and the " -"server. This means that serialization is an essential part of Federated " -"Learning." +"Check out this Federated Learning quickstart tutorial for using Flower " +"with XGBoost to train classification models on trees." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:518 -msgid "" -"In the following section, we will write a basic example where instead of " -"sending a serialized version of our ``ndarray``\\ s containing our " -"parameters, we will first convert the ``ndarray`` into sparse matrices, " -"before sending them. This technique can be used to save bandwidth, as in " -"certain cases where the weights of a model are sparse (containing many 0 " -"entries), converting them to a sparse matrix can greatly improve their " -"bytesize." +#: ../../source/tutorial-quickstart-xgboost.rst:5 +msgid "Quickstart XGBoost" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:521 -msgid "Our custom serialization/deserialization functions" +#: ../../source/tutorial-quickstart-xgboost.rst:14 +msgid "Federated XGBoost" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:523 +#: ../../source/tutorial-quickstart-xgboost.rst:16 msgid "" -"This is where the real serialization/deserialization will happen, " -"especially in ``ndarray_to_sparse_bytes`` for serialization and " -"``sparse_bytes_to_ndarray`` for deserialization." +"EXtreme Gradient Boosting (**XGBoost**) is a robust and efficient " +"implementation of gradient-boosted decision tree (**GBDT**), that " +"maximises the computational boundaries for boosted tree methods. It's " +"primarily designed to enhance both the performance and computational " +"speed of machine learning models. In XGBoost, trees are constructed " +"concurrently, unlike the sequential approach taken by GBDT." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:525 +#: ../../source/tutorial-quickstart-xgboost.rst:20 msgid "" -"Note that we imported the ``scipy.sparse`` library in order to convert " -"our arrays." +"Often, for tabular data on medium-sized datasets with fewer than 10k " +"training examples, XGBoost surpasses the results of deep learning " +"techniques." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:613 -msgid "Client-side" +#: ../../source/tutorial-quickstart-xgboost.rst:23 +msgid "Why federated XGBoost?" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:615 +#: ../../source/tutorial-quickstart-xgboost.rst:25 msgid "" -"To be able to serialize our ``ndarray``\\ s into sparse parameters, we " -"will just have to call our custom functions in our " -"``flwr.client.Client``." +"Indeed, as the demand for data privacy and decentralized learning grows, " +"there's an increasing requirement to implement federated XGBoost systems " +"for specialised applications, like survival analysis and financial fraud " +"detection." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:617 -msgid "" -"Indeed, in ``get_parameters`` we need to serialize the parameters we got " -"from our network using our custom ``ndarrays_to_sparse_parameters`` " -"defined above." 
-msgstr "" - -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:619 +#: ../../source/tutorial-quickstart-xgboost.rst:27 msgid "" -"In ``fit``, we first need to deserialize the parameters coming from the " -"server using our custom ``sparse_parameters_to_ndarrays`` and then we " -"need to serialize our local results with " -"``ndarrays_to_sparse_parameters``." +"Federated learning ensures that raw data remains on the local device, " +"making it an attractive approach for sensitive domains where data " +"security and privacy are paramount. Given the robustness and efficiency " +"of XGBoost, combining it with federated learning offers a promising " +"solution for these specific challenges." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:621 +#: ../../source/tutorial-quickstart-xgboost.rst:30 msgid "" -"In ``evaluate``, we will only need to deserialize the global parameters " -"with our custom function." +"In this tutorial we will learn how to train a federated XGBoost model on " +"HIGGS dataset using Flower and :code:`xgboost` package. We use a simple " +"example (`full code xgboost-quickstart " +"`_)" +" with two *clients* and one *server* to demonstrate how federated XGBoost" +" works, and then we dive into a more complex example (`full code xgboost-" +"comprehensive `_) to run various experiments." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:725 -msgid "Server-side" +#: ../../source/tutorial-quickstart-xgboost.rst:37 +msgid "Environment Setup" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:727 +#: ../../source/tutorial-quickstart-xgboost.rst:41 msgid "" -"For this example, we will just use ``FedAvg`` as a strategy. To change " -"the serialization and deserialization here, we only need to reimplement " -"the ``evaluate`` and ``aggregate_fit`` functions of ``FedAvg``. The other" -" functions of the strategy will be inherited from the super class " -"``FedAvg``." +"We first need to install Flower and Flower Datasets. You can do this by " +"running :" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:729 -msgid "As you can see only one line as change in ``evaluate``:" +#: ../../source/tutorial-quickstart-xgboost.rst:47 +msgid "" +"Since we want to use :code:`xgboost` package to build up XGBoost trees, " +"let's go ahead and install :code:`xgboost`:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:735 +#: ../../source/tutorial-quickstart-xgboost.rst:57 msgid "" -"And for ``aggregate_fit``, we will first deserialize every result we " -"received:" +"*Clients* are responsible for generating individual weight-updates for " +"the model based on their local datasets. Now that we have all our " +"dependencies installed, let's run a simple distributed training with two " +"clients and one server." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:744 -msgid "And then serialize the aggregated result:" +#: ../../source/tutorial-quickstart-xgboost.rst:60 +msgid "" +"In a file called :code:`client.py`, import xgboost, Flower, Flower " +"Datasets and other related functions:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:903 -msgid "We can now run our custom serialization example!" 
+#: ../../source/tutorial-quickstart-xgboost.rst:87
+msgid "Dataset partition and hyper-parameter selection"
 msgstr ""

-#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:934
+#: ../../source/tutorial-quickstart-xgboost.rst:89
 msgid ""
-"In this part of the tutorial, we've seen how we can build clients by "
-"subclassing either ``NumPyClient`` or ``Client``. ``NumPyClient`` is a "
-"convenience abstraction that makes it easier to work with machine "
-"learning libraries that have good NumPy interoperability. ``Client`` is a"
-" more flexible abstraction that allows us to do things that are not "
-"possible in ``NumPyClient``. In order to do so, it requires us to handle "
-"parameter serialization and deserialization ourselves."
+"Prior to local training, we need to load the HIGGS dataset from Flower "
+"Datasets and conduct data partitioning for FL:"
 msgstr ""

-#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:952
+#: ../../source/tutorial-quickstart-xgboost.rst:102
 msgid ""
-"This is the final part of the Flower tutorial (for now!), "
-"congratulations! You're now well equipped to understand the rest of the "
-"documentation. There are many topics we didn't cover in the tutorial, we "
-"recommend the following resources:"
+"In this example, we split the dataset into two partitions with uniform "
+"distribution (:code:`IidPartitioner(num_partitions=2)`). Then, we load "
+"the partition for the given client based on :code:`node_id`:"
 msgstr ""

-#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:954
-msgid "`Read Flower Docs `__"
+#: ../../source/tutorial-quickstart-xgboost.rst:121
+msgid ""
+"After that, we do train/test splitting on the given partition (client's "
+"local data), and transform the data format for the :code:`xgboost` "
+"package."
 msgstr ""

-#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:955
+#: ../../source/tutorial-quickstart-xgboost.rst:134
 msgid ""
-"`Check out Flower Code Examples "
-"`__"
+"The :code:`train_test_split` and :code:`transform_dataset_to_dmatrix` "
+"functions are defined below:"
 msgstr ""

-#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:956
-msgid ""
-"`Use Flower Baselines for your research "
-"`__"
+#: ../../source/tutorial-quickstart-xgboost.rst:158
+msgid "Finally, we define the hyper-parameters used for XGBoost training."
 msgstr ""

-#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:957
+#: ../../source/tutorial-quickstart-xgboost.rst:174
 msgid ""
-"`Watch Flower Summit 2023 videos `__"
+"The :code:`num_local_round` represents the number of iterations for "
+"local tree boosting. We use the CPU for training by default. One can "
+"switch to GPU training by setting :code:`tree_method` to "
+":code:`gpu_hist`. We use AUC as the evaluation metric."
 msgstr ""

-#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:9
-msgid "Get started with Flower"
+#: ../../source/tutorial-quickstart-xgboost.rst:181
+msgid "Flower client definition for XGBoost"
 msgstr ""

-#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:11
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:11
-msgid "Welcome to the Flower federated learning tutorial!"
+#: ../../source/tutorial-quickstart-xgboost.rst:183
+msgid ""
+"After loading the dataset, we define the Flower client. We follow the "
+"general rule of defining an :code:`XgbClient` class that inherits from "
+":code:`fl.client.Client`."
msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:13 +#: ../../source/tutorial-quickstart-xgboost.rst:193 msgid "" -"In this notebook, we'll build a federated learning system using Flower, " -"`Flower Datasets `__ and PyTorch. In " -"part 1, we use PyTorch for the model training pipeline and data loading. " -"In part 2, we continue to federate the PyTorch-based pipeline using " -"Flower." +"The :code:`self.bst` is used to keep the Booster objects that remain " +"consistent across rounds, allowing them to store predictions from trees " +"integrated in earlier rounds and maintain other essential data structures" +" for training." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:17 -msgid "Let's get stated!" +#: ../../source/tutorial-quickstart-xgboost.rst:196 +msgid "" +"Then, we override :code:`get_parameters`, :code:`fit` and " +":code:`evaluate` methods insides :code:`XgbClient` class as follows." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:31 +#: ../../source/tutorial-quickstart-xgboost.rst:210 msgid "" -"Before we begin with any actual code, let's make sure that we have " -"everything we need." +"Unlike neural network training, XGBoost trees are not started from a " +"specified random weights. In this case, we do not use " +":code:`get_parameters` and :code:`set_parameters` to initialise model " +"parameters for XGBoost. As a result, let's return an empty tensor in " +":code:`get_parameters` when it is called by the server at the first " +"round." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:45 +#: ../../source/tutorial-quickstart-xgboost.rst:251 msgid "" -"Next, we install the necessary packages for PyTorch (``torch`` and " -"``torchvision``), Flower Datasets (``flwr-datasets``) and Flower " -"(``flwr``):" +"In :code:`fit`, at the first round, we call :code:`xgb.train()` to build " +"up the first set of trees. the returned Booster object and config are " +"stored in :code:`self.bst` and :code:`self.config`, respectively. From " +"the second round, we load the global model sent from server to " +":code:`self.bst`, and then update model weights on local training data " +"with function :code:`local_boost` as follows:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:105 +#: ../../source/tutorial-quickstart-xgboost.rst:269 msgid "" -"It is possible to switch to a runtime that has GPU acceleration enabled " -"(on Google Colab: ``Runtime > Change runtime type > Hardware accelerator:" -" GPU > Save``). Note, however, that Google Colab is not always able to " -"offer GPU acceleration. If you see an error related to GPU availability " -"in one of the following sections, consider switching back to CPU-based " -"execution by setting ``DEVICE = torch.device(\"cpu\")``. If the runtime " -"has GPU acceleration enabled, you should see the output ``Training on " -"cuda``, otherwise it'll say ``Training on cpu``." +"Given :code:`num_local_round`, we update trees by calling " +":code:`self.bst.update` method. After training, the last " +":code:`N=num_local_round` trees will be extracted to send to the server." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:118 -msgid "Loading the data" +#: ../../source/tutorial-quickstart-xgboost.rst:291 +msgid "" +"In :code:`evaluate`, we call :code:`self.bst.eval_set` function to " +"conduct evaluation on valid set. The AUC value will be returned." 
msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:120 +#: ../../source/tutorial-quickstart-xgboost.rst:294 msgid "" -"Federated learning can be applied to many different types of tasks across" -" different domains. In this tutorial, we introduce federated learning by " -"training a simple convolutional neural network (CNN) on the popular " -"CIFAR-10 dataset. CIFAR-10 can be used to train image classifiers that " -"distinguish between images from ten different classes: 'airplane', " -"'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', and " -"'truck'." +"Now, we can create an instance of our class :code:`XgbClient` and add one" +" line to actually run this client:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:131 +#: ../../source/tutorial-quickstart-xgboost.rst:300 msgid "" -"We simulate having multiple datasets from multiple organizations (also " -"called the \"cross-silo\" setting in federated learning) by splitting the" -" original CIFAR-10 dataset into multiple partitions. Each partition will " -"represent the data from a single organization. We're doing this purely " -"for experimentation purposes, in the real world there's no need for data " -"splitting because each organization already has their own data (so the " -"data is naturally partitioned)." +"That's it for the client. We only have to implement :code:`Client`and " +"call :code:`fl.client.start_client()`. The string :code:`\"[::]:8080\"` " +"tells the client which server to connect to. In our case we can run the " +"server and the client on the same machine, therefore we use " +":code:`\"[::]:8080\"`. If we run a truly federated workload with the " +"server and clients running on different machines, all that needs to " +"change is the :code:`server_address` we point the client at." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:133 +#: ../../source/tutorial-quickstart-xgboost.rst:311 msgid "" -"Each organization will act as a client in the federated learning system. " -"So having ten organizations participate in a federation means having ten " -"clients connected to the federated learning server." +"These updates are then sent to the *server* which will aggregate them to " +"produce a better model. Finally, the *server* sends this improved version" +" of the model back to each *client* to finish a complete FL round." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:144 +#: ../../source/tutorial-quickstart-xgboost.rst:314 msgid "" -"Let's now create the Federated Dataset abstraction that from ``flwr-" -"datasets`` that partitions the CIFAR-10. We will create small training " -"and test set for each edge device and wrap each of them into a PyTorch " -"``DataLoader``:" +"In a file named :code:`server.py`, import Flower and FedXgbBagging from " +":code:`flwr.server.strategy`." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:198 -msgid "" -"We now have a list of ten training sets and ten validation sets " -"(``trainloaders`` and ``valloaders``) representing the data of ten " -"different organizations. Each ``trainloader``/``valloader`` pair contains" -" 4500 training examples and 500 validation examples. There's also a " -"single ``testloader`` (we did not split the test set). 
Again, this is " -"only necessary for building research or educational systems, actual " -"federated learning systems have their data naturally distributed across " -"multiple partitions." +#: ../../source/tutorial-quickstart-xgboost.rst:316 +msgid "We first define a strategy for XGBoost bagging aggregation." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:201 +#: ../../source/tutorial-quickstart-xgboost.rst:339 msgid "" -"Let's take a look at the first batch of images and labels in the first " -"training set (i.e., ``trainloaders[0]``) before we move on:" +"We use two clients for this example. An " +":code:`evaluate_metrics_aggregation` function is defined to collect and " +"wighted average the AUC values from clients." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:240 -msgid "" -"The output above shows a random batch of images from the first " -"``trainloader`` in our list of ten ``trainloaders``. It also prints the " -"labels associated with each image (i.e., one of the ten possible labels " -"we've seen above). If you run the cell again, you should see another " -"batch of images." +#: ../../source/tutorial-quickstart-xgboost.rst:342 +msgid "Then, we start the server:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:252 -msgid "Step 1: Centralized Training with PyTorch" +#: ../../source/tutorial-quickstart-xgboost.rst:354 +msgid "Tree-based bagging aggregation" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:263 +#: ../../source/tutorial-quickstart-xgboost.rst:356 msgid "" -"Next, we're going to use PyTorch to define a simple convolutional neural " -"network. This introduction assumes basic familiarity with PyTorch, so it " -"doesn't cover the PyTorch-related aspects in full detail. If you want to " -"dive deeper into PyTorch, we recommend `DEEP LEARNING WITH PYTORCH: A 60 " -"MINUTE BLITZ " -"`__." +"You must be curious about how bagging aggregation works. Let's look into " +"the details." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:275 -msgid "Defining the model" +#: ../../source/tutorial-quickstart-xgboost.rst:358 +msgid "" +"In file :code:`flwr.server.strategy.fedxgb_bagging.py`, we define " +":code:`FedXgbBagging` inherited from :code:`flwr.server.strategy.FedAvg`." +" Then, we override the :code:`aggregate_fit`, :code:`aggregate_evaluate` " +"and :code:`evaluate` methods as follows:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:277 +#: ../../source/tutorial-quickstart-xgboost.rst:454 msgid "" -"We use the simple CNN described in the `PyTorch tutorial " -"`__:" +"In :code:`aggregate_fit`, we sequentially aggregate the clients' XGBoost " +"trees by calling :code:`aggregate()` function:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:314 -msgid "Let's continue with the usual training and test functions:" +#: ../../source/tutorial-quickstart-xgboost.rst:513 +msgid "" +"In this function, we first fetch the number of trees and the number of " +"parallel trees for the current and previous model by calling " +":code:`_get_tree_nums`. Then, the fetched information will be aggregated." +" After that, the trees (containing model weights) are aggregated to " +"generate a new tree model." 
msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:374 -msgid "Training the model" +#: ../../source/tutorial-quickstart-xgboost.rst:518 +msgid "" +"After traversal of all clients' models, a new global model is generated, " +"followed by the serialisation, and sending back to each client." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:376 +#: ../../source/tutorial-quickstart-xgboost.rst:523 +msgid "Launch Federated XGBoost!" +msgstr "" + +#: ../../source/tutorial-quickstart-xgboost.rst:585 msgid "" -"We now have all the basic building blocks we need: a dataset, a model, a " -"training function, and a test function. Let's put them together to train " -"the model on the dataset of one of our organizations " -"(``trainloaders[0]``). This simulates the reality of most machine " -"learning projects today: each organization has their own data and trains " -"models only on this internal data:" +"Congratulations! You've successfully built and run your first federated " +"XGBoost system. The AUC values can be checked in " +":code:`metrics_distributed`. One can see that the average AUC increases " +"over FL rounds." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:406 +#: ../../source/tutorial-quickstart-xgboost.rst:590 msgid "" -"Training the simple CNN on our CIFAR-10 split for 5 epochs should result " -"in a test set accuracy of about 41%, which is not good, but at the same " -"time, it doesn't really matter for the purposes of this tutorial. The " -"intent was just to show a simplistic centralized training pipeline that " -"sets the stage for what comes next - federated learning!" +"The full `source code `_ for this example can be found in :code:`examples" +"/xgboost-quickstart`." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:418 -msgid "Step 2: Federated Learning with Flower" +#: ../../source/tutorial-quickstart-xgboost.rst:594 +msgid "Comprehensive Federated XGBoost" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:420 +#: ../../source/tutorial-quickstart-xgboost.rst:596 msgid "" -"Step 1 demonstrated a simple centralized training pipeline. All data was " -"in one place (i.e., a single ``trainloader`` and a single ``valloader``)." -" Next, we'll simulate a situation where we have multiple datasets in " -"multiple organizations and where we train a model over these " -"organizations using federated learning." +"Now that you have known how federated XGBoost work with Flower, it's time" +" to run some more comprehensive experiments by customising the " +"experimental settings. In the xgboost-comprehensive example (`full code " +"`_), we provide more options to define various experimental" +" setups, including aggregation strategies, data partitioning and " +"centralised/distributed evaluation. We also support :doc:`Flower " +"simulation ` making it easy to simulate large " +"client cohorts in a resource-aware manner. Let's take a look!" 
msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:432 -msgid "Updating model parameters" +#: ../../source/tutorial-quickstart-xgboost.rst:603 +msgid "Cyclic training" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:434 +#: ../../source/tutorial-quickstart-xgboost.rst:605 msgid "" -"In federated learning, the server sends the global model parameters to " -"the client, and the client updates the local model with the parameters " -"received from the server. It then trains the model on the local data " -"(which changes the model parameters locally) and sends the " -"updated/changed model parameters back to the server (or, alternatively, " -"it sends just the gradients back to the server, not the full model " -"parameters)." +"In addition to bagging aggregation, we offer a cyclic training scheme, " +"which performs FL in a client-by-client fashion. Instead of aggregating " +"multiple clients, there is only one single client participating in the " +"training per round in the cyclic training scenario. The trained local " +"XGBoost trees will be passed to the next client as an initialised model " +"for next round's boosting." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:436 +#: ../../source/tutorial-quickstart-xgboost.rst:609 msgid "" -"We need two helper functions to update the local model with parameters " -"received from the server and to get the updated model parameters from the" -" local model: ``set_parameters`` and ``get_parameters``. The following " -"two functions do just that for the PyTorch model above." +"To do this, we first customise a :code:`ClientManager` in " +":code:`server_utils.py`:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:438 +#: ../../source/tutorial-quickstart-xgboost.rst:649 msgid "" -"The details of how this works are not really important here (feel free to" -" consult the PyTorch documentation if you want to learn more). In " -"essence, we use ``state_dict`` to access PyTorch model parameter tensors." -" The parameter tensors are then converted to/from a list of NumPy " -"ndarray's (which Flower knows how to serialize/deserialize):" +"The customised :code:`ClientManager` samples all available clients in " +"each FL round based on the order of connection to the server. Then, we " +"define a new strategy :code:`FedXgbCyclic` in " +":code:`flwr.server.strategy.fedxgb_cyclic.py`, in order to sequentially " +"select only one client in given round and pass the received model to next" +" client." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:466 -msgid "Implementing a Flower client" +#: ../../source/tutorial-quickstart-xgboost.rst:690 +msgid "" +"Unlike the original :code:`FedAvg`, we don't perform aggregation here. " +"Instead, we just make a copy of the received client model as global model" +" by overriding :code:`aggregate_fit`." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:468 +#: ../../source/tutorial-quickstart-xgboost.rst:693 msgid "" -"With that out of the way, let's move on to the interesting part. " -"Federated learning systems consist of a server and multiple clients. In " -"Flower, we create clients by implementing subclasses of " -"``flwr.client.Client`` or ``flwr.client.NumPyClient``. We use " -"``NumPyClient`` in this tutorial because it is easier to implement and " -"requires us to write less boilerplate." 
+"Also, the customised :code:`configure_fit` and :code:`configure_evaluate`" +" methods ensure the clients to be sequentially selected given FL round:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:470 +#: ../../source/tutorial-quickstart-xgboost.rst:757 +msgid "Customised data partitioning" +msgstr "" + +#: ../../source/tutorial-quickstart-xgboost.rst:759 msgid "" -"To implement the Flower client, we create a subclass of " -"``flwr.client.NumPyClient`` and implement the three methods " -"``get_parameters``, ``fit``, and ``evaluate``:" +"In :code:`dataset.py`, we have a function :code:`instantiate_partitioner`" +" to instantiate the data partitioner based on the given " +":code:`num_partitions` and :code:`partitioner_type`. Currently, we " +"provide four supported partitioner type to simulate the uniformity/non-" +"uniformity in data quantity (uniform, linear, square, exponential)." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:472 -msgid "``get_parameters``: Return the current local model parameters" +#: ../../source/tutorial-quickstart-xgboost.rst:790 +msgid "Customised centralised/distributed evaluation" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:473 +#: ../../source/tutorial-quickstart-xgboost.rst:792 msgid "" -"``fit``: Receive model parameters from the server, train the model " -"parameters on the local data, and return the (updated) model parameters " -"to the server" +"To facilitate centralised evaluation, we define a function in " +":code:`server_utils.py`:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:474 +#: ../../source/tutorial-quickstart-xgboost.rst:824 msgid "" -"``evaluate``: Receive model parameters from the server, evaluate the " -"model parameters on the local data, and return the evaluation result to " -"the server" +"This function returns a evaluation function which instantiates a " +":code:`Booster` object and loads the global model weights to it. The " +"evaluation is conducted by calling :code:`eval_set()` method, and the " +"tested AUC value is reported." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:476 +#: ../../source/tutorial-quickstart-xgboost.rst:827 msgid "" -"We mentioned that our clients will use the previously defined PyTorch " -"components for model training and evaluation. Let's see a simple Flower " -"client implementation that brings everything together:" +"As for distributed evaluation on the clients, it's same as the quick-" +"start example by overriding the :code:`evaluate()` method insides the " +":code:`XgbClient` class in :code:`client_utils.py`." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:513 -msgid "" -"Our class ``FlowerClient`` defines how local training/evaluation will be " -"performed and allows Flower to call the local training/evaluation through" -" ``fit`` and ``evaluate``. Each instance of ``FlowerClient`` represents a" -" *single client* in our federated learning system. Federated learning " -"systems have multiple clients (otherwise, there's not much to federate), " -"so each client will be represented by its own instance of " -"``FlowerClient``. If we have, for example, three clients in our workload," -" then we'd have three instances of ``FlowerClient``. 
Flower calls " -"``FlowerClient.fit`` on the respective instance when the server selects a" -" particular client for training (and ``FlowerClient.evaluate`` for " -"evaluation)." +#: ../../source/tutorial-quickstart-xgboost.rst:831 +msgid "Flower simulation" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:517 -msgid "Using the Virtual Client Engine" +#: ../../source/tutorial-quickstart-xgboost.rst:832 +msgid "" +"We also provide an example code (:code:`sim.py`) to use the simulation " +"capabilities of Flower to simulate federated XGBoost training on either a" +" single machine or a cluster of machines." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:519 +#: ../../source/tutorial-quickstart-xgboost.rst:866 msgid "" -"In this notebook, we want to simulate a federated learning system with 10" -" clients on a single machine. This means that the server and all 10 " -"clients will live on a single machine and share resources such as CPU, " -"GPU, and memory. Having 10 clients would mean having 10 instances of " -"``FlowerClient`` in memory. Doing this on a single machine can quickly " -"exhaust the available memory resources, even if only a subset of these " -"clients participates in a single round of federated learning." +"After importing all required packages, we define a :code:`main()` " +"function to perform the simulation process:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:521 +#: ../../source/tutorial-quickstart-xgboost.rst:921 msgid "" -"In addition to the regular capabilities where server and clients run on " -"multiple machines, Flower, therefore, provides special simulation " -"capabilities that create ``FlowerClient`` instances only when they are " -"actually necessary for training or evaluation. To enable the Flower " -"framework to create clients when necessary, we need to implement a " -"function called ``client_fn`` that creates a ``FlowerClient`` instance on" -" demand. Flower calls ``client_fn`` whenever it needs an instance of one " -"particular client to call ``fit`` or ``evaluate`` (those instances are " -"usually discarded after use, so they should not keep any local state). " -"Clients are identified by a client ID, or short ``cid``. The ``cid`` can " -"be used, for example, to load different local data partitions for " -"different clients, as can be seen below:" +"We first load the dataset and perform data partitioning, and the pre-" +"processed data is stored in a :code:`list`. After the simulation begins, " +"the clients won't need to pre-process their partitions again." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:556 -msgid "Starting the training" +#: ../../source/tutorial-quickstart-xgboost.rst:924 +msgid "Then, we define the strategies and other hyper-parameters:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:558 +#: ../../source/tutorial-quickstart-xgboost.rst:975 msgid "" -"We now have the class ``FlowerClient`` which defines client-side " -"training/evaluation and ``client_fn`` which allows Flower to create " -"``FlowerClient`` instances whenever it needs to call ``fit`` or " -"``evaluate`` on one particular client. The last step is to start the " -"actual simulation using ``flwr.simulation.start_simulation``." 
+"After that, we start the simulation by calling " +":code:`fl.simulation.start_simulation`:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:560 +#: ../../source/tutorial-quickstart-xgboost.rst:995 msgid "" -"The function ``start_simulation`` accepts a number of arguments, amongst " -"them the ``client_fn`` used to create ``FlowerClient`` instances, the " -"number of clients to simulate (``num_clients``), the number of federated " -"learning rounds (``num_rounds``), and the strategy. The strategy " -"encapsulates the federated learning approach/algorithm, for example, " -"*Federated Averaging* (FedAvg)." +"One of key parameters for :code:`start_simulation` is :code:`client_fn` " +"which returns a function to construct a client. We define it as follows:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:562 -msgid "" -"Flower has a number of built-in strategies, but we can also use our own " -"strategy implementations to customize nearly all aspects of the federated" -" learning approach. For this example, we use the built-in ``FedAvg`` " -"implementation and customize it using a few basic parameters. The last " -"step is the actual call to ``start_simulation`` which - you guessed it - " -"starts the simulation:" -msgstr "" - -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:608 -msgid "Behind the scenes" -msgstr "" - -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:610 -msgid "So how does this work? How does Flower execute this simulation?" +#: ../../source/tutorial-quickstart-xgboost.rst:1038 +msgid "Arguments parser" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:612 -#, python-format +#: ../../source/tutorial-quickstart-xgboost.rst:1040 msgid "" -"When we call ``start_simulation``, we tell Flower that there are 10 " -"clients (``num_clients=10``). Flower then goes ahead an asks the " -"``FedAvg`` strategy to select clients. ``FedAvg`` knows that it should " -"select 100% of the available clients (``fraction_fit=1.0``), so it goes " -"ahead and selects 10 random clients (i.e., 100% of 10)." +"In :code:`utils.py`, we define the arguments parsers for clients, server " +"and simulation, allowing users to specify different experimental " +"settings. Let's first see the sever side:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:614 +#: ../../source/tutorial-quickstart-xgboost.rst:1086 msgid "" -"Flower then asks the selected 10 clients to train the model. When the " -"server receives the model parameter updates from the clients, it hands " -"those updates over to the strategy (*FedAvg*) for aggregation. The " -"strategy aggregates those updates and returns the new global model, which" -" then gets used in the next round of federated learning." +"This allows user to specify training strategies / the number of total " +"clients / FL rounds / participating clients / clients for evaluation, and" +" evaluation fashion. Note that with :code:`--centralised-eval`, the sever" +" will do centralised evaluation and all functionalities for client " +"evaluation will be disabled." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:626 -msgid "Where's the accuracy?" 
+#: ../../source/tutorial-quickstart-xgboost.rst:1090
+msgid "Then, the argument parser on the client side:"
 msgstr ""

-#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:628
+#: ../../source/tutorial-quickstart-xgboost.rst:1144
 msgid ""
-"You may have noticed that all metrics except for ``losses_distributed`` "
-"are empty. Where did the ``{\"accuracy\": float(accuracy)}`` go?"
+"This defines various options for client data partitioning. Besides, "
+"clients also have an option to conduct evaluation on the centralised test"
+" set by setting :code:`--centralised-eval`, as well as an option to scale"
+" the learning rate based on the number of clients by setting "
+":code:`--scaled-lr`."
 msgstr ""

-#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:630
-msgid ""
-"Flower can automatically aggregate losses returned by individual clients,"
-" but it cannot do the same for metrics in the generic metrics dictionary "
-"(the one with the ``accuracy`` key). Metrics dictionaries can contain "
-"very different kinds of metrics and even key/value pairs that are not "
-"metrics at all, so the framework does not (and can not) know how to "
-"handle these automatically."
+#: ../../source/tutorial-quickstart-xgboost.rst:1148
+msgid "We also have an argument parser for simulation:"
 msgstr ""

-#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:632
-msgid ""
-"As users, we need to tell the framework how to handle/aggregate these "
-"custom metrics, and we do so by passing metric aggregation functions to "
-"the strategy. The strategy will then call these functions whenever it "
-"receives fit or evaluate metrics from clients. The two possible functions"
-" are ``fit_metrics_aggregation_fn`` and "
-"``evaluate_metrics_aggregation_fn``."
+#: ../../source/tutorial-quickstart-xgboost.rst:1226
+msgid "This integrates all arguments for both client and server sides."
 msgstr ""

-#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:634
-msgid ""
-"Let's create a simple weighted averaging function to aggregate the "
-"``accuracy`` metric we return from ``evaluate``:"
+#: ../../source/tutorial-quickstart-xgboost.rst:1229
+msgid "Example commands"
 msgstr ""

-#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:660
+#: ../../source/tutorial-quickstart-xgboost.rst:1231
 msgid ""
-"The only thing left to do is to tell the strategy to call this function "
-"whenever it receives evaluation metric dictionaries from the clients:"
+"To run a centrally evaluated experiment with the bagging strategy on 5 "
+"clients with an exponential distribution for 50 rounds, we first start "
+"the server as below:"
 msgstr ""

-#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:697
-msgid ""
-"We now have a full system that performs federated training and federated "
-"evaluation. It uses the ``weighted_average`` function to aggregate custom"
-" evaluation metrics and calculates a single ``accuracy`` metric across "
-"all clients on the server side."
+#: ../../source/tutorial-quickstart-xgboost.rst:1238 +msgid "Then, on each client terminal, we start the clients:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:699 +#: ../../source/tutorial-quickstart-xgboost.rst:1244 +msgid "To run the same experiment with Flower simulation:" +msgstr "" + +#: ../../source/tutorial-quickstart-xgboost.rst:1250 msgid "" -"The other two categories of metrics (``losses_centralized`` and " -"``metrics_centralized``) are still empty because they only apply when " -"centralized evaluation is being used. Part two of the Flower tutorial " -"will cover centralized evaluation." +"The full `code `_ for this comprehensive example can be found in" +" :code:`examples/xgboost-comprehensive`." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:711 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:351 -msgid "Final remarks" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:9 +msgid "Build a strategy from scratch" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:713 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:11 msgid "" -"Congratulations, you just trained a convolutional neural network, " -"federated over 10 clients! With that, you understand the basics of " -"federated learning with Flower. The same approach you've seen can be used" -" with other machine learning frameworks (not just PyTorch) and tasks (not" -" just CIFAR-10 images classification), for example NLP with Hugging Face " -"Transformers or speech with SpeechBrain." +"Welcome to the third part of the Flower federated learning tutorial. In " +"previous parts of this tutorial, we introduced federated learning with " +"PyTorch and Flower (`part 1 `__) and we learned how strategies " +"can be used to customize the execution on both the server and the clients" +" (`part 2 `__)." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:715 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:13 msgid "" -"In the next notebook, we're going to cover some more advanced concepts. " -"Want to customize your strategy? Initialize parameters on the server " -"side? Or evaluate the aggregated model on the server side? We'll cover " -"all this and more in the next tutorial." +"In this notebook, we'll continue to customize the federated learning " +"system we built previously by creating a custom version of FedAvg (again," +" using `Flower `__ and `PyTorch " +"`__)." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:733 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:15 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:16 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:15 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:15 msgid "" -"The `Flower Federated Learning Tutorial - Part 2 " -"`__ goes into more depth about strategies and all " -"the advanced things you can build with them." +"`Star Flower on GitHub `__ ⭐️ and join " +"the Flower community on Slack to connect, ask questions, and get help: " +"`Join Slack `__ 🌼 We'd love to hear from " +"you in the ``#introductions`` channel! And if anything is unclear, head " +"over to the ``#questions`` channel." 
msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:9 -msgid "Use a federated learning strategy" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:17 +msgid "Let's build a new ``Strategy`` from scratch!" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:11 -msgid "" -"Welcome to the next part of the federated learning tutorial. In previous " -"parts of this tutorial, we introduced federated learning with PyTorch and" -" Flower (`part 1 `__)." +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:29 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:29 +msgid "Preparation" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:13 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:31 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:32 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:31 msgid "" -"In this notebook, we'll begin to customize the federated learning system " -"we built in the introductory notebook (again, using `Flower " -"`__ and `PyTorch `__)." +"Before we begin with the actual code, let's make sure that we have " +"everything we need." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:17 -msgid "Let's move beyond FedAvg with Flower strategies!" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:43 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:44 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:43 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:43 +msgid "Installing dependencies" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:309 -msgid "Strategy customization" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:45 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:46 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:45 +msgid "First, we install the necessary packages:" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:311 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:65 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:66 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:65 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:65 msgid "" -"So far, everything should look familiar if you've worked through the " -"introductory notebook. With that, we're ready to introduce a number of " -"new features." -msgstr "" - -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:323 -msgid "Server-side parameter **initialization**" +"Now that we have all dependencies installed, we can import everything we " +"need for this tutorial:" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:325 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:101 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:102 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:101 msgid "" -"Flower, by default, initializes the global model by asking one random " -"client for the initial parameters. 
In many cases, we want more control " -"over parameter initialization though. Flower therefore allows you to " -"directly pass the initial parameters to the Strategy:" +"It is possible to switch to a runtime that has GPU acceleration enabled " +"(on Google Colab: ``Runtime > Change runtime type > Hardware acclerator: " +"GPU > Save``). Note, however, that Google Colab is not always able to " +"offer GPU acceleration. If you see an error related to GPU availability " +"in one of the following sections, consider switching back to CPU-based " +"execution by setting ``DEVICE = torch.device(\"cpu\")``. If the runtime " +"has GPU acceleration enabled, you should see the output ``Training on " +"cuda``, otherwise it'll say ``Training on cpu``." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:370 -msgid "" -"Passing ``initial_parameters`` to the ``FedAvg`` strategy prevents Flower" -" from asking one of the clients for the initial parameters. If we look " -"closely, we can see that the logs do not show any calls to the " -"``FlowerClient.get_parameters`` method." +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:114 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:115 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:114 +msgid "Data loading" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:382 -msgid "Starting with a customized strategy" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:116 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:116 +msgid "" +"Let's now load the CIFAR-10 training and test set, partition them into " +"ten smaller datasets (each split into training and validation set), and " +"wrap everything in their own ``DataLoader``. We introduce a new parameter" +" ``num_clients`` which allows us to call ``load_datasets`` with different" +" numbers of clients." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:384 -msgid "" -"We've seen the function ``start_simulation`` before. It accepts a number " -"of arguments, amongst them the ``client_fn`` used to create " -"``FlowerClient`` instances, the number of clients to simulate " -"``num_clients``, the number of rounds ``num_rounds``, and the strategy." +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:167 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:168 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:167 +msgid "Model training/evaluation" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:386 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:169 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:170 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:169 msgid "" -"The strategy encapsulates the federated learning approach/algorithm, for " -"example, ``FedAvg`` or ``FedAdagrad``. 
Let's try to use a different " -"strategy this time:" +"Let's continue with the usual model definition (including " +"``set_parameters`` and ``get_parameters``), training and test functions:" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:424 -msgid "Server-side parameter **evaluation**" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:258 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:258 +msgid "Flower client" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:426 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:260 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:260 msgid "" -"Flower can evaluate the aggregated model on the server-side or on the " -"client-side. Client-side and server-side evaluation are similar in some " -"ways, but different in others." +"To implement the Flower client, we (again) create a subclass of " +"``flwr.client.NumPyClient`` and implement the three methods " +"``get_parameters``, ``fit``, and ``evaluate``. Here, we also pass the " +"``cid`` to the client and use it log additional details:" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:428 -msgid "" -"**Centralized Evaluation** (or *server-side evaluation*) is conceptually " -"simple: it works the same way that evaluation in centralized machine " -"learning does. If there is a server-side dataset that can be used for " -"evaluation purposes, then that's great. We can evaluate the newly " -"aggregated model after each round of training without having to send the " -"model to clients. We're also fortunate in the sense that our entire " -"evaluation dataset is available at all times." +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:308 +msgid "Let's test what we have so far before we continue:" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:430 -msgid "" -"**Federated Evaluation** (or *client-side evaluation*) is more complex, " -"but also more powerful: it doesn't require a centralized dataset and " -"allows us to evaluate models over a larger set of data, which often " -"yields more realistic evaluation results. In fact, many scenarios require" -" us to use **Federated Evaluation** if we want to get representative " -"evaluation results at all. But this power comes at a cost: once we start " -"to evaluate on the client side, we should be aware that our evaluation " -"dataset can change over consecutive rounds of learning if those clients " -"are not always available. Moreover, the dataset held by each client can " -"also change over consecutive rounds. This can lead to evaluation results " -"that are not stable, so even if we would not change the model, we'd see " -"our evaluation results fluctuate over consecutive rounds." +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:339 +msgid "Build a Strategy from scratch" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:433 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:341 msgid "" -"We've seen how federated evaluation works on the client side (i.e., by " -"implementing the ``evaluate`` method in ``FlowerClient``). 
Now let's see " -"how we can evaluate aggregated model parameters on the server-side:" +"Let’s overwrite the ``configure_fit`` method such that it passes a higher" +" learning rate (potentially also other hyperparameters) to the optimizer " +"of a fraction of the clients. We will keep the sampling of the clients as" +" it is in ``FedAvg`` and then change the configuration dictionary (one of" +" the ``FitIns`` attributes)." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:490 -msgid "Sending/receiving arbitrary values to/from clients" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:507 +msgid "" +"The only thing left is to use the newly created custom Strategy " +"``FedCustom`` when starting the experiment:" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:492 -msgid "" -"In some situations, we want to configure client-side execution (training," -" evaluation) from the server-side. One example for that is the server " -"asking the clients to train for a certain number of local epochs. Flower " -"provides a way to send configuration values from the server to the " -"clients using a dictionary. Let's look at an example where the clients " -"receive values from the server through the ``config`` parameter in " -"``fit`` (``config`` is also available in ``evaluate``). The ``fit`` " -"method receives the configuration dictionary through the ``config`` " -"parameter and can then read values from this dictionary. In this example," -" it reads ``server_round`` and ``local_epochs`` and uses those values to " -"improve the logging and configure the number of local training epochs:" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:534 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:932 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:697 +msgid "Recap" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:546 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:536 msgid "" -"So how can we send this config dictionary from server to clients? The " -"built-in Flower Strategies provide way to do this, and it works similarly" -" to the way server-side evaluation works. We provide a function to the " -"strategy, and the strategy calls this function for every round of " -"federated learning:" +"In this notebook, we’ve seen how to implement a custom strategy. A custom" +" strategy enables granular control over client node configuration, result" +" aggregation, and more. To define a custom strategy, you only have to " +"overwrite the abstract methods of the (abstract) base class ``Strategy``." +" To make custom strategies even more powerful, you can pass custom " +"functions to the constructor of your new class (``__init__``) and then " +"call these functions whenever needed." 
msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:576 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:550 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:948 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:729 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:715 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:369 msgid "" -"Next, we'll just pass this function to the FedAvg strategy before " -"starting the simulation:" +"Before you continue, make sure to join the Flower community on Slack: " +"`Join Slack `__" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:613 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:552 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:950 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:731 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:717 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:371 msgid "" -"As we can see, the client logs now include the current round of federated" -" learning (which they read from the ``config`` dictionary). We can also " -"configure local training to run for one epoch during the first and second" -" round of federated learning, and then for two epochs during the third " -"round." +"There's a dedicated ``#questions`` channel if you need help, but we'd " +"also love to hear who you are in ``#introductions``!" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:615 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:554 msgid "" -"Clients can also return arbitrary values to the server. To do so, they " -"return a dictionary from ``fit`` and/or ``evaluate``. We have seen and " -"used this concept throughout this notebook without mentioning it " -"explicitly: our ``FlowerClient`` returns a dictionary containing a custom" -" key/value pair as the third return value in ``evaluate``." +"The `Flower Federated Learning Tutorial - Part 4 " +"`__ introduces ``Client``, the flexible API underlying " +"``NumPyClient``." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:627 -msgid "Scaling federated learning" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:9 +msgid "Customize the client" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:629 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:11 msgid "" -"As a last step in this notebook, let's see how we can use Flower to " -"experiment with a large number of clients." +"Welcome to the fourth part of the Flower federated learning tutorial. In " +"the previous parts of this tutorial, we introduced federated learning " +"with PyTorch and Flower (`part 1 `__), we learned how " +"strategies can be used to customize the execution on both the server and " +"the clients (`part 2 `__), and we built our own " +"custom strategy from scratch (`part 3 `__)." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:651 -#, python-format +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:14 msgid "" -"We now have 1000 partitions, each holding 45 training and 5 validation " -"examples. 
Given that the number of training examples on each client is " -"quite small, we should probably train the model a bit longer, so we " -"configure the clients to perform 3 local training epochs. We should also " -"adjust the fraction of clients selected for training during each round " -"(we don't want all 1000 clients participating in every round), so we " -"adjust ``fraction_fit`` to ``0.05``, which means that only 5% of " -"available clients (so 50 clients) will be selected for training each " -"round:" +"In this notebook, we revisit ``NumPyClient`` and introduce a new " +"baseclass for building clients, simply named ``Client``. In previous " +"parts of this tutorial, we've based our client on ``NumPyClient``, a " +"convenience class which makes it easy to work with machine learning " +"libraries that have good NumPy interoperability. With ``Client``, we gain" +" a lot of flexibility that we didn't have before, but we'll also have to " +"do a few things the we didn't have to do before." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:699 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:18 msgid "" -"In this notebook, we've seen how we can gradually enhance our system by " -"customizing the strategy, initializing parameters on the server side, " -"choosing a different strategy, and evaluating models on the server-side. " -"That's quite a bit of flexibility with so little code, right?" +"Let's go deeper and see what it takes to move from ``NumPyClient`` to " +"``Client``!" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:701 -msgid "" -"In the later sections, we've seen how we can communicate arbitrary values" -" between server and clients to fully customize client-side execution. " -"With that capability, we built a large-scale Federated Learning " -"simulation using the Flower Virtual Client Engine and ran an experiment " -"involving 1000 clients in the same workload - all in a Jupyter Notebook!" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:30 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:29 +msgid "Step 0: Preparation" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:719 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:117 msgid "" -"The `Flower Federated Learning Tutorial - Part 3 " -"`__ shows how to build a fully custom ``Strategy`` from " -"scratch." +"Let's now load the CIFAR-10 training and test set, partition them into " +"ten smaller datasets (each split into training and validation set), and " +"wrap everything in their own ``DataLoader``." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:9 -msgid "What is Federated Learning?" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:259 +msgid "Step 1: Revisiting NumPyClient" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:13 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:261 msgid "" -"In this tutorial, you will learn what federated learning is, build your " -"first system in Flower, and gradually extend it. If you work through all " -"parts of the tutorial, you will be able to build advanced federated " -"learning systems that approach the current state of the art in the field." +"So far, we've implemented our client by subclassing " +"``flwr.client.NumPyClient``. 
The three methods we implemented are "
+"``get_parameters``, ``fit``, and ``evaluate``. Finally, we wrap the "
+"creation of instances of this class in a function called ``client_fn``:"
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:15
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:309
 msgid ""
-"🧑‍🏫 This tutorial starts at zero and expects no familiarity with "
-"federated learning. Only a basic understanding of data science and Python"
-" programming is assumed."
+"We've seen this before, there's nothing new so far. The only *tiny* "
+"difference compared to the previous notebook is naming, we've changed "
+"``FlowerClient`` to ``FlowerNumPyClient`` and ``client_fn`` to "
+"``numpyclient_fn``. Let's run it to see the output we get:"
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:17
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:339
 msgid ""
-"`Star Flower on GitHub `__ ⭐️ and join "
-"the open-source Flower community on Slack to connect, ask questions, and "
-"get help: `Join Slack `__ 🌼 We'd love to "
-"hear from you in the ``#introductions`` channel! And if anything is "
-"unclear, head over to the ``#questions`` channel."
+"This works as expected, two clients are training for three rounds of "
+"federated learning."
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:19
-msgid "Let's get started!"
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:341
+msgid ""
+"Let's dive a little bit deeper and discuss how Flower executes this "
+"simulation. Whenever a client is selected to do some work, "
+"``start_simulation`` calls the function ``numpyclient_fn`` to create an "
+"instance of our ``FlowerNumPyClient`` (along with loading the model and "
+"the data)."
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:31
-msgid "Classic machine learning"
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:343
+msgid ""
+"But here's the perhaps surprising part: Flower doesn't actually use the "
+"``FlowerNumPyClient`` object directly. Instead, it wraps the object to "
+"make it look like a subclass of ``flwr.client.Client``, not "
+"``flwr.client.NumPyClient``. In fact, the Flower core framework doesn't "
+"know how to handle ``NumPyClient``'s, it only knows how to handle "
+"``Client``'s. ``NumPyClient`` is just a convenience abstraction built on "
+"top of ``Client``."
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:33
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:345
 msgid ""
-"Before we begin to discuss federated learning, let us quickly recap how "
-"most machine learning works today."
+"Instead of building on top of ``NumPyClient``, we can directly build on "
+"top of ``Client``."
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:35
-msgid ""
-"In machine learning, we have a model, and we have data. The model could "
-"be a neural network (as depicted here), or something else, like classical"
-" linear regression."
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:357
+msgid "Step 2: Moving from ``NumPyClient`` to ``Client``"
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:41
-msgid "|2b5c62c529f6416f840c594cce062fbb|"
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:359
+msgid ""
+"Let's try to do the same thing using ``Client`` instead of "
+"``NumPyClient``."
msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:109 -msgid "Model and data" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:465 +msgid "" +"Before we discuss the code in more detail, let's try to run it! Gotta " +"make sure our new ``Client``-based client works, right?" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:47 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:490 msgid "" -"We train the model using the data to perform a useful task. A task could " -"be to detect objects in images, transcribe an audio recording, or play a " -"game like Go." +"That's it, we're now using ``Client``. It probably looks similar to what " +"we've done with ``NumPyClient``. So what's the difference?" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:53 -msgid "|90b334680cb7467d9a04d39b8e8dca9f|" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:492 +msgid "" +"First of all, it's more code. But why? The difference comes from the fact" +" that ``Client`` expects us to take care of parameter serialization and " +"deserialization. For Flower to be able to send parameters over the " +"network, it eventually needs to turn these parameters into ``bytes``. " +"Turning parameters (e.g., NumPy ``ndarray``'s) into raw bytes is called " +"serialization. Turning raw bytes into something more useful (like NumPy " +"``ndarray``'s) is called deserialization. Flower needs to do both: it " +"needs to serialize parameters on the server-side and send them to the " +"client, the client needs to deserialize them to use them for local " +"training, and then serialize the updated parameters again to send them " +"back to the server, which (finally!) deserializes them again in order to " +"aggregate them with the updates received from other clients." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:111 -msgid "Train model using data" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:495 +msgid "" +"The only *real* difference between Client and NumPyClient is that " +"NumPyClient takes care of serialization and deserialization for you. It " +"can do so because it expects you to return parameters as NumPy ndarray's," +" and it knows how to handle these. This makes working with machine " +"learning libraries that have good NumPy support (most of them) a breeze." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:59 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:497 msgid "" -"Now, in practice, the training data we work with doesn't originate on the" -" machine we train the model on. It gets created somewhere else." +"In terms of API, there's one major difference: all methods in Client take" +" exactly one argument (e.g., ``FitIns`` in ``Client.fit``) and return " +"exactly one value (e.g., ``FitRes`` in ``Client.fit``). The methods in " +"``NumPyClient`` on the other hand have multiple arguments (e.g., " +"``parameters`` and ``config`` in ``NumPyClient.fit``) and multiple return" +" values (e.g., ``parameters``, ``num_example``, and ``metrics`` in " +"``NumPyClient.fit``) if there are multiple things to handle. These " +"``*Ins`` and ``*Res`` objects in ``Client`` wrap all the individual " +"values you're used to from ``NumPyClient``." 
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:61
-msgid ""
-"It originates on a smartphone by the user interacting with an app, a car "
-"collecting sensor data, a laptop receiving input via the keyboard, or a "
-"smart speaker listening to someone trying to sing a song."
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:510
+msgid "Step 3: Custom serialization"
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:67
-msgid "|65764ceee89f4335bfd93fd0b115e831|"
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:512
+msgid ""
+"Here we will explore how to implement custom serialization with a simple "
+"example."
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:113
-msgid "Data on a phone"
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:514
+msgid ""
+"But first what is serialization? Serialization is just the process of "
+"converting an object into raw bytes, and equally as important, "
+"deserialization is the process of converting raw bytes back into an "
+"object. This is very useful for network communication. Indeed, without "
+"serialization, you could not just send a Python object through the internet."
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:73
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:516
 msgid ""
-"What's also important to mention, this \"somewhere else\" is usually not "
-"just one place, it's many places. It could be several devices all running"
-" the same app. But it could also be several organizations, all generating"
-" data for the same task."
+"Federated Learning relies heavily on internet communication for training "
+"by sending Python objects back and forth between the clients and the "
+"server. This means that serialization is an essential part of Federated "
+"Learning."
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:79
-msgid "|d97319ec28bb407ea0ab9705e38f3bcf|"
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:518
+msgid ""
+"In the following section, we will write a basic example where instead of "
+"sending a serialized version of our ``ndarray``\\ s containing our "
+"parameters, we will first convert the ``ndarray`` into sparse matrices, "
+"before sending them. This technique can be used to save bandwidth, as in "
+"certain cases where the weights of a model are sparse (containing many 0 "
+"entries), converting them to a sparse matrix can greatly reduce their "
+"byte size."
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:115
-msgid "Data is on many devices"
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:521
+msgid "Our custom serialization/deserialization functions"
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:85
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:523
 msgid ""
-"So to use machine learning, or any kind of data analysis, the approach "
-"that has been used in the past was to collect all data on a central "
-"server. This server can be somewhere in a data center, or somewhere in "
-"the cloud."
+"This is where the real serialization/deserialization will happen, "
+"especially in ``ndarray_to_sparse_bytes`` for serialization and "
+"``sparse_bytes_to_ndarray`` for deserialization."
msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:91 -msgid "|11e95ac83a8548d8b3505b4663187d07|" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:525 +msgid "" +"Note that we imported the ``scipy.sparse`` library in order to convert " +"our arrays." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:117 -msgid "Central data collection" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:613 +msgid "Client-side" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:97 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:615 msgid "" -"Once all the data is collected in one place, we can finally use machine " -"learning algorithms to train our model on the data. This is the machine " -"learning approach that we've basically always relied on." +"To be able to serialize our ``ndarray``\\ s into sparse parameters, we " +"will just have to call our custom functions in our " +"``flwr.client.Client``." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:103 -msgid "|1dab2f3a23674abc8a6731f20fa10730|" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:617 +msgid "" +"Indeed, in ``get_parameters`` we need to serialize the parameters we got " +"from our network using our custom ``ndarrays_to_sparse_parameters`` " +"defined above." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:119 -msgid "Central model training" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:619 +msgid "" +"In ``fit``, we first need to deserialize the parameters coming from the " +"server using our custom ``sparse_parameters_to_ndarrays`` and then we " +"need to serialize our local results with " +"``ndarrays_to_sparse_parameters``." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:130 -msgid "Challenges of classical machine learning" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:621 +msgid "" +"In ``evaluate``, we will only need to deserialize the global parameters " +"with our custom function." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:132 -msgid "" -"The classic machine learning approach we've just seen can be used in some" -" cases. Great examples include categorizing holiday photos, or analyzing " -"web traffic. Cases, where all the data is naturally available on a " -"centralized server." +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:725 +msgid "Server-side" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:138 -msgid "|7f0ee162da38450788493a21627306f7|" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:727 +msgid "" +"For this example, we will just use ``FedAvg`` as a strategy. To change " +"the serialization and deserialization here, we only need to reimplement " +"the ``evaluate`` and ``aggregate_fit`` functions of ``FedAvg``. The other" +" functions of the strategy will be inherited from the super class " +"``FedAvg``." 
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:173
-msgid "Centralized possible"
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:729
+msgid "As you can see, only one line has changed in ``evaluate``:"
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:144
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:735
 msgid ""
-"But the approach can not be used in many other cases. Cases, where the "
-"data is not available on a centralized server, or cases where the data "
-"available on one server is not enough to train a good model."
+"And for ``aggregate_fit``, we will first deserialize every result we "
+"received:"
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:150
-msgid "|296a1fb72c514b23b3d8905ff0ff98c6|"
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:744
+msgid "And then serialize the aggregated result:"
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:175
-msgid "Centralized impossible"
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:903
+msgid "We can now run our custom serialization example!"
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:156
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:934
 msgid ""
-"There are many reasons why the classic centralized machine learning "
-"approach does not work for a large number of highly important real-world "
-"use cases. Those reasons include:"
+"In this part of the tutorial, we've seen how we can build clients by "
+"subclassing either ``NumPyClient`` or ``Client``. ``NumPyClient`` is a "
+"convenience abstraction that makes it easier to work with machine "
+"learning libraries that have good NumPy interoperability. ``Client`` is a"
+" more flexible abstraction that allows us to do things that are not "
+"possible in ``NumPyClient``. In order to do so, it requires us to handle "
+"parameter serialization and deserialization ourselves."
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:158
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:952
 msgid ""
-"**Regulations**: GDPR (Europe), CCPA (California), PIPEDA (Canada), LGPD "
-"(Brazil), PDPL (Argentina), KVKK (Turkey), POPI (South Africa), FSS "
-"(Russia), CDPR (China), PDPB (India), PIPA (Korea), APPI (Japan), PDP "
-"(Indonesia), PDPA (Singapore), APP (Australia), and other regulations "
-"protect sensitive data from being moved. In fact, those regulations "
-"sometimes even prevent single organizations from combining their own "
-"users' data for artificial intelligence training because those users live"
-" in different parts of the world, and their data is governed by different"
-" data protection regulations."
+"This is the final part of the Flower tutorial (for now!), "
+"congratulations! You're now well equipped to understand the rest of the "
+"documentation. There are many topics we didn't cover in the tutorial, we "
+"recommend the following resources:"
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:160
-msgid ""
-"**User preference**: In addition to regulation, there are use cases where"
-" users just expect that no data leaves their device, ever. If you type "
-"your passwords and credit card info into the digital keyboard of your "
-"phone, you don't expect those passwords to end up on the server of the "
-"company that developed that keyboard, do you? 
In fact, that use case was " -"the reason federated learning was invented in the first place." +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:954 +msgid "`Read Flower Docs `__" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:161 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:955 msgid "" -"**Data volume**: Some sensors, like cameras, produce such a high data " -"volume that it is neither feasible nor economic to collect all the data " -"(due to, for example, bandwidth or communication efficiency). Think about" -" a national rail service with hundreds of train stations across the " -"country. If each of these train stations is outfitted with a number of " -"security cameras, the volume of raw on-device data they produce requires " -"incredibly powerful and exceedingly expensive infrastructure to process " -"and store. And most of the data isn't even useful." -msgstr "" - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:164 -msgid "Examples where centralized machine learning does not work include:" +"`Check out Flower Code Examples " +"`__" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:166 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:956 msgid "" -"Sensitive healthcare records from multiple hospitals to train cancer " -"detection models" +"`Use Flower Baselines for your research " +"`__" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:167 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:957 msgid "" -"Financial information from different organizations to detect financial " -"fraud" +"`Watch Flower Summit 2023 videos `__" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:168 -msgid "Location data from your electric car to make better range prediction" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:9 +msgid "Get started with Flower" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:169 -msgid "End-to-end encrypted messages to train better auto-complete models" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:11 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:11 +msgid "Welcome to the Flower federated learning tutorial!" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:171 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:13 msgid "" -"The popularity of privacy-enhancing systems like the `Brave " -"`__ browser or the `Signal `__ " -"messenger shows that users care about privacy. In fact, they choose the " -"privacy-enhancing version over other alternatives, if such an alternative" -" exists. But what can we do to apply machine learning and data science to" -" these cases to utilize private data? After all, these are all areas that" -" would benefit significantly from recent advances in AI." +"In this notebook, we'll build a federated learning system using Flower, " +"`Flower Datasets `__ and PyTorch. In " +"part 1, we use PyTorch for the model training pipeline and data loading. " +"In part 2, we continue to federate the PyTorch-based pipeline using " +"Flower." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:186 -msgid "Federated learning" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:17 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:19 +msgid "Let's get started!" 
msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:188 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:31 msgid "" -"Federated learning simply reverses this approach. It enables machine " -"learning on distributed data by moving the training to the data, instead " -"of moving the data to the training. Here's the single-sentence " -"explanation:" -msgstr "" - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:190 -msgid "Central machine learning: move the data to the computation" -msgstr "" - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:191 -msgid "Federated (machine) learning: move the computation to the data" +"Before we begin with any actual code, let's make sure that we have " +"everything we need." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:193 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:45 msgid "" -"By doing so, it enables us to use machine learning (and other data " -"science approaches) in areas where it wasn't possible before. We can now " -"train excellent medical AI models by enabling different hospitals to work" -" together. We can solve financial fraud by training AI models on the data" -" of different financial institutions. We can build novel privacy-" -"enhancing applications (such as secure messaging) that have better built-" -"in AI than their non-privacy-enhancing alternatives. And those are just a" -" few of the examples that come to mind. As we deploy federated learning, " -"we discover more and more areas that can suddenly be reinvented because " -"they now have access to vast amounts of previously inaccessible data." +"Next, we install the necessary packages for PyTorch (``torch`` and " +"``torchvision``), Flower Datasets (``flwr-datasets``) and Flower " +"(``flwr``):" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:196 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:105 msgid "" -"So how does federated learning work, exactly? Let's start with an " -"intuitive explanation." +"It is possible to switch to a runtime that has GPU acceleration enabled " +"(on Google Colab: ``Runtime > Change runtime type > Hardware accelerator:" +" GPU > Save``). Note, however, that Google Colab is not always able to " +"offer GPU acceleration. If you see an error related to GPU availability " +"in one of the following sections, consider switching back to CPU-based " +"execution by setting ``DEVICE = torch.device(\"cpu\")``. If the runtime " +"has GPU acceleration enabled, you should see the output ``Training on " +"cuda``, otherwise it'll say ``Training on cpu``." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:199 -msgid "Federated learning in five steps" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:118 +msgid "Loading the data" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:202 -msgid "Step 0: Initialize global model" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:120 +msgid "" +"Federated learning can be applied to many different types of tasks across" +" different domains. In this tutorial, we introduce federated learning by " +"training a simple convolutional neural network (CNN) on the popular " +"CIFAR-10 dataset. 
CIFAR-10 can be used to train image classifiers that "
+"distinguish between images from ten different classes: 'airplane', "
+"'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', and "
+"'truck'."
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:204
+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:131
 msgid ""
-"We start by initializing the model on the server. This is exactly the "
-"same in classic centralized learning: we initialize the model parameters,"
-" either randomly or from a previously saved checkpoint."
+"We simulate having multiple datasets from multiple organizations (also "
+"called the \"cross-silo\" setting in federated learning) by splitting the"
+" original CIFAR-10 dataset into multiple partitions. Each partition will "
+"represent the data from a single organization. We're doing this purely "
+"for experimentation purposes, in the real world there's no need for data "
+"splitting because each organization already has their own data (so the "
+"data is naturally partitioned)."
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:210
-msgid "|5b1408eec0d746cdb91162a9107b6089|"
+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:133
+msgid ""
+"Each organization will act as a client in the federated learning system. "
+"So having ten organizations participate in a federation means having ten "
+"clients connected to the federated learning server."
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:307
-msgid "Initialize global model"
+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:144
+msgid ""
+"Let's now create the Federated Dataset abstraction from ``flwr-"
+"datasets`` that partitions CIFAR-10. We will create a small training "
+"and test set for each edge device and wrap each of them into a PyTorch "
+"``DataLoader``:"
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:217
+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:198
 msgid ""
-"Step 1: Send model to a number of connected organizations/devices (client"
-" nodes)"
+"We now have a list of ten training sets and ten validation sets "
+"(``trainloaders`` and ``valloaders``) representing the data of ten "
+"different organizations. Each ``trainloader``/``valloader`` pair contains"
+" 4000 training examples and 1000 validation examples. There's also a "
+"single ``testloader`` (we did not split the test set). Again, this is "
+"only necessary for building research or educational systems, actual "
+"federated learning systems have their data naturally distributed across "
+"multiple partitions."
 msgstr ""
 
-#: ../../source/tutorial-series-what-is-federated-learning.ipynb:219
+#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:201
 msgid ""
-"Next, we send the parameters of the global model to the connected client "
-"nodes (think: edge devices like smartphones or servers belonging to "
-"organizations). This is to ensure that each participating node starts "
-"their local training using the same model parameters. We often use only a"
-" few of the connected nodes instead of all nodes. The reason for this is "
-"that selecting more and more client nodes has diminishing returns."
+"Let's take a look at the first batch of images and labels in the first " +"training set (i.e., ``trainloaders[0]``) before we move on:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:225 -msgid "|aef19f4b122c4e8d9f4c57f99bcd5dd2|" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:240 +msgid "" +"The output above shows a random batch of images from the first " +"``trainloader`` in our list of ten ``trainloaders``. It also prints the " +"labels associated with each image (i.e., one of the ten possible labels " +"we've seen above). If you run the cell again, you should see another " +"batch of images." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:309 -msgid "Send global model" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:252 +msgid "Step 1: Centralized Training with PyTorch" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:232 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:263 msgid "" -"Step 2: Train model locally on the data of each organization/device " -"(client node)" +"Next, we're going to use PyTorch to define a simple convolutional neural " +"network. This introduction assumes basic familiarity with PyTorch, so it " +"doesn't cover the PyTorch-related aspects in full detail. If you want to " +"dive deeper into PyTorch, we recommend `DEEP LEARNING WITH PYTORCH: A 60 " +"MINUTE BLITZ " +"`__." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:234 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:275 +msgid "Defining the model" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:277 msgid "" -"Now that all (selected) client nodes have the latest version of the " -"global model parameters, they start the local training. They use their " -"own local dataset to train their own local model. They don't train the " -"model until full convergence, but they only train for a little while. " -"This could be as little as one epoch on the local data, or even just a " -"few steps (mini-batches)." +"We use the simple CNN described in the `PyTorch tutorial " +"`__:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:240 -msgid "|2881a86d8fc54ba29d96b29fc2819f4a|" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:314 +msgid "Let's continue with the usual training and test functions:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:311 -msgid "Train on local data" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:374 +msgid "Training the model" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:247 -msgid "Step 3: Return model updates back to the server" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:376 +msgid "" +"We now have all the basic building blocks we need: a dataset, a model, a " +"training function, and a test function. Let's put them together to train " +"the model on the dataset of one of our organizations " +"(``trainloaders[0]``). 
This simulates the reality of most machine " +"learning projects today: each organization has their own data and trains " +"models only on this internal data:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:249 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:406 msgid "" -"After local training, each client node has a slightly different version " -"of the model parameters they originally received. The parameters are all " -"different because each client node has different examples in its local " -"dataset. The client nodes then send those model updates back to the " -"server. The model updates they send can either be the full model " -"parameters or just the gradients that were accumulated during local " -"training." +"Training the simple CNN on our CIFAR-10 split for 5 epochs should result " +"in a test set accuracy of about 41%, which is not good, but at the same " +"time, it doesn't really matter for the purposes of this tutorial. The " +"intent was just to show a simplistic centralized training pipeline that " +"sets the stage for what comes next - federated learning!" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:255 -msgid "|ec1fe880237247e0975f52766775ab84|" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:418 +msgid "Step 2: Federated Learning with Flower" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:313 -msgid "Send model updates" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:420 +msgid "" +"Step 1 demonstrated a simple centralized training pipeline. All data was " +"in one place (i.e., a single ``trainloader`` and a single ``valloader``)." +" Next, we'll simulate a situation where we have multiple datasets in " +"multiple organizations and where we train a model over these " +"organizations using federated learning." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:262 -msgid "Step 4: Aggregate model updates into a new global model" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:432 +msgid "Updating model parameters" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:264 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:434 msgid "" -"The server receives model updates from the selected client nodes. If it " -"selected 100 client nodes, it now has 100 slightly different versions of " -"the original global model, each trained on the local data of one client. " -"But didn't we want to have one model that contains the learnings from the" -" data of all 100 client nodes?" +"In federated learning, the server sends the global model parameters to " +"the client, and the client updates the local model with the parameters " +"received from the server. It then trains the model on the local data " +"(which changes the model parameters locally) and sends the " +"updated/changed model parameters back to the server (or, alternatively, " +"it sends just the gradients back to the server, not the full model " +"parameters)." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:266 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:436 msgid "" -"In order to get one single model, we have to combine all the model " -"updates we received from the client nodes. This process is called " -"*aggregation*, and there are many different ways to do it. 
The most basic" -" way to do it is called *Federated Averaging* (`McMahan et al., 2016 " -"`__), often abbreviated as *FedAvg*. " -"*FedAvg* takes the 100 model updates and, as the name suggests, averages " -"them. To be more precise, it takes the *weighted average* of the model " -"updates, weighted by the number of examples each client used for " -"training. The weighting is important to make sure that each data example " -"has the same \"influence\" on the resulting global model. If one client " -"has 10 examples, and another client has 100 examples, then - without " -"weighting - each of the 10 examples would influence the global model ten " -"times as much as each of the 100 examples." +"We need two helper functions to update the local model with parameters " +"received from the server and to get the updated model parameters from the" +" local model: ``set_parameters`` and ``get_parameters``. The following " +"two functions do just that for the PyTorch model above." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:273 -msgid "|9fdf048ed58d4467b2718cdf4aaf1ec3|" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:438 +msgid "" +"The details of how this works are not really important here (feel free to" +" consult the PyTorch documentation if you want to learn more). In " +"essence, we use ``state_dict`` to access PyTorch model parameter tensors." +" The parameter tensors are then converted to/from a list of NumPy " +"ndarray's (which Flower knows how to serialize/deserialize):" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:315 -msgid "Aggregate model updates" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:466 +msgid "Implementing a Flower client" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:280 -msgid "Step 5: Repeat steps 1 to 4 until the model converges" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:468 +msgid "" +"With that out of the way, let's move on to the interesting part. " +"Federated learning systems consist of a server and multiple clients. In " +"Flower, we create clients by implementing subclasses of " +"``flwr.client.Client`` or ``flwr.client.NumPyClient``. We use " +"``NumPyClient`` in this tutorial because it is easier to implement and " +"requires us to write less boilerplate." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:282 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:470 msgid "" -"Steps 1 to 4 are what we call a single round of federated learning. The " -"global model parameters get sent to the participating client nodes (step " -"1), the client nodes train on their local data (step 2), they send their " -"updated models to the server (step 3), and the server then aggregates the" -" model updates to get a new version of the global model (step 4)." +"To implement the Flower client, we create a subclass of " +"``flwr.client.NumPyClient`` and implement the three methods " +"``get_parameters``, ``fit``, and ``evaluate``:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:284 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:472 +msgid "``get_parameters``: Return the current local model parameters" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:473 msgid "" -"During a single round, each client node that participates in that " -"iteration only trains for a little while. 
This means that after the " -"aggregation step (step 4), we have a model that has been trained on all " -"the data of all participating client nodes, but only for a little while. " -"We then have to repeat this training process over and over again to " -"eventually arrive at a fully trained model that performs well across the " -"data of all client nodes." +"``fit``: Receive model parameters from the server, train the model " +"parameters on the local data, and return the (updated) model parameters " +"to the server" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:289 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:474 msgid "" -"Congratulations, you now understand the basics of federated learning. " -"There's a lot more to discuss, of course, but that was federated learning" -" in a nutshell. In later parts of this tutorial, we will go into more " -"detail. Interesting questions include: How can we select the best client " -"nodes that should participate in the next round? What's the best way to " -"aggregate model updates? How can we handle failing client nodes " -"(stragglers)?" +"``evaluate``: Receive model parameters from the server, evaluate the " +"model parameters on the local data, and return the evaluation result to " +"the server" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:294 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:476 msgid "" -"Just like we can train a model on the decentralized data of different " -"client nodes, we can also evaluate the model on that data to receive " -"valuable metrics. This is called federated evaluation, sometimes " -"abbreviated as FE. In fact, federated evaluation is an integral part of " -"most federated learning systems." +"We mentioned that our clients will use the previously defined PyTorch " +"components for model training and evaluation. Let's see a simple Flower " +"client implementation that brings everything together:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:297 -msgid "Federated analytics" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:513 +msgid "" +"Our class ``FlowerClient`` defines how local training/evaluation will be " +"performed and allows Flower to call the local training/evaluation through" +" ``fit`` and ``evaluate``. Each instance of ``FlowerClient`` represents a" +" *single client* in our federated learning system. Federated learning " +"systems have multiple clients (otherwise, there's not much to federate), " +"so each client will be represented by its own instance of " +"``FlowerClient``. If we have, for example, three clients in our workload," +" then we'd have three instances of ``FlowerClient``. Flower calls " +"``FlowerClient.fit`` on the respective instance when the server selects a" +" particular client for training (and ``FlowerClient.evaluate`` for " +"evaluation)." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:299 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:517 +msgid "Using the Virtual Client Engine" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:519 msgid "" -"In many cases, machine learning isn't necessary to derive value from " -"data. Data analysis can yield valuable insights, but again, there's often" -" not enough data to get a clear answer. What's the average age at which " -"people develop a certain type of health condition? 
Federated analytics " -"enables such queries over multiple client nodes. It is usually used in " -"conjunction with other privacy-enhancing technologies like secure " -"aggregation to prevent the server from seeing the results submitted by " -"individual client nodes." +"In this notebook, we want to simulate a federated learning system with 10" +" clients on a single machine. This means that the server and all 10 " +"clients will live on a single machine and share resources such as CPU, " +"GPU, and memory. Having 10 clients would mean having 10 instances of " +"``FlowerClient`` in memory. Doing this on a single machine can quickly " +"exhaust the available memory resources, even if only a subset of these " +"clients participates in a single round of federated learning." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:305 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:521 msgid "" -"Differential privacy (DP) is often mentioned in the context of Federated " -"Learning. It is a privacy-preserving method used when analyzing and " -"sharing statistical data, ensuring the privacy of individual " -"participants. DP achieves this by adding statistical noise to the model " -"updates, ensuring any individual participants’ information cannot be " -"distinguished or re-identified. This technique can be considered an " -"optimization that provides a quantifiable privacy protection measure." +"In addition to the regular capabilities where server and clients run on " +"multiple machines, Flower, therefore, provides special simulation " +"capabilities that create ``FlowerClient`` instances only when they are " +"actually necessary for training or evaluation. To enable the Flower " +"framework to create clients when necessary, we need to implement a " +"function called ``client_fn`` that creates a ``FlowerClient`` instance on" +" demand. Flower calls ``client_fn`` whenever it needs an instance of one " +"particular client to call ``fit`` or ``evaluate`` (those instances are " +"usually discarded after use, so they should not keep any local state). " +"Clients are identified by a client ID, or short ``cid``. The ``cid`` can " +"be used, for example, to load different local data partitions for " +"different clients, as can be seen below:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:326 -msgid "Flower" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:556 +msgid "Starting the training" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:328 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:558 msgid "" -"Federated learning, federated evaluation, and federated analytics require" -" infrastructure to move machine learning models back and forth, train and" -" evaluate them on local data, and then aggregate the updated models. " -"Flower provides the infrastructure to do exactly that in an easy, " -"scalable, and secure way. In short, Flower presents a unified approach to" -" federated learning, analytics, and evaluation. It allows the user to " -"federate any workload, any ML framework, and any programming language." +"We now have the class ``FlowerClient`` which defines client-side " +"training/evaluation and ``client_fn`` which allows Flower to create " +"``FlowerClient`` instances whenever it needs to call ``fit`` or " +"``evaluate`` on one particular client. The last step is to start the " +"actual simulation using ``flwr.simulation.start_simulation``." 
msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:334 -msgid "|ff726bc5505e432388ee2fdd6ef420b9|" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:560 +msgid "" +"The function ``start_simulation`` accepts a number of arguments, amongst " +"them the ``client_fn`` used to create ``FlowerClient`` instances, the " +"number of clients to simulate (``num_clients``), the number of federated " +"learning rounds (``num_rounds``), and the strategy. The strategy " +"encapsulates the federated learning approach/algorithm, for example, " +"*Federated Averaging* (FedAvg)." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:340 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:562 msgid "" -"Flower federated learning server and client nodes (car, scooter, personal" -" computer, roomba, and phone)" +"Flower has a number of built-in strategies, but we can also use our own " +"strategy implementations to customize nearly all aspects of the federated" +" learning approach. For this example, we use the built-in ``FedAvg`` " +"implementation and customize it using a few basic parameters. The last " +"step is the actual call to ``start_simulation`` which - you guessed it - " +"starts the simulation:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:353 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:608 +msgid "Behind the scenes" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:610 +msgid "So how does this work? How does Flower execute this simulation?" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:612 +#, python-format msgid "" -"Congratulations, you just learned the basics of federated learning and " -"how it relates to the classic (centralized) machine learning!" +"When we call ``start_simulation``, we tell Flower that there are 10 " +"clients (``num_clients=10``). Flower then goes ahead an asks the " +"``FedAvg`` strategy to select clients. ``FedAvg`` knows that it should " +"select 100% of the available clients (``fraction_fit=1.0``), so it goes " +"ahead and selects 10 random clients (i.e., 100% of 10)." +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:614 +msgid "" +"Flower then asks the selected 10 clients to train the model. When the " +"server receives the model parameter updates from the clients, it hands " +"those updates over to the strategy (*FedAvg*) for aggregation. The " +"strategy aggregates those updates and returns the new global model, which" +" then gets used in the next round of federated learning." +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:626 +msgid "Where's the accuracy?" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:628 +msgid "" +"You may have noticed that all metrics except for ``losses_distributed`` " +"are empty. Where did the ``{\"accuracy\": float(accuracy)}`` go?" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:630 +msgid "" +"Flower can automatically aggregate losses returned by individual clients," +" but it cannot do the same for metrics in the generic metrics dictionary " +"(the one with the ``accuracy`` key). 
Metrics dictionaries can contain " +"very different kinds of metrics and even key/value pairs that are not " +"metrics at all, so the framework does not (and can not) know how to " +"handle these automatically." +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:632 +msgid "" +"As users, we need to tell the framework how to handle/aggregate these " +"custom metrics, and we do so by passing metric aggregation functions to " +"the strategy. The strategy will then call these functions whenever it " +"receives fit or evaluate metrics from clients. The two possible functions" +" are ``fit_metrics_aggregation_fn`` and " +"``evaluate_metrics_aggregation_fn``." +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:634 +msgid "" +"Let's create a simple weighted averaging function to aggregate the " +"``accuracy`` metric we return from ``evaluate``:" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:660 +msgid "" +"The only thing left to do is to tell the strategy to call this function " +"whenever it receives evaluation metric dictionaries from the clients:" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:697 +msgid "" +"We now have a full system that performs federated training and federated " +"evaluation. It uses the ``weighted_average`` function to aggregate custom" +" evaluation metrics and calculates a single ``accuracy`` metric across " +"all clients on the server side." +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:699 +msgid "" +"The other two categories of metrics (``losses_centralized`` and " +"``metrics_centralized``) are still empty because they only apply when " +"centralized evaluation is being used. Part two of the Flower tutorial " +"will cover centralized evaluation." +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:711 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:351 +msgid "Final remarks" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:713 +msgid "" +"Congratulations, you just trained a convolutional neural network, " +"federated over 10 clients! With that, you understand the basics of " +"federated learning with Flower. The same approach you've seen can be used" +" with other machine learning frameworks (not just PyTorch) and tasks (not" +" just CIFAR-10 images classification), for example NLP with Hugging Face " +"Transformers or speech with SpeechBrain." +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:715 +msgid "" +"In the next notebook, we're going to cover some more advanced concepts. " +"Want to customize your strategy? Initialize parameters on the server " +"side? Or evaluate the aggregated model on the server side? We'll cover " +"all this and more in the next tutorial." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:355 -msgid "" -"In the next part of this tutorial, we are going to build a first " -"federated learning system with Flower." -msgstr "" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:733 +msgid "" +"The `Flower Federated Learning Tutorial - Part 2 " +"`__ goes into more depth about strategies and all " +"the advanced things you can build with them." 
+msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:9 +msgid "Use a federated learning strategy" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:11 +msgid "" +"Welcome to the next part of the federated learning tutorial. In previous " +"parts of this tutorial, we introduced federated learning with PyTorch and" +" Flower (`part 1 `__)." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:13 +msgid "" +"In this notebook, we'll begin to customize the federated learning system " +"we built in the introductory notebook (again, using `Flower " +"`__ and `PyTorch `__)." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:17 +msgid "Let's move beyond FedAvg with Flower strategies!" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:309 +msgid "Strategy customization" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:311 +msgid "" +"So far, everything should look familiar if you've worked through the " +"introductory notebook. With that, we're ready to introduce a number of " +"new features." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:323 +msgid "Server-side parameter **initialization**" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:325 +msgid "" +"Flower, by default, initializes the global model by asking one random " +"client for the initial parameters. In many cases, we want more control " +"over parameter initialization though. Flower therefore allows you to " +"directly pass the initial parameters to the Strategy:" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:370 +msgid "" +"Passing ``initial_parameters`` to the ``FedAvg`` strategy prevents Flower" +" from asking one of the clients for the initial parameters. If we look " +"closely, we can see that the logs do not show any calls to the " +"``FlowerClient.get_parameters`` method." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:382 +msgid "Starting with a customized strategy" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:384 +msgid "" +"We've seen the function ``start_simulation`` before. It accepts a number " +"of arguments, amongst them the ``client_fn`` used to create " +"``FlowerClient`` instances, the number of clients to simulate " +"``num_clients``, the number of rounds ``num_rounds``, and the strategy." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:386 +msgid "" +"The strategy encapsulates the federated learning approach/algorithm, for " +"example, ``FedAvg`` or ``FedAdagrad``. Let's try to use a different " +"strategy this time:" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:424 +msgid "Server-side parameter **evaluation**" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:426 +msgid "" +"Flower can evaluate the aggregated model on the server-side or on the " +"client-side. Client-side and server-side evaluation are similar in some " +"ways, but different in others." 
+msgstr ""
+
+#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:428
+msgid ""
+"**Centralized Evaluation** (or *server-side evaluation*) is conceptually "
+"simple: it works the same way that evaluation in centralized machine "
+"learning does. If there is a server-side dataset that can be used for "
+"evaluation purposes, then that's great. We can evaluate the newly "
+"aggregated model after each round of training without having to send the "
+"model to clients. We're also fortunate in the sense that our entire "
+"evaluation dataset is available at all times."
+msgstr ""
+
+#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:430
+msgid ""
+"**Federated Evaluation** (or *client-side evaluation*) is more complex, "
+"but also more powerful: it doesn't require a centralized dataset and "
+"allows us to evaluate models over a larger set of data, which often "
+"yields more realistic evaluation results. In fact, many scenarios require"
+" us to use **Federated Evaluation** if we want to get representative "
+"evaluation results at all. But this power comes at a cost: once we start "
+"to evaluate on the client side, we should be aware that our evaluation "
+"dataset can change over consecutive rounds of learning if those clients "
+"are not always available. Moreover, the dataset held by each client can "
+"also change over consecutive rounds. This can lead to evaluation results "
+"that are not stable, so even if we would not change the model, we'd see "
+"our evaluation results fluctuate over consecutive rounds."
+msgstr ""
+
+#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:433
+msgid ""
+"We've seen how federated evaluation works on the client side (i.e., by "
+"implementing the ``evaluate`` method in ``FlowerClient``). Now let's see "
+"how we can evaluate aggregated model parameters on the server-side:"
+msgstr ""
+
+#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:490
+msgid "Sending/receiving arbitrary values to/from clients"
+msgstr ""
+
+#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:492
+msgid ""
+"In some situations, we want to configure client-side execution (training,"
+" evaluation) from the server-side. One example for that is the server "
+"asking the clients to train for a certain number of local epochs. Flower "
+"provides a way to send configuration values from the server to the "
+"clients using a dictionary. Let's look at an example where the clients "
+"receive values from the server through the ``config`` parameter in "
+"``fit`` (``config`` is also available in ``evaluate``). The ``fit`` "
+"method receives the configuration dictionary through the ``config`` "
+"parameter and can then read values from this dictionary. In this example,"
+" it reads ``server_round`` and ``local_epochs`` and uses those values to "
+"improve the logging and configure the number of local training epochs:"
+msgstr ""
+
+#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:546
+msgid ""
+"So how can we send this config dictionary from server to clients? The "
+"built-in Flower Strategies provide a way to do this, and it works similarly"
+" to the way server-side evaluation works. 
We provide a function to the " +"strategy, and the strategy calls this function for every round of " +"federated learning:" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:576 +msgid "" +"Next, we'll just pass this function to the FedAvg strategy before " +"starting the simulation:" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:613 +msgid "" +"As we can see, the client logs now include the current round of federated" +" learning (which they read from the ``config`` dictionary). We can also " +"configure local training to run for one epoch during the first and second" +" round of federated learning, and then for two epochs during the third " +"round." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:615 +msgid "" +"Clients can also return arbitrary values to the server. To do so, they " +"return a dictionary from ``fit`` and/or ``evaluate``. We have seen and " +"used this concept throughout this notebook without mentioning it " +"explicitly: our ``FlowerClient`` returns a dictionary containing a custom" +" key/value pair as the third return value in ``evaluate``." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:627 +msgid "Scaling federated learning" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:629 +msgid "" +"As a last step in this notebook, let's see how we can use Flower to " +"experiment with a large number of clients." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:651 +#, python-format +msgid "" +"We now have 1000 partitions, each holding 45 training and 5 validation " +"examples. Given that the number of training examples on each client is " +"quite small, we should probably train the model a bit longer, so we " +"configure the clients to perform 3 local training epochs. We should also " +"adjust the fraction of clients selected for training during each round " +"(we don't want all 1000 clients participating in every round), so we " +"adjust ``fraction_fit`` to ``0.05``, which means that only 5% of " +"available clients (so 50 clients) will be selected for training each " +"round:" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:699 +msgid "" +"In this notebook, we've seen how we can gradually enhance our system by " +"customizing the strategy, initializing parameters on the server side, " +"choosing a different strategy, and evaluating models on the server-side. " +"That's quite a bit of flexibility with so little code, right?" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:701 +msgid "" +"In the later sections, we've seen how we can communicate arbitrary values" +" between server and clients to fully customize client-side execution. " +"With that capability, we built a large-scale Federated Learning " +"simulation using the Flower Virtual Client Engine and ran an experiment " +"involving 1000 clients in the same workload - all in a Jupyter Notebook!" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:719 +msgid "" +"The `Flower Federated Learning Tutorial - Part 3 " +"`__ shows how to build a fully custom ``Strategy`` from " +"scratch." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:9 +msgid "What is Federated Learning?" 
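+# Tying together the config-dictionary and client-sampling points from the
+# strategy notebook above, a rough sketch; the function name ``fit_config``
+# and the exact values are illustrative, not prescribed by Flower:
+#
+#     import flwr as fl
+#
+#     def fit_config(server_round: int):
+#         """Config dict sent to each selected client's fit() for this round."""
+#         return {
+#             "server_round": server_round,
+#             "local_epochs": 1 if server_round < 3 else 2,
+#         }
+#
+#     strategy = fl.server.strategy.FedAvg(
+#         fraction_fit=0.05,            # sample 5% of the 1000 clients per round
+#         min_available_clients=1000,
+#         on_fit_config_fn=fit_config,  # called by the strategy once per round
+#     )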
+msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:13 +msgid "" +"In this tutorial, you will learn what federated learning is, build your " +"first system in Flower, and gradually extend it. If you work through all " +"parts of the tutorial, you will be able to build advanced federated " +"learning systems that approach the current state of the art in the field." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:15 +msgid "" +"🧑‍🏫 This tutorial starts at zero and expects no familiarity with " +"federated learning. Only a basic understanding of data science and Python" +" programming is assumed." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:17 +msgid "" +"`Star Flower on GitHub `__ ⭐️ and join " +"the open-source Flower community on Slack to connect, ask questions, and " +"get help: `Join Slack `__ 🌼 We'd love to " +"hear from you in the ``#introductions`` channel! And if anything is " +"unclear, head over to the ``#questions`` channel." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:31 +msgid "Classic machine learning" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:33 +msgid "" +"Before we begin to discuss federated learning, let us quickly recap how " +"most machine learning works today." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:35 +msgid "" +"In machine learning, we have a model, and we have data. The model could " +"be a neural network (as depicted here), or something else, like classical" +" linear regression." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:41 +msgid "|d8bf04f23d9b46d8a23cc6f4887d7873|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:109 +msgid "Model and data" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:47 +msgid "" +"We train the model using the data to perform a useful task. A task could " +"be to detect objects in images, transcribe an audio recording, or play a " +"game like Go." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:53 +msgid "|5aa1711387d74d0f8b9c499e1a51627e|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:111 +msgid "Train model using data" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:59 +msgid "" +"Now, in practice, the training data we work with doesn't originate on the" +" machine we train the model on. It gets created somewhere else." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:61 +msgid "" +"It originates on a smartphone by the user interacting with an app, a car " +"collecting sensor data, a laptop receiving input via the keyboard, or a " +"smart speaker listening to someone trying to sing a song." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:67 +msgid "|2bc8e069228d4873804061ff4a95048c|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:113 +msgid "Data on a phone" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:73 +msgid "" +"What's also important to mention, this \"somewhere else\" is usually not " +"just one place, it's many places. It could be several devices all running" +" the same app. But it could also be several organizations, all generating" +" data for the same task." 
+msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:79 +msgid "|c258488766324dc9a6807f0e7c4fd5f4|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:115 +msgid "Data is on many devices" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:85 +msgid "" +"So to use machine learning, or any kind of data analysis, the approach " +"that has been used in the past was to collect all data on a central " +"server. This server can be somewhere in a data center, or somewhere in " +"the cloud." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:91 +msgid "|d5f962c3f4ec48529efda980868c14b0|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:117 +msgid "Central data collection" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:97 +msgid "" +"Once all the data is collected in one place, we can finally use machine " +"learning algorithms to train our model on the data. This is the machine " +"learning approach that we've basically always relied on." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:103 +msgid "|a5eccea18d4c43a68b54b65043cabef8|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:119 +msgid "Central model training" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:130 +msgid "Challenges of classical machine learning" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:132 +msgid "" +"The classic machine learning approach we've just seen can be used in some" +" cases. Great examples include categorizing holiday photos, or analyzing " +"web traffic. Cases, where all the data is naturally available on a " +"centralized server." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:138 +msgid "|f17662f7df2d42f68cac70a1fdeda8a7|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:173 +msgid "Centralized possible" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:144 +msgid "" +"But the approach can not be used in many other cases. Cases, where the " +"data is not available on a centralized server, or cases where the data " +"available on one server is not enough to train a good model." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:150 +msgid "|241fc906441a4f038c625a19d30d01b2|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:175 +msgid "Centralized impossible" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:156 +msgid "" +"There are many reasons why the classic centralized machine learning " +"approach does not work for a large number of highly important real-world " +"use cases. Those reasons include:" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:158 +msgid "" +"**Regulations**: GDPR (Europe), CCPA (California), PIPEDA (Canada), LGPD " +"(Brazil), PDPL (Argentina), KVKK (Turkey), POPI (South Africa), FSS " +"(Russia), CDPR (China), PDPB (India), PIPA (Korea), APPI (Japan), PDP " +"(Indonesia), PDPA (Singapore), APP (Australia), and other regulations " +"protect sensitive data from being moved. 
In fact, those regulations " +"sometimes even prevent single organizations from combining their own " +"users' data for artificial intelligence training because those users live" +" in different parts of the world, and their data is governed by different" +" data protection regulations." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:160 +msgid "" +"**User preference**: In addition to regulation, there are use cases where" +" users just expect that no data leaves their device, ever. If you type " +"your passwords and credit card info into the digital keyboard of your " +"phone, you don't expect those passwords to end up on the server of the " +"company that developed that keyboard, do you? In fact, that use case was " +"the reason federated learning was invented in the first place." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:161 +msgid "" +"**Data volume**: Some sensors, like cameras, produce such a high data " +"volume that it is neither feasible nor economic to collect all the data " +"(due to, for example, bandwidth or communication efficiency). Think about" +" a national rail service with hundreds of train stations across the " +"country. If each of these train stations is outfitted with a number of " +"security cameras, the volume of raw on-device data they produce requires " +"incredibly powerful and exceedingly expensive infrastructure to process " +"and store. And most of the data isn't even useful." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:164 +msgid "Examples where centralized machine learning does not work include:" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:166 +msgid "" +"Sensitive healthcare records from multiple hospitals to train cancer " +"detection models" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:167 +msgid "" +"Financial information from different organizations to detect financial " +"fraud" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:168 +msgid "Location data from your electric car to make better range prediction" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:169 +msgid "End-to-end encrypted messages to train better auto-complete models" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:171 +msgid "" +"The popularity of privacy-enhancing systems like the `Brave " +"`__ browser or the `Signal `__ " +"messenger shows that users care about privacy. In fact, they choose the " +"privacy-enhancing version over other alternatives, if such an alternative" +" exists. But what can we do to apply machine learning and data science to" +" these cases to utilize private data? After all, these are all areas that" +" would benefit significantly from recent advances in AI." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:186 +msgid "Federated learning" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:188 +msgid "" +"Federated learning simply reverses this approach. It enables machine " +"learning on distributed data by moving the training to the data, instead " +"of moving the data to the training. 
Here's the single-sentence " +"explanation:" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:190 +msgid "Central machine learning: move the data to the computation" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:191 +msgid "Federated (machine) learning: move the computation to the data" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:193 +msgid "" +"By doing so, it enables us to use machine learning (and other data " +"science approaches) in areas where it wasn't possible before. We can now " +"train excellent medical AI models by enabling different hospitals to work" +" together. We can solve financial fraud by training AI models on the data" +" of different financial institutions. We can build novel privacy-" +"enhancing applications (such as secure messaging) that have better built-" +"in AI than their non-privacy-enhancing alternatives. And those are just a" +" few of the examples that come to mind. As we deploy federated learning, " +"we discover more and more areas that can suddenly be reinvented because " +"they now have access to vast amounts of previously inaccessible data." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:196 +msgid "" +"So how does federated learning work, exactly? Let's start with an " +"intuitive explanation." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:199 +msgid "Federated learning in five steps" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:202 +msgid "Step 0: Initialize global model" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:204 +msgid "" +"We start by initializing the model on the server. This is exactly the " +"same in classic centralized learning: we initialize the model parameters," +" either randomly or from a previously saved checkpoint." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:210 +msgid "|0aa5aa05810b44b6a835cecce28f3137|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:307 +msgid "Initialize global model" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:217 +msgid "" +"Step 1: Send model to a number of connected organizations/devices (client" +" nodes)" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:219 +msgid "" +"Next, we send the parameters of the global model to the connected client " +"nodes (think: edge devices like smartphones or servers belonging to " +"organizations). This is to ensure that each participating node starts " +"their local training using the same model parameters. We often use only a" +" few of the connected nodes instead of all nodes. The reason for this is " +"that selecting more and more client nodes has diminishing returns." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:225 +msgid "|c742940dd4bf4de09d8d0d5e8d179638|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:309 +msgid "Send global model" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:232 +msgid "" +"Step 2: Train model locally on the data of each organization/device " +"(client node)" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:234 +msgid "" +"Now that all (selected) client nodes have the latest version of the " +"global model parameters, they start the local training. 
They use their " +"own local dataset to train their own local model. They don't train the " +"model until full convergence, but they only train for a little while. " +"This could be as little as one epoch on the local data, or even just a " +"few steps (mini-batches)." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:240 +msgid "|1f169ab4601a47e1a226f1628f4ebddb|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:311 +msgid "Train on local data" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:247 +msgid "Step 3: Return model updates back to the server" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:249 +msgid "" +"After local training, each client node has a slightly different version " +"of the model parameters they originally received. The parameters are all " +"different because each client node has different examples in its local " +"dataset. The client nodes then send those model updates back to the " +"server. The model updates they send can either be the full model " +"parameters or just the gradients that were accumulated during local " +"training." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:255 +msgid "|12cfa9cde14440ecb8c8f6c1d7185bec|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:313 +msgid "Send model updates" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:262 +msgid "Step 4: Aggregate model updates into a new global model" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:264 +msgid "" +"The server receives model updates from the selected client nodes. If it " +"selected 100 client nodes, it now has 100 slightly different versions of " +"the original global model, each trained on the local data of one client. " +"But didn't we want to have one model that contains the learnings from the" +" data of all 100 client nodes?" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:266 +msgid "" +"In order to get one single model, we have to combine all the model " +"updates we received from the client nodes. This process is called " +"*aggregation*, and there are many different ways to do it. The most basic" +" way to do it is called *Federated Averaging* (`McMahan et al., 2016 " +"`__), often abbreviated as *FedAvg*. " +"*FedAvg* takes the 100 model updates and, as the name suggests, averages " +"them. To be more precise, it takes the *weighted average* of the model " +"updates, weighted by the number of examples each client used for " +"training. The weighting is important to make sure that each data example " +"has the same \"influence\" on the resulting global model. If one client " +"has 10 examples, and another client has 100 examples, then - without " +"weighting - each of the 10 examples would influence the global model ten " +"times as much as each of the 100 examples." 
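+# The weighting argument above can be checked with a toy calculation (plain
+# numpy, not Flower code): with 10 examples behind one update and 100 behind
+# the other, every example ends up with the same influence on the average.
+#
+#     import numpy as np
+#
+#     update_a, n_a = np.array([1.0, 1.0]), 10    # client with 10 examples
+#     update_b, n_b = np.array([3.0, 3.0]), 100   # client with 100 examples
+#
+#     fedavg = (n_a * update_a + n_b * update_b) / (n_a + n_b)
+#     # -> array([2.818, 2.818]): much closer to update_b, because the
+#     #    weighted average gives each example, not each client, equal weight.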
+msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:273 +msgid "|72939caf6e294b0986fee6dde96614d7|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:315 +msgid "Aggregate model updates" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:280 +msgid "Step 5: Repeat steps 1 to 4 until the model converges" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:282 +msgid "" +"Steps 1 to 4 are what we call a single round of federated learning. The " +"global model parameters get sent to the participating client nodes (step " +"1), the client nodes train on their local data (step 2), they send their " +"updated models to the server (step 3), and the server then aggregates the" +" model updates to get a new version of the global model (step 4)." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:284 +msgid "" +"During a single round, each client node that participates in that " +"iteration only trains for a little while. This means that after the " +"aggregation step (step 4), we have a model that has been trained on all " +"the data of all participating client nodes, but only for a little while. " +"We then have to repeat this training process over and over again to " +"eventually arrive at a fully trained model that performs well across the " +"data of all client nodes." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:289 +msgid "" +"Congratulations, you now understand the basics of federated learning. " +"There's a lot more to discuss, of course, but that was federated learning" +" in a nutshell. In later parts of this tutorial, we will go into more " +"detail. Interesting questions include: How can we select the best client " +"nodes that should participate in the next round? What's the best way to " +"aggregate model updates? How can we handle failing client nodes " +"(stragglers)?" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:294 +msgid "" +"Just like we can train a model on the decentralized data of different " +"client nodes, we can also evaluate the model on that data to receive " +"valuable metrics. This is called federated evaluation, sometimes " +"abbreviated as FE. In fact, federated evaluation is an integral part of " +"most federated learning systems." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:297 +msgid "Federated analytics" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:299 +msgid "" +"In many cases, machine learning isn't necessary to derive value from " +"data. Data analysis can yield valuable insights, but again, there's often" +" not enough data to get a clear answer. What's the average age at which " +"people develop a certain type of health condition? Federated analytics " +"enables such queries over multiple client nodes. It is usually used in " +"conjunction with other privacy-enhancing technologies like secure " +"aggregation to prevent the server from seeing the results submitted by " +"individual client nodes." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:305 +msgid "" +"Differential privacy (DP) is often mentioned in the context of Federated " +"Learning. It is a privacy-preserving method used when analyzing and " +"sharing statistical data, ensuring the privacy of individual " +"participants. 
DP achieves this by adding statistical noise to the model " +"updates, ensuring any individual participants’ information cannot be " +"distinguished or re-identified. This technique can be considered an " +"optimization that provides a quantifiable privacy protection measure." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:326 +msgid "Flower" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:328 +msgid "" +"Federated learning, federated evaluation, and federated analytics require" +" infrastructure to move machine learning models back and forth, train and" +" evaluate them on local data, and then aggregate the updated models. " +"Flower provides the infrastructure to do exactly that in an easy, " +"scalable, and secure way. In short, Flower presents a unified approach to" +" federated learning, analytics, and evaluation. It allows the user to " +"federate any workload, any ML framework, and any programming language." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:334 +msgid "|83a8daee45da4a98b8d6f24ae098fc50|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:340 +msgid "" +"Flower federated learning server and client nodes (car, scooter, personal" +" computer, roomba, and phone)" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:353 +msgid "" +"Congratulations, you just learned the basics of federated learning and " +"how it relates to the classic (centralized) machine learning!" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:355 +msgid "" +"In the next part of this tutorial, we are going to build a first " +"federated learning system with Flower." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:373 +msgid "" +"The `Flower Federated Learning Tutorial - Part 1 " +"`__ shows how to build a simple federated learning system " +"with PyTorch and Flower." +msgstr "" + +#~ msgid "" +#~ "Configuring and setting up the " +#~ ":code:`Dockerfile` as well the configuration" +#~ " for the devcontainer can be a " +#~ "bit more involved. The good thing " +#~ "is you want have to do it. " +#~ "Usually it should be enough to " +#~ "install Docker on your system and " +#~ "ensure its available on your command " +#~ "line. Additionally, install the `VSCode " +#~ "Containers Extension `_." +#~ msgstr "" + +#~ msgid "" +#~ "``flwr = { path = " +#~ "\"../../dist/flwr-1.0.0-py3-none-any.whl\" }`` " +#~ "(without extras)" +#~ msgstr "" + +#~ msgid "" +#~ "``flwr = { path = " +#~ "\"../../dist/flwr-1.0.0-py3-none-any.whl\", extras =" +#~ " [\"simulation\"] }`` (with extras)" +#~ msgstr "" + +#~ msgid "Upload the whl (e.g., ``flwr-1.7.0-py3-none-any.whl``)" +#~ msgstr "" + +#~ msgid "" +#~ "Change ``!pip install -q 'flwr[simulation]'" +#~ " torch torchvision matplotlib`` to ``!pip" +#~ " install -q 'flwr-1.7.0-py3-none-" +#~ "any.whl[simulation]' torch torchvision matplotlib``" +#~ msgstr "" + +#~ msgid "Before the release" +#~ msgstr "" + +#~ msgid "" +#~ "Update the changelog (``changelog.md``) with" +#~ " all relevant changes that happened " +#~ "after the last release. If the " +#~ "last release was tagged ``v1.2.0``, you" +#~ " can use the following URL to " +#~ "see all commits that got merged " +#~ "into ``main`` since then:" +#~ msgstr "" + +#~ msgid "" +#~ "`GitHub: Compare v1.2.0...main " +#~ "`_" +#~ msgstr "" + +#~ msgid "" +#~ "Thank the authors who contributed since" +#~ " the last release. 
This can be " +#~ "done by running the ``./dev/add-" +#~ "shortlog.sh`` convenience script (it can " +#~ "be ran multiple times and will " +#~ "update the names in the list if" +#~ " new contributors were added in the" +#~ " meantime)." +#~ msgstr "" + +#~ msgid "" +#~ "Update the ``changelog.md`` section header " +#~ "``Unreleased`` to contain the version " +#~ "number and date for the release " +#~ "you are building. Create a pull " +#~ "request with the change." +#~ msgstr "" + +#~ msgid "" +#~ "Tag the release commit with the " +#~ "version number as soon as the PR" +#~ " is merged: ``git tag v0.12.3``, then" +#~ " ``git push --tags``. This will " +#~ "create a draft release on GitHub " +#~ "containing the correct artifacts and the" +#~ " relevant part of the changelog." +#~ msgstr "" + +#~ msgid "" +#~ "Note that, in order to build the" +#~ " documentation locally (with ``poetry run" +#~ " make html``, like described below), " +#~ "`Pandoc _` needs " +#~ "to be installed on the system." +#~ msgstr "" + +#~ msgid "" +#~ "If you're familiar with how contributing" +#~ " on GitHub works, you can directly" +#~ " checkout our `getting started guide " +#~ "for contributors `_ and examples " +#~ "of `good first contributions " +#~ "`_." +#~ msgstr "" + +#~ msgid "" +#~ "This will create a `flower/` (or " +#~ "the name of your fork if you " +#~ "renamed it) folder in the current " +#~ "working directory." +#~ msgstr "" + +#~ msgid "Otherwise you can always find this option in the `Branches` page." +#~ msgstr "" + +#~ msgid "" +#~ "Once you click the `Compare & pull" +#~ " request` button, you should see " +#~ "something similar to this:" +#~ msgstr "" + +#~ msgid "Find the source file in `doc/source`" +#~ msgstr "" + +#~ msgid "" +#~ "Make the change in the `.rst` file" +#~ " (beware, the dashes under the title" +#~ " should be the same length as " +#~ "the title itself)" +#~ msgstr "" + +#~ msgid "Change the file name to `save-progress.rst`" +#~ msgstr "" + +#~ msgid "Add a redirect rule to `doc/source/conf.py`" +#~ msgstr "" + +#~ msgid "" +#~ "This will cause a redirect from " +#~ "`saving-progress.html` to `save-progress.html`," +#~ " old links will continue to work." +#~ msgstr "" + +#~ msgid "" +#~ "For the lateral navigation bar to " +#~ "work properly, it is very important " +#~ "to update the `index.rst` file as " +#~ "well. This is where we define the" +#~ " whole arborescence of the navbar." +#~ msgstr "" + +#~ msgid "Find and modify the file name in `index.rst`" +#~ msgstr "" + +#~ msgid "Add CI job to deploy the staging system when the `main` branch changes" +#~ msgstr "" + +#~ msgid "`Python 3.7 `_ or above" +#~ msgstr "" + +#~ msgid "" +#~ "First, clone the `Flower repository " +#~ "`_ from GitHub::" +#~ msgstr "" + +#~ msgid "" +#~ "Second, create a virtual environment " +#~ "(and activate it). 
If you chose to" +#~ " use :code:`pyenv` (with the :code" +#~ ":`pyenv-virtualenv` plugin) and already " +#~ "have it installed , you can use" +#~ " the following convenience script (by " +#~ "default it will use :code:`Python " +#~ "3.8.17`, but you can change it by" +#~ " providing a specific :code:``)::" +#~ msgstr "" + +#~ msgid "" +#~ "If you don't have :code:`pyenv` " +#~ "installed, you can use the following " +#~ "script that will install pyenv, set " +#~ "it up and create the virtual " +#~ "environment (with :code:`Python 3.8.17` by " +#~ "default)::" +#~ msgstr "" + +#~ msgid "" +#~ "Third, install the Flower package in " +#~ "development mode (think :code:`pip install " +#~ "-e`) along with all necessary " +#~ "dependencies::" +#~ msgstr "" + +#~ msgid "" +#~ "Developers could run the full set " +#~ "of Github Actions workflows under their" +#~ " local environment by using `Act " +#~ "_`. Please refer to" +#~ " the installation instructions under the" +#~ " linked repository and run the next" +#~ " command under Flower main cloned " +#~ "repository folder::" +#~ msgstr "" + +#~ msgid "" +#~ "Please note that these components are" +#~ " still experimental, the correct " +#~ "configuration of DP for a specific " +#~ "task is still an unsolved problem." +#~ msgstr "" + +#~ msgid "" +#~ "The distribution of the update norm " +#~ "has been shown to vary from " +#~ "task-to-task and to evolve as " +#~ "training progresses. Therefore, we use " +#~ "an adaptive approach [andrew]_ that " +#~ "continuously adjusts the clipping threshold" +#~ " to track a prespecified quantile of" +#~ " the update norm distribution." +#~ msgstr "" + +#~ msgid "" +#~ "We make (and attempt to enforce) a" +#~ " number of assumptions that must be" +#~ " satisfied to ensure that the " +#~ "training process actually realises the " +#~ ":math:`(\\epsilon, \\delta)` guarantees the " +#~ "user has in mind when configuring " +#~ "the setup." +#~ msgstr "" + +#~ msgid "" +#~ "The first two are useful for " +#~ "eliminating a multitude of complications " +#~ "associated with calibrating the noise to" +#~ " the clipping threshold while the " +#~ "third one is required to comply " +#~ "with the assumptions of the privacy " +#~ "analysis." +#~ msgstr "" + +#~ msgid "" +#~ "The first version of our solution " +#~ "was to define a decorator whose " +#~ "constructor accepted, among other things, " +#~ "a boolean valued variable indicating " +#~ "whether adaptive clipping was to be " +#~ "enabled or not. We quickly realized " +#~ "that this would clutter its " +#~ ":code:`__init__()` function with variables " +#~ "corresponding to hyperparameters of adaptive" +#~ " clipping that would remain unused " +#~ "when it was disabled. A cleaner " +#~ "implementation could be achieved by " +#~ "splitting the functionality into two " +#~ "decorators, :code:`DPFedAvgFixed` and " +#~ ":code:`DPFedAvgAdaptive`, with the latter sub-" +#~ " classing the former. The constructors " +#~ "for both classes accept a boolean " +#~ "parameter :code:`server_side_noising`, which, as " +#~ "the name suggests, determines where " +#~ "noising is to be performed." +#~ msgstr "" + +#~ msgid "" +#~ ":code:`aggregate_fit()`: We check whether any" +#~ " of the sampled clients dropped out" +#~ " or failed to upload an update " +#~ "before the round timed out. In " +#~ "that case, we need to abort the" +#~ " current round, discarding any successful" +#~ " updates that were received, and move" +#~ " on to the next one. 
On the " +#~ "other hand, if all clients responded " +#~ "successfully, we must force the " +#~ "averaging of the updates to happen " +#~ "in an unweighted manner by intercepting" +#~ " the :code:`parameters` field of " +#~ ":code:`FitRes` for each received update " +#~ "and setting it to 1. Furthermore, " +#~ "if :code:`server_side_noising=true`, each update " +#~ "is perturbed with an amount of " +#~ "noise equal to what it would have" +#~ " been subjected to had client-side" +#~ " noising being enabled. This entails " +#~ "*pre*-processing of the arguments to " +#~ "this method before passing them on " +#~ "to the wrappee's implementation of " +#~ ":code:`aggregate_fit()`." +#~ msgstr "" + +#~ msgid "" +#~ "McMahan, H. Brendan, et al. \"Learning" +#~ " differentially private recurrent language " +#~ "models.\" arXiv preprint arXiv:1710.06963 " +#~ "(2017)." +#~ msgstr "" + +#~ msgid "" +#~ "Andrew, Galen, et al. \"Differentially " +#~ "private learning with adaptive clipping.\" " +#~ "Advances in Neural Information Processing " +#~ "Systems 34 (2021): 17455-17466." +#~ msgstr "" + +#~ msgid "" +#~ "The following command can be used " +#~ "to verfiy if Flower was successfully " +#~ "installed. If everything worked, it " +#~ "should print the version of Flower " +#~ "to the command line::" +#~ msgstr "" + +#~ msgid "flwr (Python API reference)" +#~ msgstr "" + +#~ msgid "start_client" +#~ msgstr "" + +#~ msgid "start_numpy_client" +#~ msgstr "" + +#~ msgid "start_simulation" +#~ msgstr "" + +#~ msgid "server.start_server" +#~ msgstr "" + +#~ msgid "server.strategy" +#~ msgstr "" + +#~ msgid "server.strategy.Strategy" +#~ msgstr "" + +#~ msgid "server.strategy.FedAvg" +#~ msgstr "" + +#~ msgid "server.strategy.FedAvgM" +#~ msgstr "" + +#~ msgid "server.strategy.FedMedian" +#~ msgstr "" + +#~ msgid "server.strategy.QFedAvg" +#~ msgstr "" + +#~ msgid "server.strategy.FaultTolerantFedAvg" +#~ msgstr "" + +#~ msgid "server.strategy.FedOpt" +#~ msgstr "" + +#~ msgid "server.strategy.FedProx" +#~ msgstr "" + +#~ msgid "server.strategy.FedAdagrad" +#~ msgstr "" + +#~ msgid "server.strategy.FedAdam" +#~ msgstr "" + +#~ msgid "server.strategy.FedYogi" +#~ msgstr "" + +#~ msgid "server.strategy.FedTrimmedAvg" +#~ msgstr "" + +#~ msgid "server.strategy.Krum" +#~ msgstr "" + +#~ msgid "server.strategy.FedXgbNnAvg" +#~ msgstr "" + +#~ msgid "server.strategy.DPFedAvgAdaptive" +#~ msgstr "" + +#~ msgid "server.strategy.DPFedAvgFixed" +#~ msgstr "" + +#~ msgid "" +#~ "**Fix the incorrect return types of " +#~ "Strategy** " +#~ "([#2432](https://github.com/adap/flower/pull/2432/files))" +#~ msgstr "" + +#~ msgid "" +#~ "The types of the return values in" +#~ " the docstrings in two methods " +#~ "(`aggregate_fit` and `aggregate_evaluate`) now " +#~ "match the hint types in the code." +#~ msgstr "" + +#~ msgid "" +#~ "Using the `client_fn`, Flower clients " +#~ "can interchangeably run as standalone " +#~ "processes (i.e. via `start_client`) or " +#~ "in simulation (i.e. via `start_simulation`)" +#~ " without requiring changes to how the" +#~ " client class is defined and " +#~ "instantiated. Calling `start_numpy_client` is " +#~ "now deprecated." 
+#~ msgstr "" + +#~ msgid "" +#~ "**Update Flower Examples** " +#~ "([#2384](https://github.com/adap/flower/pull/2384)), " +#~ "([#2425](https://github.com/adap/flower/pull/2425))" +#~ msgstr "" + +#~ msgid "" +#~ "**General updates to baselines** " +#~ "([#2301](https://github.com/adap/flower/pull/2301), " +#~ "[#2305](https://github.com/adap/flower/pull/2305), " +#~ "[#2307](https://github.com/adap/flower/pull/2307), " +#~ "[#2327](https://github.com/adap/flower/pull/2327), " +#~ "[#2435](https://github.com/adap/flower/pull/2435))" +#~ msgstr "" + +#~ msgid "" +#~ "**General updates to the simulation " +#~ "engine** ([#2331](https://github.com/adap/flower/pull/2331), " +#~ "[#2447](https://github.com/adap/flower/pull/2447), " +#~ "[#2448](https://github.com/adap/flower/pull/2448))" +#~ msgstr "" + +#~ msgid "" +#~ "**General improvements** " +#~ "([#2309](https://github.com/adap/flower/pull/2309), " +#~ "[#2310](https://github.com/adap/flower/pull/2310), " +#~ "[2313](https://github.com/adap/flower/pull/2313), " +#~ "[#2316](https://github.com/adap/flower/pull/2316), " +#~ "[2317](https://github.com/adap/flower/pull/2317),[#2349](https://github.com/adap/flower/pull/2349)," +#~ " [#2360](https://github.com/adap/flower/pull/2360), " +#~ "[#2402](https://github.com/adap/flower/pull/2402), " +#~ "[#2446](https://github.com/adap/flower/pull/2446))" +#~ msgstr "" + +#~ msgid "" +#~ "`flower-superlink --driver-api-address " +#~ "\"0.0.0.0:8081\" --fleet-api-address " +#~ "\"0.0.0.0:8086\"`" +#~ msgstr "" + +#~ msgid "" +#~ "That's it for the client. We only" +#~ " have to implement :code:`Client` or " +#~ ":code:`NumPyClient` and call " +#~ ":code:`fl.client.start_client()`. The string " +#~ ":code:`\"0.0.0.0:8080\"` tells the client " +#~ "which server to connect to. In our" +#~ " case we can run the server and" +#~ " the client on the same machine, " +#~ "therefore we use :code:`\"0.0.0.0:8080\"`. If" +#~ " we run a truly federated workload" +#~ " with the server and clients running" +#~ " on different machines, all that " +#~ "needs to change is the " +#~ ":code:`server_address` we pass to the " +#~ "client." +#~ msgstr "" + +#~ msgid "" +#~ "That's it for the client. We only" +#~ " have to implement :code:`Client` or " +#~ ":code:`NumPyClient` and call " +#~ ":code:`fl.client.start_client()`. The string " +#~ ":code:`\"[::]:8080\"` tells the client which" +#~ " server to connect to. In our " +#~ "case we can run the server and " +#~ "the client on the same machine, " +#~ "therefore we use :code:`\"[::]:8080\"`. If " +#~ "we run a truly federated workload " +#~ "with the server and clients running " +#~ "on different machines, all that needs" +#~ " to change is the :code:`server_address`" +#~ " we point the client at." +#~ msgstr "" + +#~ msgid "" +#~ "Let's build a horizontal federated " +#~ "learning system using XGBoost and " +#~ "Flower!" +#~ msgstr "" + +#~ msgid "" +#~ "Please refer to the `full code " +#~ "example `_ to learn " +#~ "more." +#~ msgstr "" + +#~ msgid "" +#~ "In this notebook, we'll build a " +#~ "federated learning system using Flower " +#~ "and PyTorch. In part 1, we use " +#~ "PyTorch for the model training pipeline" +#~ " and data loading. In part 2, " +#~ "we continue to federate the PyTorch-" +#~ "based pipeline using Flower." 
+#~ msgstr "" + +#~ msgid "" +#~ "Next, we install the necessary packages" +#~ " for PyTorch (``torch`` and " +#~ "``torchvision``) and Flower (``flwr``):" +#~ msgstr "" + +#~ msgid "" +#~ "Federated learning can be applied to " +#~ "many different types of tasks across " +#~ "different domains. In this tutorial, we" +#~ " introduce federated learning by training" +#~ " a simple convolutional neural network " +#~ "(CNN) on the popular CIFAR-10 dataset." +#~ " CIFAR-10 can be used to train " +#~ "image classifiers that distinguish between " +#~ "images from ten different classes:" +#~ msgstr "" + +#~ msgid "" +#~ "Each organization will act as a " +#~ "client in the federated learning system." +#~ " So having ten organizations participate" +#~ " in a federation means having ten " +#~ "clients connected to the federated " +#~ "learning server:" +#~ msgstr "" + +#~ msgid "" +#~ "Let's now load the CIFAR-10 training " +#~ "and test set, partition them into " +#~ "ten smaller datasets (each split into" +#~ " training and validation set), and " +#~ "wrap the resulting partitions by " +#~ "creating a PyTorch ``DataLoader`` for " +#~ "each of them:" +#~ msgstr "" + +#~ msgid "|ed6498a023f2477a9ccd57ee4514bda4|" +#~ msgstr "" + +#~ msgid "|5a4f742489ac4f819afefdd4dc9ab272|" +#~ msgstr "" + +#~ msgid "|3331c80cd05045f6a56524d8e3e76d0c|" +#~ msgstr "" + +#~ msgid "|4987b26884ec4b2c8f06c1264bcebe60|" +#~ msgstr "" + +#~ msgid "|ec8ae2d778aa493a986eb2fa29c220e5|" +#~ msgstr "" + +#~ msgid "|b8949d0669fe4f8eadc9a4932f4e9c57|" +#~ msgstr "" + +#~ msgid "|94ff30bdcd09443e8488b5f29932a541|" +#~ msgstr "" + +#~ msgid "|48dccf1d6d0544bba8917d2783a47719|" +#~ msgstr "" + +#~ msgid "|0366618db96b4f329f0d4372d1150fde|" +#~ msgstr "" + +#~ msgid "|ac80eddc76e6478081b1ca35eed029c0|" +#~ msgstr "" + +#~ msgid "|1ac94140c317450e89678db133c7f3c2|" +#~ msgstr "" + +#~ msgid "|f8850c6e96fc4430b55e53bba237a7c0|" +#~ msgstr "" + +#~ msgid "|4a368fdd3fc34adabd20a46752a68582|" +#~ msgstr "" + +#~ msgid "|40f69c17bb444652a7c8dfe577cd120e|" +#~ msgstr "" + +#~ msgid "" +#~ "Please follow the first section on " +#~ "`Run Flower using Docker " +#~ "`_ which covers this" +#~ " step in more detail." +#~ msgstr "" + +#~ msgid "" +#~ "Since `Flower 1.5 `_ we have " +#~ "introduced translations to our doc " +#~ "pages, but, as you might have " +#~ "noticed, the translations are often " +#~ "imperfect. If you speak languages other" +#~ " than English, you might be able " +#~ "to help us in our effort to " +#~ "make Federated Learning accessible to as" +#~ " many people as possible by " +#~ "contributing to those translations! This " +#~ "might also be a great opportunity " +#~ "for those wanting to become open " +#~ "source contributors with little prerequistes." +#~ msgstr "" + +#~ msgid "" +#~ "You input your translation in the " +#~ "textbox at the top and then, once" +#~ " you are happy with it, you " +#~ "either press ``Save and continue`` (to" +#~ " save the translation and go to " +#~ "the next untranslated string), ``Save " +#~ "and stay`` (to save the translation " +#~ "and stay on the same page), " +#~ "``Suggest`` (to add your translation to" +#~ " suggestions for other users to " +#~ "view), or ``Skip`` (to go to the" +#~ " next untranslated string without saving" +#~ " anything)." +#~ msgstr "" + +#~ msgid "" +#~ "The first thing we need to do " +#~ "is to define a message type for" +#~ " the RPC system in :code:`transport.proto`." 
+#~ " Note that we have to do it " +#~ "for both the request and response " +#~ "messages. For more details on the " +#~ "syntax of proto3, please see the " +#~ "`official documentation `_." +#~ msgstr "" + +#~ msgid "" +#~ "Source: `Official VSCode documentation " +#~ "`_" +#~ msgstr "" + +#~ msgid "" +#~ "`Developing inside a Container " +#~ "`_" +#~ msgstr "" + +#~ msgid "" +#~ "`Remote development in Containers " +#~ "`_" +#~ msgstr "" + +#~ msgid "" +#~ "If you are not familiar with " +#~ "Flower Baselines, you should probably " +#~ "check-out our `contributing guide for " +#~ "baselines `_." +#~ msgstr "" + +#~ msgid "" +#~ "You should then check out the open" +#~ " `issues " +#~ "`_" +#~ " for baseline requests. If you find" +#~ " a baseline that you'd like to " +#~ "work on and that has no assignes," +#~ " feel free to assign it to " +#~ "yourself and start working on it!" +#~ msgstr "" + +#~ msgid "" +#~ "If you're familiar with how contributing" +#~ " on GitHub works, you can directly" +#~ " checkout our `getting started guide " +#~ "for contributors `_." +#~ msgstr "" + +#~ msgid "" +#~ "Git is a distributed version control " +#~ "tool. This allows for an entire " +#~ "codebase's history to be stored and " +#~ "every developer's machine. It is a " +#~ "software that will need to be " +#~ "installed on your local machine, you " +#~ "can follow this `guide " +#~ "`_ to set it up." +#~ msgstr "" + +#~ msgid "" +#~ "A fork is a personal copy of " +#~ "a GitHub repository. To create one " +#~ "for Flower, you must navigate to " +#~ "https://github.com/adap/flower (while connected to" +#~ " your GitHub account) and click the" +#~ " ``Fork`` button situated on the top" +#~ " right of the page." +#~ msgstr "" + +#~ msgid "" +#~ "Now we will add an upstream " +#~ "address to our repository. Still in " +#~ "the same directroy, we must run " +#~ "the following command:" +#~ msgstr "" + +#~ msgid "" +#~ "This can be achieved by following " +#~ "this `getting started guide for " +#~ "contributors`_ (note that you won't need" +#~ " to clone the repository). Once you" +#~ " are able to write code and " +#~ "test it, you can finally start " +#~ "making changes!" +#~ msgstr "" + +#~ msgid "" +#~ "For our documentation, we’ve started to" +#~ " use the `Diàtaxis framework " +#~ "`_." +#~ msgstr "" + +#~ msgid "" +#~ "Our “How to” guides should have " +#~ "titles that continue the sencence “How" +#~ " to …”, for example, “How to " +#~ "upgrade to Flower 1.0”." +#~ msgstr "" + +#~ msgid "" +#~ "This issue is about changing the " +#~ "title of a doc from present " +#~ "continious to present simple." +#~ msgstr "" + +#~ msgid "" +#~ "Let's take the example of “Saving " +#~ "Progress” which we changed to “Save " +#~ "Progress”. Does this pass our check?" +#~ msgstr "" + +#~ msgid "Before: ”How to saving progress” ❌" +#~ msgstr "" + +#~ msgid "After: ”How to save progress” ✅" +#~ msgstr "" + +#~ msgid "" +#~ "This is a tiny change, but it’ll" +#~ " allow us to test your end-" +#~ "to-end setup. 
After cloning and " +#~ "setting up the Flower repo, here’s " +#~ "what you should do:" +#~ msgstr "" + +#~ msgid "" +#~ "Build the docs and check the " +#~ "result: ``_" +#~ msgstr "" + +#~ msgid "Here’s how to change the file name:" +#~ msgstr "" + +#~ msgid "" +#~ "Commit the changes (commit messages are" +#~ " always imperative: “Do something”, in " +#~ "this case “Change …”)" +#~ msgstr "" + +#~ msgid "" +#~ "`Good first contributions " +#~ "`_, where you should" +#~ " particularly look into the " +#~ ":code:`baselines` contributions." +#~ msgstr "" + +#~ msgid "" +#~ "If the section is completely empty " +#~ "(without any token) or non-existant, " +#~ "the changelog will just contain the " +#~ "title of the PR for the changelog" +#~ " entry, without any description." +#~ msgstr "" + +#~ msgid "" +#~ "Flower uses :code:`pyproject.toml` to manage" +#~ " dependencies and configure development " +#~ "tools (the ones which support it). " +#~ "Poetry is a build tool which " +#~ "supports `PEP 517 " +#~ "`_." +#~ msgstr "" + +#~ msgid "" +#~ "This tutorial will show you how to" +#~ " use Flower to build a federated " +#~ "version of an existing machine learning" +#~ " workload with `FedBN `_, a federated training strategy" +#~ " designed for non-iid data. We " +#~ "are using PyTorch to train a " +#~ "Convolutional Neural Network(with Batch " +#~ "Normalization layers) on the CIFAR-10 " +#~ "dataset. When applying FedBN, only few" +#~ " changes needed compared to `Example: " +#~ "PyTorch - From Centralized To Federated" +#~ " `_." +#~ msgstr "" + +#~ msgid "" +#~ "All files are revised based on " +#~ "`Example: PyTorch - From Centralized To" +#~ " Federated `_. The " +#~ "only thing to do is modifying the" +#~ " file called :code:`cifar.py`, revised part" +#~ " is shown below:" +#~ msgstr "" + +#~ msgid "" +#~ "So far this should all look fairly" +#~ " familiar if you've used PyTorch " +#~ "before. Let's take the next step " +#~ "and use what we've built to create" +#~ " a federated learning system within " +#~ "FedBN, the sytstem consists of one " +#~ "server and two clients." +#~ msgstr "" + +#~ msgid "" +#~ "If you have read `Example: PyTorch " +#~ "- From Centralized To Federated " +#~ "`_, the following" +#~ " parts are easy to follow, onyl " +#~ ":code:`get_parameters` and :code:`set_parameters` " +#~ "function in :code:`client.py` needed to " +#~ "revise. If not, please read the " +#~ "`Example: PyTorch - From Centralized To" +#~ " Federated `_. first." +#~ msgstr "" + +#~ msgid "Example: Walk-Through PyTorch & MNIST" +#~ msgstr "" + +#~ msgid "" +#~ "In this tutorial we will learn, " +#~ "how to train a Convolutional Neural " +#~ "Network on MNIST using Flower and " +#~ "PyTorch." +#~ msgstr "" + +#~ msgid "" +#~ "Since we want to use PyTorch to" +#~ " solve a computer vision task, let's" +#~ " go ahead an install PyTorch and " +#~ "the **torchvision** library:" +#~ msgstr "" + +#~ msgid "Ready... Set... Train!" +#~ msgstr "" + +#~ msgid "" +#~ "Now that we have all our " +#~ "dependencies installed, let's run a " +#~ "simple distributed training with two " +#~ "clients and one server. Our training " +#~ "procedure and network architecture are " +#~ "based on PyTorch's `Basic MNIST Example" +#~ " `_. " +#~ "This will allow you see how easy" +#~ " it is to wrap your code with" +#~ " Flower and begin training in a " +#~ "federated way. We provide you with " +#~ "two helper scripts, namely *run-" +#~ "server.sh*, and *run-clients.sh*. 
Don't " +#~ "be afraid to look inside, they are" +#~ " simple enough =)." +#~ msgstr "" + +#~ msgid "" +#~ "Go ahead and launch on a terminal" +#~ " the *run-server.sh* script first as" +#~ " follows:" +#~ msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:373 -msgid "" -"The `Flower Federated Learning Tutorial - Part 1 " -"`__ shows how to build a simple federated learning system " -"with PyTorch and Flower." -msgstr "" +#~ msgid "Now that the server is up and running, go ahead and launch the clients." +#~ msgstr "" #~ msgid "" -#~ "Configuring and setting up the " -#~ ":code:`Dockerfile` as well the configuration" -#~ " for the devcontainer can be a " -#~ "bit more involved. The good thing " -#~ "is you want have to do it. " -#~ "Usually it should be enough to " -#~ "install Docker on your system and " -#~ "ensure its available on your command " -#~ "line. Additionally, install the `VSCode " -#~ "Containers Extension `_." +#~ "Et voilà! You should be seeing the" +#~ " training procedure and, after a few" +#~ " iterations, the test accuracy for " +#~ "each client." +#~ msgstr "" + +#~ msgid "Now, let's see what is really happening inside." #~ msgstr "" #~ msgid "" -#~ "``flwr = { path = " -#~ "\"../../dist/flwr-1.0.0-py3-none-any.whl\" }`` " -#~ "(without extras)" +#~ "Inside the server helper script *run-" +#~ "server.sh* you will find the following" +#~ " code that basically runs the " +#~ ":code:`server.py`" #~ msgstr "" #~ msgid "" -#~ "``flwr = { path = " -#~ "\"../../dist/flwr-1.0.0-py3-none-any.whl\", extras =" -#~ " [\"simulation\"] }`` (with extras)" +#~ "We can go a bit deeper and " +#~ "see that :code:`server.py` simply launches " +#~ "a server that will coordinate three " +#~ "rounds of training. Flower Servers are" +#~ " very customizable, but for simple " +#~ "workloads, we can start a server " +#~ "using the :ref:`start_server ` function and leave " +#~ "all the configuration possibilities at " +#~ "their default values, as seen below." #~ msgstr "" -#~ msgid "Upload the whl (e.g., ``flwr-1.7.0-py3-none-any.whl``)" +#~ msgid "" +#~ "Next, let's take a look at the " +#~ "*run-clients.sh* file. You will see " +#~ "that it contains the main loop " +#~ "that starts a set of *clients*." #~ msgstr "" #~ msgid "" -#~ "Change ``!pip install -q 'flwr[simulation]'" -#~ " torch torchvision matplotlib`` to ``!pip" -#~ " install -q 'flwr-1.7.0-py3-none-" -#~ "any.whl[simulation]' torch torchvision matplotlib``" +#~ "**cid**: is the client ID. It is" +#~ " an integer that uniquely identifies " +#~ "client identifier." #~ msgstr "" -#~ msgid "Before the release" +#~ msgid "**sever_address**: String that identifies IP and port of the server." #~ msgstr "" #~ msgid "" -#~ "Update the changelog (``changelog.md``) with" -#~ " all relevant changes that happened " -#~ "after the last release. If the " -#~ "last release was tagged ``v1.2.0``, you" -#~ " can use the following URL to " -#~ "see all commits that got merged " -#~ "into ``main`` since then:" +#~ "**nb_clients**: This defines the number " +#~ "of clients being created. This piece " +#~ "of information is not required by " +#~ "the client, but it helps us " +#~ "partition the original MNIST dataset to" +#~ " make sure that every client is " +#~ "working on unique subsets of both " +#~ "*training* and *test* sets." #~ msgstr "" #~ msgid "" -#~ "`GitHub: Compare v1.2.0...main " -#~ "`_" +#~ "Again, we can go deeper and look" +#~ " inside :code:`flwr_example/quickstart-" +#~ "pytorch/client.py`. 
After going through the" +#~ " argument parsing code at the " +#~ "beginning of our :code:`main` function, " +#~ "you will find a call to " +#~ ":code:`mnist.load_data`. This function is " +#~ "responsible for partitioning the original " +#~ "MNIST datasets (*training* and *test*) " +#~ "and returning a :code:`torch.utils.data.DataLoader`" +#~ " s for each of them. We then" +#~ " instantiate a :code:`PytorchMNISTClient` object" +#~ " with our client ID, our DataLoaders," +#~ " the number of epochs in each " +#~ "round, and which device we want to" +#~ " use for training (CPU or GPU)." #~ msgstr "" #~ msgid "" -#~ "Thank the authors who contributed since" -#~ " the last release. This can be " -#~ "done by running the ``./dev/add-" -#~ "shortlog.sh`` convenience script (it can " -#~ "be ran multiple times and will " -#~ "update the names in the list if" -#~ " new contributors were added in the" -#~ " meantime)." +#~ "The :code:`PytorchMNISTClient` object when " +#~ "finally passed to :code:`fl.client.start_client` " +#~ "along with the server's address as " +#~ "the training process begins." #~ msgstr "" -#~ msgid "" -#~ "Update the ``changelog.md`` section header " -#~ "``Unreleased`` to contain the version " -#~ "number and date for the release " -#~ "you are building. Create a pull " -#~ "request with the change." +#~ msgid "A Closer Look" #~ msgstr "" #~ msgid "" -#~ "Tag the release commit with the " -#~ "version number as soon as the PR" -#~ " is merged: ``git tag v0.12.3``, then" -#~ " ``git push --tags``. This will " -#~ "create a draft release on GitHub " -#~ "containing the correct artifacts and the" -#~ " relevant part of the changelog." +#~ "Now, let's look closely into the " +#~ ":code:`PytorchMNISTClient` inside :code:`flwr_example" +#~ ".quickstart-pytorch.mnist` and see what it" +#~ " is doing:" #~ msgstr "" #~ msgid "" -#~ "Note that, in order to build the" -#~ " documentation locally (with ``poetry run" -#~ " make html``, like described below), " -#~ "`Pandoc _` needs " -#~ "to be installed on the system." +#~ "The first thing to notice is that" +#~ " :code:`PytorchMNISTClient` instantiates a CNN" +#~ " model inside its constructor" #~ msgstr "" #~ msgid "" -#~ "If you're familiar with how contributing" -#~ " on GitHub works, you can directly" -#~ " checkout our `getting started guide " -#~ "for contributors `_ and examples " -#~ "of `good first contributions " -#~ "`_." +#~ "The code for the CNN is available" +#~ " under :code:`quickstart-pytorch.mnist` and " +#~ "it is reproduced below. It is the" +#~ " same network found in `Basic MNIST" +#~ " Example " +#~ "`_." #~ msgstr "" #~ msgid "" -#~ "This will create a `flower/` (or " -#~ "the name of your fork if you " -#~ "renamed it) folder in the current " -#~ "working directory." +#~ "The second thing to notice is that" +#~ " :code:`PytorchMNISTClient` class inherits from" +#~ " the :code:`fl.client.Client`, and hence it" +#~ " must implement the following methods:" #~ msgstr "" -#~ msgid "Otherwise you can always find this option in the `Branches` page." +#~ msgid "" +#~ "When comparing the abstract class to " +#~ "its derived class :code:`PytorchMNISTClient` " +#~ "you will notice that :code:`fit` calls" +#~ " a :code:`train` function and that " +#~ ":code:`evaluate` calls a :code:`test`: " +#~ "function." 
#~ msgstr "" #~ msgid "" -#~ "Once you click the `Compare & pull" -#~ " request` button, you should see " -#~ "something similar to this:" +#~ "These functions can both be found " +#~ "inside the same :code:`quickstart-" +#~ "pytorch.mnist` module:" #~ msgstr "" -#~ msgid "Find the source file in `doc/source`" +#~ msgid "" +#~ "Observe that these functions encapsulate " +#~ "regular training and test loops and " +#~ "provide :code:`fit` and :code:`evaluate` with" +#~ " final statistics for each round. You" +#~ " could substitute them with your " +#~ "custom train and test loops and " +#~ "change the network architecture, and the" +#~ " entire example would still work " +#~ "flawlessly. As a matter of fact, " +#~ "why not try and modify the code" +#~ " to an example of your liking?" #~ msgstr "" -#~ msgid "" -#~ "Make the change in the `.rst` file" -#~ " (beware, the dashes under the title" -#~ " should be the same length as " -#~ "the title itself)" +#~ msgid "Give It a Try" #~ msgstr "" -#~ msgid "Change the file name to `save-progress.rst`" +#~ msgid "" +#~ "Looking through the quickstart code " +#~ "description above will have given a " +#~ "good understanding of how *clients* and" +#~ " *servers* work in Flower, how to " +#~ "run a simple experiment, and the " +#~ "internals of a client wrapper. Here " +#~ "are a few things you could try " +#~ "on your own and get more " +#~ "experience with Flower:" #~ msgstr "" -#~ msgid "Add a redirect rule to `doc/source/conf.py`" +#~ msgid "" +#~ "Try and change :code:`PytorchMNISTClient` so" +#~ " it can accept different architectures." #~ msgstr "" #~ msgid "" -#~ "This will cause a redirect from " -#~ "`saving-progress.html` to `save-progress.html`," -#~ " old links will continue to work." +#~ "Modify the :code:`train` function so " +#~ "that it accepts different optimizers" #~ msgstr "" #~ msgid "" -#~ "For the lateral navigation bar to " -#~ "work properly, it is very important " -#~ "to update the `index.rst` file as " -#~ "well. This is where we define the" -#~ " whole arborescence of the navbar." +#~ "Modify the :code:`test` function so that" +#~ " it proves not only the top-1 " +#~ "(regular accuracy) but also the top-5" +#~ " accuracy?" #~ msgstr "" -#~ msgid "Find and modify the file name in `index.rst`" +#~ msgid "" +#~ "Go larger! Try to adapt the code" +#~ " to larger images and datasets. Why" +#~ " not try training on ImageNet with" +#~ " a ResNet-50?" #~ msgstr "" -#~ msgid "Add CI job to deploy the staging system when the `main` branch changes" +#~ msgid "You are ready now. Enjoy learning in a federated way!" #~ msgstr "" -#~ msgid "`Python 3.7 `_ or above" +#~ msgid "Differential privacy" #~ msgstr "" #~ msgid "" -#~ "First, clone the `Flower repository " -#~ "`_ from GitHub::" +#~ "Flower provides differential privacy (DP) " +#~ "wrapper classes for the easy integration" +#~ " of the central DP guarantees " +#~ "provided by DP-FedAvg into training " +#~ "pipelines defined in any of the " +#~ "various ML frameworks that Flower is " +#~ "compatible with." #~ msgstr "" #~ msgid "" -#~ "Second, create a virtual environment " -#~ "(and activate it). 
If you chose to" -#~ " use :code:`pyenv` (with the :code" -#~ ":`pyenv-virtualenv` plugin) and already " -#~ "have it installed , you can use" -#~ " the following convenience script (by " -#~ "default it will use :code:`Python " -#~ "3.8.17`, but you can change it by" -#~ " providing a specific :code:``)::" +#~ "Please note that these components are" +#~ " still experimental; the correct " +#~ "configuration of DP for a specific " +#~ "task is still an unsolved problem." #~ msgstr "" #~ msgid "" -#~ "If you don't have :code:`pyenv` " -#~ "installed, you can use the following " -#~ "script that will install pyenv, set " -#~ "it up and create the virtual " -#~ "environment (with :code:`Python 3.8.17` by " -#~ "default)::" +#~ "The name DP-FedAvg is misleading " +#~ "since it can be applied on top " +#~ "of any FL algorithm that conforms " +#~ "to the general structure prescribed by" +#~ " the FedOpt family of algorithms." +#~ msgstr "" + +#~ msgid "DP-FedAvg" #~ msgstr "" #~ msgid "" -#~ "Third, install the Flower package in " -#~ "development mode (think :code:`pip install " -#~ "-e`) along with all necessary " -#~ "dependencies::" +#~ "DP-FedAvg, originally proposed by " +#~ "McMahan et al. [mcmahan]_ and extended" +#~ " by Andrew et al. [andrew]_, is " +#~ "essentially FedAvg with the following " +#~ "modifications." #~ msgstr "" #~ msgid "" -#~ "Developers could run the full set " -#~ "of Github Actions workflows under their" -#~ " local environment by using `Act " -#~ "_`. Please refer to" -#~ " the installation instructions under the" -#~ " linked repository and run the next" -#~ " command under Flower main cloned " -#~ "repository folder::" +#~ "**Clipping** : The influence of each " +#~ "client's update is bounded by clipping" +#~ " it. This is achieved by enforcing" +#~ " a cap on the L2 norm of " +#~ "the update, scaling it down if " +#~ "needed." #~ msgstr "" #~ msgid "" -#~ "Please note that these components are" -#~ " still experimental, the correct " -#~ "configuration of DP for a specific " -#~ "task is still an unsolved problem." +#~ "**Noising** : Gaussian noise, calibrated " +#~ "to the clipping threshold, is added " +#~ "to the average computed at the " +#~ "server." #~ msgstr "" #~ msgid "" #~ "The distribution of the update norm " #~ "has been shown to vary from " #~ "task-to-task and to evolve as " -#~ "training progresses. Therefore, we use " -#~ "an adaptive approach [andrew]_ that " -#~ "continuously adjusts the clipping threshold" -#~ " to track a prespecified quantile of" -#~ " the update norm distribution." +#~ "training progresses. This variability is " +#~ "crucial in understanding its impact on" +#~ " differential privacy guarantees, emphasizing " +#~ "the need for an adaptive approach " +#~ "[andrew]_ that continuously adjusts the " +#~ "clipping threshold to track a " +#~ "prespecified quantile of the update norm" +#~ " distribution." +#~ msgstr "" + +#~ msgid "Simplifying Assumptions" #~ msgstr "" #~ msgid "" #~ "We make (and attempt to enforce) a" #~ " number of assumptions that must be" #~ " satisfied to ensure that the " -#~ "training process actually realises the " +#~ "training process actually realizes the " #~ ":math:`(\\epsilon, \\delta)` guarantees the " #~ "user has in mind when configuring " #~ "the setup." #~ msgstr "" +#~ msgid "" +#~ "**Fixed-size subsampling** :Fixed-size " +#~ "subsamples of the clients must be " +#~ "taken at each round, as opposed to" +#~ " variable-sized Poisson subsamples." 
+#~ msgstr "" + +#~ msgid "" +#~ "**Unweighted averaging** : The contributions" +#~ " from all the clients must weighted" +#~ " equally in the aggregate to " +#~ "eliminate the requirement for the server" +#~ " to know in advance the sum of" +#~ " the weights of all clients available" +#~ " for selection." +#~ msgstr "" + +#~ msgid "" +#~ "**No client failures** : The set " +#~ "of available clients must stay constant" +#~ " across all rounds of training. In" +#~ " other words, clients cannot drop out" +#~ " or fail." +#~ msgstr "" + #~ msgid "" #~ "The first two are useful for " #~ "eliminating a multitude of complications " #~ "associated with calibrating the noise to" -#~ " the clipping threshold while the " +#~ " the clipping threshold, while the " #~ "third one is required to comply " #~ "with the assumptions of the privacy " #~ "analysis." #~ msgstr "" +#~ msgid "" +#~ "These restrictions are in line with " +#~ "constraints imposed by Andrew et al. " +#~ "[andrew]_." +#~ msgstr "" + +#~ msgid "Customizable Responsibility for Noise injection" +#~ msgstr "" + +#~ msgid "" +#~ "In contrast to other implementations " +#~ "where the addition of noise is " +#~ "performed at the server, you can " +#~ "configure the site of noise injection" +#~ " to better match your threat model." +#~ " We provide users with the " +#~ "flexibility to set up the training " +#~ "such that each client independently adds" +#~ " a small amount of noise to the" +#~ " clipped update, with the result that" +#~ " simply aggregating the noisy updates " +#~ "is equivalent to the explicit addition" +#~ " of noise to the non-noisy " +#~ "aggregate at the server." +#~ msgstr "" + +#~ msgid "" +#~ "To be precise, if we let :math:`m`" +#~ " be the number of clients sampled " +#~ "each round and :math:`\\sigma_\\Delta` be " +#~ "the scale of the total Gaussian " +#~ "noise that needs to be added to" +#~ " the sum of the model updates, " +#~ "we can use simple maths to show" +#~ " that this is equivalent to each " +#~ "client adding noise with scale " +#~ ":math:`\\sigma_\\Delta/\\sqrt{m}`." +#~ msgstr "" + +#~ msgid "Wrapper-based approach" +#~ msgstr "" + +#~ msgid "" +#~ "Introducing DP to an existing workload" +#~ " can be thought of as adding an" +#~ " extra layer of security around it." +#~ " This inspired us to provide the " +#~ "additional server and client-side logic" +#~ " needed to make the training process" +#~ " differentially private as wrappers for " +#~ "instances of the :code:`Strategy` and " +#~ ":code:`NumPyClient` abstract classes respectively." +#~ " This wrapper-based approach has the" +#~ " advantage of being easily composable " +#~ "with other wrappers that someone might" +#~ " contribute to the Flower library in" +#~ " the future, e.g., for secure " +#~ "aggregation. Using Inheritance instead can " +#~ "be tedious because that would require" +#~ " the creation of new sub- classes " +#~ "every time a new class implementing " +#~ ":code:`Strategy` or :code:`NumPyClient` is " +#~ "defined." +#~ msgstr "" + +#~ msgid "Server-side logic" +#~ msgstr "" + #~ msgid "" #~ "The first version of our solution " #~ "was to define a decorator whose " #~ "constructor accepted, among other things, " -#~ "a boolean valued variable indicating " +#~ "a boolean-valued variable indicating " #~ "whether adaptive clipping was to be " #~ "enabled or not. We quickly realized " #~ "that this would clutter its " @@ -19753,6 +22117,34 @@ msgstr "" #~ "noising is to be performed." 
#~ msgstr "" +#~ msgid "" +#~ "The server-side capabilities required " +#~ "for the original version of DP-" +#~ "FedAvg, i.e., the one which performed" +#~ " fixed clipping, can be completely " +#~ "captured with the help of wrapper " +#~ "logic for just the following two " +#~ "methods of the :code:`Strategy` abstract " +#~ "class." +#~ msgstr "" + +#~ msgid "" +#~ ":code:`configure_fit()` : The config " +#~ "dictionary being sent by the wrapped " +#~ ":code:`Strategy` to each client needs to" +#~ " be augmented with an additional " +#~ "value equal to the clipping threshold" +#~ " (keyed under :code:`dpfedavg_clip_norm`) and," +#~ " if :code:`server_side_noising=true`, another one" +#~ " equal to the scale of the " +#~ "Gaussian noise that needs to be " +#~ "added at the client (keyed under " +#~ ":code:`dpfedavg_noise_stddev`). This entails " +#~ "*post*-processing of the results returned " +#~ "by the wrappee's implementation of " +#~ ":code:`configure_fit()`." +#~ msgstr "" + #~ msgid "" #~ ":code:`aggregate_fit()`: We check whether any" #~ " of the sampled clients dropped out" @@ -19772,1682 +22164,1747 @@ msgstr "" #~ "if :code:`server_side_noising=true`, each update " #~ "is perturbed with an amount of " #~ "noise equal to what it would have" -#~ " been subjected to had client-side" -#~ " noising being enabled. This entails " -#~ "*pre*-processing of the arguments to " -#~ "this method before passing them on " -#~ "to the wrappee's implementation of " -#~ ":code:`aggregate_fit()`." -#~ msgstr "" - -#~ msgid "" -#~ "McMahan, H. Brendan, et al. \"Learning" -#~ " differentially private recurrent language " -#~ "models.\" arXiv preprint arXiv:1710.06963 " -#~ "(2017)." -#~ msgstr "" - -#~ msgid "" -#~ "Andrew, Galen, et al. \"Differentially " -#~ "private learning with adaptive clipping.\" " -#~ "Advances in Neural Information Processing " -#~ "Systems 34 (2021): 17455-17466." -#~ msgstr "" - -#~ msgid "" -#~ "The following command can be used " -#~ "to verfiy if Flower was successfully " -#~ "installed. If everything worked, it " -#~ "should print the version of Flower " -#~ "to the command line::" -#~ msgstr "" - -#~ msgid "flwr (Python API reference)" -#~ msgstr "" - -#~ msgid "start_client" -#~ msgstr "" - -#~ msgid "start_numpy_client" -#~ msgstr "" - -#~ msgid "start_simulation" -#~ msgstr "" - -#~ msgid "server.start_server" -#~ msgstr "" - -#~ msgid "server.strategy" -#~ msgstr "" - -#~ msgid "server.strategy.Strategy" -#~ msgstr "" - -#~ msgid "server.strategy.FedAvg" -#~ msgstr "" - -#~ msgid "server.strategy.FedAvgM" -#~ msgstr "" - -#~ msgid "server.strategy.FedMedian" -#~ msgstr "" - -#~ msgid "server.strategy.QFedAvg" -#~ msgstr "" - -#~ msgid "server.strategy.FaultTolerantFedAvg" +#~ " been subjected to had client-side" +#~ " noising being enabled. This entails " +#~ "*pre*-processing of the arguments to " +#~ "this method before passing them on " +#~ "to the wrappee's implementation of " +#~ ":code:`aggregate_fit()`." #~ msgstr "" -#~ msgid "server.strategy.FedOpt" +#~ msgid "" +#~ "We can't directly change the aggregation" +#~ " function of the wrapped strategy to" +#~ " force it to add noise to the" +#~ " aggregate, hence we simulate client-" +#~ "side noising to implement server-side" +#~ " noising." 
#~ msgstr "" -#~ msgid "server.strategy.FedProx" +#~ msgid "" +#~ "These changes have been put together " +#~ "into a class called :code:`DPFedAvgFixed`, " +#~ "whose constructor accepts the strategy " +#~ "being decorated, the clipping threshold " +#~ "and the number of clients sampled " +#~ "every round as compulsory arguments. The" +#~ " user is expected to specify the " +#~ "clipping threshold since the order of" +#~ " magnitude of the update norms is " +#~ "highly dependent on the model being " +#~ "trained and providing a default value" +#~ " would be misleading. The number of" +#~ " clients sampled at every round is" +#~ " required to calculate the amount of" +#~ " noise that must be added to " +#~ "each individual update, either by the" +#~ " server or the clients." #~ msgstr "" -#~ msgid "server.strategy.FedAdagrad" +#~ msgid "" +#~ "The additional functionality required to " +#~ "facilitate adaptive clipping has been " +#~ "provided in :code:`DPFedAvgAdaptive`, a " +#~ "subclass of :code:`DPFedAvgFixed`. It " +#~ "overrides the above-mentioned methods to" +#~ " do the following." #~ msgstr "" -#~ msgid "server.strategy.FedAdam" +#~ msgid "" +#~ ":code:`configure_fit()` : It intercepts the" +#~ " config dict returned by " +#~ ":code:`super.configure_fit()` to add the " +#~ "key-value pair " +#~ ":code:`dpfedavg_adaptive_clip_enabled:True` to it, " +#~ "which the client interprets as an " +#~ "instruction to include an indicator bit" +#~ " (1 if update norm <= clipping " +#~ "threshold, 0 otherwise) in the results" +#~ " returned by it." #~ msgstr "" -#~ msgid "server.strategy.FedYogi" +#~ msgid "" +#~ ":code:`aggregate_fit()` : It follows a " +#~ "call to :code:`super.aggregate_fit()` with one" +#~ " to :code:`__update_clip_norm__()`, a procedure" +#~ " which adjusts the clipping threshold " +#~ "on the basis of the indicator bits" +#~ " received from the sampled clients." #~ msgstr "" -#~ msgid "server.strategy.FedTrimmedAvg" +#~ msgid "Client-side logic" #~ msgstr "" -#~ msgid "server.strategy.Krum" +#~ msgid "" +#~ "The client-side capabilities required " +#~ "can be completely captured through " +#~ "wrapper logic for just the :code:`fit()`" +#~ " method of the :code:`NumPyClient` abstract" +#~ " class. To be precise, we need " +#~ "to *post-process* the update computed" +#~ " by the wrapped client to clip " +#~ "it, if necessary, to the threshold " +#~ "value supplied by the server as " +#~ "part of the config dictionary. In " +#~ "addition to this, it may need to" +#~ " perform some extra work if either" +#~ " (or both) of the following keys " +#~ "are also present in the dict." #~ msgstr "" -#~ msgid "server.strategy.FedXgbNnAvg" +#~ msgid "" +#~ ":code:`dpfedavg_noise_stddev` : Generate and " +#~ "add the specified amount of noise " +#~ "to the clipped update." #~ msgstr "" -#~ msgid "server.strategy.DPFedAvgAdaptive" +#~ msgid "" +#~ ":code:`dpfedavg_adaptive_clip_enabled` : Augment the" +#~ " metrics dict in the :code:`FitRes` " +#~ "object being returned to the server " +#~ "with an indicator bit, calculated as " +#~ "described earlier." #~ msgstr "" -#~ msgid "server.strategy.DPFedAvgFixed" +#~ msgid "Performing the :math:`(\\epsilon, \\delta)` analysis" #~ msgstr "" #~ msgid "" -#~ "**Fix the incorrect return types of " -#~ "Strategy** " -#~ "([#2432](https://github.com/adap/flower/pull/2432/files))" +#~ "Assume you have trained for :math:`n`" +#~ " rounds with sampling fraction :math:`q`" +#~ " and noise multiplier :math:`z`. 
In " +#~ "order to calculate the :math:`\\epsilon` " +#~ "value this would result in for a" +#~ " particular :math:`\\delta`, the following " +#~ "script may be used." #~ msgstr "" #~ msgid "" -#~ "The types of the return values in" -#~ " the docstrings in two methods " -#~ "(`aggregate_fit` and `aggregate_evaluate`) now " -#~ "match the hint types in the code." +#~ "McMahan et al. \"Learning Differentially " +#~ "Private Recurrent Language Models.\" " +#~ "International Conference on Learning " +#~ "Representations (ICLR), 2017." #~ msgstr "" #~ msgid "" -#~ "Using the `client_fn`, Flower clients " -#~ "can interchangeably run as standalone " -#~ "processes (i.e. via `start_client`) or " -#~ "in simulation (i.e. via `start_simulation`)" -#~ " without requiring changes to how the" -#~ " client class is defined and " -#~ "instantiated. Calling `start_numpy_client` is " -#~ "now deprecated." +#~ "Andrew, Galen, et al. \"Differentially " +#~ "Private Learning with Adaptive Clipping.\" " +#~ "Advances in Neural Information Processing " +#~ "Systems (NeurIPS), 2021." #~ msgstr "" #~ msgid "" -#~ "**Update Flower Examples** " -#~ "([#2384](https://github.com/adap/flower/pull/2384)), " -#~ "([#2425](https://github.com/adap/flower/pull/2425))" +#~ "This can be achieved by customizing " +#~ "an existing strategy or by `implementing" +#~ " a custom strategy from scratch " +#~ "`_. Here's a nonsensical " +#~ "example that customizes :code:`FedAvg` by " +#~ "adding a custom ``\"hello\": \"world\"`` " +#~ "configuration key/value pair to the " +#~ "config dict of a *single client* " +#~ "(only the first client in the " +#~ "list, the other clients in this " +#~ "round to not receive this \"special\"" +#~ " config value):" #~ msgstr "" #~ msgid "" -#~ "**General updates to baselines** " -#~ "([#2301](https://github.com/adap/flower/pull/2301), " -#~ "[#2305](https://github.com/adap/flower/pull/2305), " -#~ "[#2307](https://github.com/adap/flower/pull/2307), " -#~ "[#2327](https://github.com/adap/flower/pull/2327), " -#~ "[#2435](https://github.com/adap/flower/pull/2435))" +#~ "More sophisticated implementations can use " +#~ ":code:`configure_fit` to implement custom " +#~ "client selection logic. A client will" +#~ " only participate in a round if " +#~ "the corresponding :code:`ClientProxy` is " +#~ "included in the the list returned " +#~ "from :code:`configure_fit`." #~ msgstr "" #~ msgid "" -#~ "**General updates to the simulation " -#~ "engine** ([#2331](https://github.com/adap/flower/pull/2331), " -#~ "[#2447](https://github.com/adap/flower/pull/2447), " -#~ "[#2448](https://github.com/adap/flower/pull/2448))" +#~ "More sophisticated implementations can use " +#~ ":code:`configure_evaluate` to implement custom " +#~ "client selection logic. A client will" +#~ " only participate in a round if " +#~ "the corresponding :code:`ClientProxy` is " +#~ "included in the the list returned " +#~ "from :code:`configure_evaluate`." 
#~ msgstr "" #~ msgid "" -#~ "**General improvements** " -#~ "([#2309](https://github.com/adap/flower/pull/2309), " -#~ "[#2310](https://github.com/adap/flower/pull/2310), " -#~ "[2313](https://github.com/adap/flower/pull/2313), " -#~ "[#2316](https://github.com/adap/flower/pull/2316), " -#~ "[2317](https://github.com/adap/flower/pull/2317),[#2349](https://github.com/adap/flower/pull/2349)," -#~ " [#2360](https://github.com/adap/flower/pull/2360), " -#~ "[#2402](https://github.com/adap/flower/pull/2402), " -#~ "[#2446](https://github.com/adap/flower/pull/2446))" +#~ "`How to run Flower using Docker " +#~ "`_" #~ msgstr "" #~ msgid "" -#~ "`flower-superlink --driver-api-address " -#~ "\"0.0.0.0:8081\" --fleet-api-address " -#~ "\"0.0.0.0:8086\"`" +#~ "Ray Dashboard: ``_" #~ msgstr "" #~ msgid "" -#~ "That's it for the client. We only" -#~ " have to implement :code:`Client` or " -#~ ":code:`NumPyClient` and call " -#~ ":code:`fl.client.start_client()`. The string " -#~ ":code:`\"0.0.0.0:8080\"` tells the client " -#~ "which server to connect to. In our" -#~ " case we can run the server and" -#~ " the client on the same machine, " -#~ "therefore we use :code:`\"0.0.0.0:8080\"`. If" -#~ " we run a truly federated workload" -#~ " with the server and clients running" -#~ " on different machines, all that " -#~ "needs to change is the " -#~ ":code:`server_address` we pass to the " -#~ "client." +#~ "Ray Metrics: ``_" #~ msgstr "" -#~ msgid "" -#~ "That's it for the client. We only" -#~ " have to implement :code:`Client` or " -#~ ":code:`NumPyClient` and call " -#~ ":code:`fl.client.start_client()`. The string " -#~ ":code:`\"[::]:8080\"` tells the client which" -#~ " server to connect to. In our " -#~ "case we can run the server and " -#~ "the client on the same machine, " -#~ "therefore we use :code:`\"[::]:8080\"`. If " -#~ "we run a truly federated workload " -#~ "with the server and clients running " -#~ "on different machines, all that needs" -#~ " to change is the :code:`server_address`" -#~ " we point the client at." +#~ msgid "Enjoy building more robust and flexible ``ClientApp``s with mods!" #~ msgstr "" #~ msgid "" -#~ "Let's build a horizontal federated " -#~ "learning system using XGBoost and " -#~ "Flower!" +#~ ":py:obj:`ClientApp `\\ " +#~ "\\(client\\_fn\\[\\, mods\\]\\)" #~ msgstr "" -#~ msgid "" -#~ "Please refer to the `full code " -#~ "example `_ to learn " -#~ "more." +#~ msgid ":py:obj:`flwr.server.driver `\\" #~ msgstr "" -#~ msgid "" -#~ "In this notebook, we'll build a " -#~ "federated learning system using Flower " -#~ "and PyTorch. In part 1, we use " -#~ "PyTorch for the model training pipeline" -#~ " and data loading. In part 2, " -#~ "we continue to federate the PyTorch-" -#~ "based pipeline using Flower." +#~ msgid "Flower driver SDK." #~ msgstr "" -#~ msgid "" -#~ "Next, we install the necessary packages" -#~ " for PyTorch (``torch`` and " -#~ "``torchvision``) and Flower (``flwr``):" +#~ msgid "driver" #~ msgstr "" #~ msgid "" -#~ "Federated learning can be applied to " -#~ "many different types of tasks across " -#~ "different domains. In this tutorial, we" -#~ " introduce federated learning by training" -#~ " a simple convolutional neural network " -#~ "(CNN) on the popular CIFAR-10 dataset." 
-#~ " CIFAR-10 can be used to train " -#~ "image classifiers that distinguish between " -#~ "images from ten different classes:" +#~ ":py:obj:`start_driver `\\ " +#~ "\\(\\*\\[\\, server\\_address\\, server\\, ...\\]\\)" #~ msgstr "" #~ msgid "" -#~ "Each organization will act as a " -#~ "client in the federated learning system." -#~ " So having ten organizations participate" -#~ " in a federation means having ten " -#~ "clients connected to the federated " -#~ "learning server:" +#~ ":py:obj:`Driver `\\ " +#~ "\\(\\[driver\\_service\\_address\\, ...\\]\\)" #~ msgstr "" #~ msgid "" -#~ "Let's now load the CIFAR-10 training " -#~ "and test set, partition them into " -#~ "ten smaller datasets (each split into" -#~ " training and validation set), and " -#~ "wrap the resulting partitions by " -#~ "creating a PyTorch ``DataLoader`` for " -#~ "each of them:" +#~ ":py:obj:`GrpcDriver `\\ " +#~ "\\(\\[driver\\_service\\_address\\, ...\\]\\)" #~ msgstr "" -#~ msgid "|ed6498a023f2477a9ccd57ee4514bda4|" +#~ msgid "`GrpcDriver` provides access to the gRPC Driver API/service." #~ msgstr "" -#~ msgid "|5a4f742489ac4f819afefdd4dc9ab272|" +#~ msgid ":py:obj:`get_nodes `\\ \\(\\)" #~ msgstr "" -#~ msgid "|3331c80cd05045f6a56524d8e3e76d0c|" +#~ msgid "" +#~ ":py:obj:`pull_task_res " +#~ "`\\ \\(task\\_ids\\)" #~ msgstr "" -#~ msgid "|4987b26884ec4b2c8f06c1264bcebe60|" +#~ msgid "Get task results." #~ msgstr "" -#~ msgid "|ec8ae2d778aa493a986eb2fa29c220e5|" +#~ msgid "" +#~ ":py:obj:`push_task_ins " +#~ "`\\ " +#~ "\\(task\\_ins\\_list\\)" #~ msgstr "" -#~ msgid "|b8949d0669fe4f8eadc9a4932f4e9c57|" +#~ msgid "Schedule tasks." #~ msgstr "" -#~ msgid "|94ff30bdcd09443e8488b5f29932a541|" +#~ msgid "GrpcDriver" #~ msgstr "" -#~ msgid "|48dccf1d6d0544bba8917d2783a47719|" +#~ msgid ":py:obj:`connect `\\ \\(\\)" #~ msgstr "" -#~ msgid "|0366618db96b4f329f0d4372d1150fde|" +#~ msgid "Connect to the Driver API." #~ msgstr "" -#~ msgid "|ac80eddc76e6478081b1ca35eed029c0|" +#~ msgid "" +#~ ":py:obj:`create_run " +#~ "`\\ \\(req\\)" #~ msgstr "" -#~ msgid "|1ac94140c317450e89678db133c7f3c2|" +#~ msgid "Request for run ID." #~ msgstr "" -#~ msgid "|f8850c6e96fc4430b55e53bba237a7c0|" +#~ msgid "" +#~ ":py:obj:`disconnect " +#~ "`\\ \\(\\)" #~ msgstr "" -#~ msgid "|4a368fdd3fc34adabd20a46752a68582|" +#~ msgid "Disconnect from the Driver API." #~ msgstr "" -#~ msgid "|40f69c17bb444652a7c8dfe577cd120e|" +#~ msgid "" +#~ ":py:obj:`get_nodes `\\" +#~ " \\(req\\)" +#~ msgstr "" + +#~ msgid "Get client IDs." #~ msgstr "" #~ msgid "" -#~ "Please follow the first section on " -#~ "`Run Flower using Docker " -#~ "`_ which covers this" -#~ " step in more detail." +#~ ":py:obj:`pull_task_res " +#~ "`\\ \\(req\\)" #~ msgstr "" #~ msgid "" -#~ "Since `Flower 1.5 `_ we have " -#~ "introduced translations to our doc " -#~ "pages, but, as you might have " -#~ "noticed, the translations are often " -#~ "imperfect. If you speak languages other" -#~ " than English, you might be able " -#~ "to help us in our effort to " -#~ "make Federated Learning accessible to as" -#~ " many people as possible by " -#~ "contributing to those translations! This " -#~ "might also be a great opportunity " -#~ "for those wanting to become open " -#~ "source contributors with little prerequistes." 
+#~ ":py:obj:`push_task_ins " +#~ "`\\ \\(req\\)" #~ msgstr "" #~ msgid "" -#~ "You input your translation in the " -#~ "textbox at the top and then, once" -#~ " you are happy with it, you " -#~ "either press ``Save and continue`` (to" -#~ " save the translation and go to " -#~ "the next untranslated string), ``Save " -#~ "and stay`` (to save the translation " -#~ "and stay on the same page), " -#~ "``Suggest`` (to add your translation to" -#~ " suggestions for other users to " -#~ "view), or ``Skip`` (to go to the" -#~ " next untranslated string without saving" -#~ " anything)." +#~ "Optionally specify the type of actor " +#~ "to use. The actor object, which " +#~ "persists throughout the simulation, will " +#~ "be the process in charge of " +#~ "running the clients' jobs (i.e. their" +#~ " `fit()` method)." #~ msgstr "" #~ msgid "" -#~ "The first thing we need to do " -#~ "is to define a message type for" -#~ " the RPC system in :code:`transport.proto`." -#~ " Note that we have to do it " -#~ "for both the request and response " -#~ "messages. For more details on the " -#~ "syntax of proto3, please see the " -#~ "`official documentation `_." +#~ "Much effort went into a completely " +#~ "restructured Flower docs experience. The " +#~ "documentation on [flower.ai/docs](flower.ai/docs) is" +#~ " now divided into Flower Framework, " +#~ "Flower Baselines, Flower Android SDK, " +#~ "Flower iOS SDK, and code example " +#~ "projects." #~ msgstr "" #~ msgid "" -#~ "Source: `Official VSCode documentation " -#~ "`_" +#~ "The first preview release of Flower " +#~ "Baselines has arrived! We're kickstarting " +#~ "Flower Baselines with implementations of " +#~ "FedOpt (FedYogi, FedAdam, FedAdagrad), FedBN," +#~ " and FedAvgM. Check the documentation " +#~ "on how to use [Flower " +#~ "Baselines](https://flower.ai/docs/using-baselines.html). " +#~ "With this first preview release we're" +#~ " also inviting the community to " +#~ "[contribute their own " +#~ "baselines](https://flower.ai/docs/contributing-baselines.html)." #~ msgstr "" #~ msgid "" -#~ "`Developing inside a Container " -#~ "`_" +#~ "Flower usage examples used to be " +#~ "bundled with Flower in a package " +#~ "called ``flwr_example``. We are migrating " +#~ "those examples to standalone projects to" +#~ " make them easier to use. All " +#~ "new examples are based in the " +#~ "directory `examples " +#~ "`_." +#~ msgstr "" + +#~ msgid "The following examples are available as standalone projects." +#~ msgstr "" + +#~ msgid "Quickstart TensorFlow/Keras" #~ msgstr "" #~ msgid "" -#~ "`Remote development in Containers " -#~ "`_" +#~ "`Quickstart TensorFlow (Tutorial) " +#~ "`_" #~ msgstr "" #~ msgid "" -#~ "If you are not familiar with " -#~ "Flower Baselines, you should probably " -#~ "check-out our `contributing guide for " -#~ "baselines `_." +#~ "`Quickstart PyTorch (Tutorial) " +#~ "`_" #~ msgstr "" #~ msgid "" -#~ "You should then check out the open" -#~ " `issues " -#~ "`_" -#~ " for baseline requests. If you find" -#~ " a baseline that you'd like to " -#~ "work on and that has no assignes," -#~ " feel free to assign it to " -#~ "yourself and start working on it!" +#~ "`PyTorch: From Centralized To Federated " +#~ "(Tutorial) `_" +#~ msgstr "" + +#~ msgid "Legacy Examples (`flwr_example`)" #~ msgstr "" #~ msgid "" -#~ "If you're familiar with how contributing" -#~ " on GitHub works, you can directly" -#~ " checkout our `getting started guide " -#~ "for contributors `_." 
+#~ "The useage examples in `flwr_example` " +#~ "are deprecated and will be removed " +#~ "in the future. New examples are " +#~ "provided as standalone projects in " +#~ "`examples `_." +#~ msgstr "" + +#~ msgid "Extra Dependencies" #~ msgstr "" #~ msgid "" -#~ "Git is a distributed version control " -#~ "tool. This allows for an entire " -#~ "codebase's history to be stored and " -#~ "every developer's machine. It is a " -#~ "software that will need to be " -#~ "installed on your local machine, you " -#~ "can follow this `guide " -#~ "`_ to set it up." +#~ "The core Flower framework keeps a " +#~ "minimal set of dependencies. The " +#~ "examples demonstrate Flower in the " +#~ "context of different machine learning " +#~ "frameworks, so additional dependencies need" +#~ " to be installed before an example" +#~ " can be run." +#~ msgstr "" + +#~ msgid "For PyTorch examples::" +#~ msgstr "" + +#~ msgid "For TensorFlow examples::" +#~ msgstr "" + +#~ msgid "For both PyTorch and TensorFlow examples::" #~ msgstr "" #~ msgid "" -#~ "A fork is a personal copy of " -#~ "a GitHub repository. To create one " -#~ "for Flower, you must navigate to " -#~ "https://github.com/adap/flower (while connected to" -#~ " your GitHub account) and click the" -#~ " ``Fork`` button situated on the top" -#~ " right of the page." +#~ "Please consult :code:`pyproject.toml` for a" +#~ " full list of possible extras " +#~ "(section :code:`[tool.poetry.extras]`)." +#~ msgstr "" + +#~ msgid "PyTorch Examples" #~ msgstr "" #~ msgid "" -#~ "Now we will add an upstream " -#~ "address to our repository. Still in " -#~ "the same directroy, we must run " -#~ "the following command:" +#~ "Our PyTorch examples are based on " +#~ "PyTorch 1.7. They should work with " +#~ "other releases as well. So far, we" +#~ " provide the following examples." +#~ msgstr "" + +#~ msgid "CIFAR-10 Image Classification" #~ msgstr "" #~ msgid "" -#~ "This can be achieved by following " -#~ "this `getting started guide for " -#~ "contributors`_ (note that you won't need" -#~ " to clone the repository). Once you" -#~ " are able to write code and " -#~ "test it, you can finally start " -#~ "making changes!" +#~ "`CIFAR-10 and CIFAR-100 " +#~ "`_ are " +#~ "popular RGB image datasets. The Flower" +#~ " CIFAR-10 example uses PyTorch to " +#~ "train a simple CNN classifier in a" +#~ " federated learning setup with two " +#~ "clients." +#~ msgstr "" + +#~ msgid "First, start a Flower server:" +#~ msgstr "" + +#~ msgid "$ ./src/py/flwr_example/pytorch_cifar/run-server.sh" +#~ msgstr "" + +#~ msgid "Then, start the two clients in a new terminal window:" +#~ msgstr "" + +#~ msgid "$ ./src/py/flwr_example/pytorch_cifar/run-clients.sh" #~ msgstr "" -#~ msgid "" -#~ "For our documentation, we’ve started to" -#~ " use the `Diàtaxis framework " -#~ "`_." +#~ msgid "For more details, see :code:`src/py/flwr_example/pytorch_cifar`." #~ msgstr "" -#~ msgid "" -#~ "Our “How to” guides should have " -#~ "titles that continue the sencence “How" -#~ " to …”, for example, “How to " -#~ "upgrade to Flower 1.0”." +#~ msgid "ImageNet-2012 Image Classification" #~ msgstr "" #~ msgid "" -#~ "This issue is about changing the " -#~ "title of a doc from present " -#~ "continious to present simple." +#~ "`ImageNet-2012 `_ is " +#~ "one of the major computer vision " +#~ "datasets. The Flower ImageNet example " +#~ "uses PyTorch to train a ResNet-18 " +#~ "classifier in a federated learning setup" +#~ " with ten clients." 
#~ msgstr "" -#~ msgid "" -#~ "Let's take the example of “Saving " -#~ "Progress” which we changed to “Save " -#~ "Progress”. Does this pass our check?" +#~ msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-server.sh" #~ msgstr "" -#~ msgid "Before: ”How to saving progress” ❌" +#~ msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-clients.sh" #~ msgstr "" -#~ msgid "After: ”How to save progress” ✅" +#~ msgid "For more details, see :code:`src/py/flwr_example/pytorch_imagenet`." #~ msgstr "" -#~ msgid "" -#~ "This is a tiny change, but it’ll" -#~ " allow us to test your end-" -#~ "to-end setup. After cloning and " -#~ "setting up the Flower repo, here’s " -#~ "what you should do:" +#~ msgid "TensorFlow Examples" #~ msgstr "" #~ msgid "" -#~ "Build the docs and check the " -#~ "result: ``_" +#~ "Our TensorFlow examples are based on " +#~ "TensorFlow 2.0 or newer. So far, " +#~ "we provide the following examples." #~ msgstr "" -#~ msgid "Here’s how to change the file name:" +#~ msgid "Fashion-MNIST Image Classification" #~ msgstr "" #~ msgid "" -#~ "Commit the changes (commit messages are" -#~ " always imperative: “Do something”, in " -#~ "this case “Change …”)" +#~ "`Fashion-MNIST `_ is often used as " +#~ "the \"Hello, world!\" of machine " +#~ "learning. We follow this tradition and" +#~ " provide an example which samples " +#~ "random local datasets from Fashion-MNIST" +#~ " and trains a simple image " +#~ "classification model over those partitions." #~ msgstr "" -#~ msgid "" -#~ "`Good first contributions " -#~ "`_, where you should" -#~ " particularly look into the " -#~ ":code:`baselines` contributions." +#~ msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-server.sh" +#~ msgstr "" + +#~ msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-clients.sh" #~ msgstr "" #~ msgid "" -#~ "If the section is completely empty " -#~ "(without any token) or non-existant, " -#~ "the changelog will just contain the " -#~ "title of the PR for the changelog" -#~ " entry, without any description." +#~ "For more details, see " +#~ ":code:`src/py/flwr_example/tensorflow_fashion_mnist`." +#~ msgstr "" + +#~ msgid ":fa:`eye,mr-1` Can Flower run on Juptyter Notebooks / Google Colab?" #~ msgstr "" #~ msgid "" -#~ "Flower uses :code:`pyproject.toml` to manage" -#~ " dependencies and configure development " -#~ "tools (the ones which support it). " -#~ "Poetry is a build tool which " -#~ "supports `PEP 517 " -#~ "`_." +#~ "`Flower meets KOSMoS `_." #~ msgstr "" #~ msgid "" -#~ "This tutorial will show you how to" -#~ " use Flower to build a federated " -#~ "version of an existing machine learning" -#~ " workload with `FedBN `_, a federated training strategy" -#~ " designed for non-iid data. We " -#~ "are using PyTorch to train a " -#~ "Convolutional Neural Network(with Batch " -#~ "Normalization layers) on the CIFAR-10 " -#~ "dataset. When applying FedBN, only few" -#~ " changes needed compared to `Example: " -#~ "PyTorch - From Centralized To Federated" -#~ " `_." +#~ "If you want to check out " +#~ "everything put together, you should " +#~ "check out the full code example: " +#~ "[https://github.com/adap/flower/tree/main/examples/quickstart-" +#~ "huggingface](https://github.com/adap/flower/tree/main/examples" +#~ "/quickstart-huggingface)." #~ msgstr "" #~ msgid "" -#~ "All files are revised based on " -#~ "`Example: PyTorch - From Centralized To" -#~ " Federated `_. 
The " -#~ "only thing to do is modifying the" -#~ " file called :code:`cifar.py`, revised part" -#~ " is shown below:" +#~ "First of all, for running the " +#~ "Flower Python server, it is recommended" +#~ " to create a virtual environment and" +#~ " run everything within a `virtualenv " +#~ "`_. " +#~ "For the Flower client implementation in" +#~ " iOS, it is recommended to use " +#~ "Xcode as our IDE." #~ msgstr "" #~ msgid "" -#~ "So far this should all look fairly" -#~ " familiar if you've used PyTorch " -#~ "before. Let's take the next step " -#~ "and use what we've built to create" -#~ " a federated learning system within " -#~ "FedBN, the sytstem consists of one " -#~ "server and two clients." +#~ "Since CoreML does not allow the " +#~ "model parameters to be seen before " +#~ "training, and accessing the model " +#~ "parameters during or after the training" +#~ " can only be done by specifying " +#~ "the layer name, we need to know" +#~ " this informations beforehand, through " +#~ "looking at the model specification, " +#~ "which are written as proto files. " +#~ "The implementation can be seen in " +#~ ":code:`MLModelInspect`." #~ msgstr "" #~ msgid "" -#~ "If you have read `Example: PyTorch " -#~ "- From Centralized To Federated " -#~ "`_, the following" -#~ " parts are easy to follow, onyl " -#~ ":code:`get_parameters` and :code:`set_parameters` " -#~ "function in :code:`client.py` needed to " -#~ "revise. If not, please read the " -#~ "`Example: PyTorch - From Centralized To" -#~ " Federated `_. first." +#~ "After we have all of the necessary" +#~ " informations, let's create our Flower " +#~ "client." #~ msgstr "" -#~ msgid "Example: Walk-Through PyTorch & MNIST" +#~ msgid "" +#~ "MXNet is no longer maintained and " +#~ "has been moved into `Attic " +#~ "`_. As a " +#~ "result, we would encourage you to " +#~ "use other ML frameworks alongise Flower," +#~ " for example, PyTorch. This tutorial " +#~ "might be removed in future versions " +#~ "of Flower." #~ msgstr "" #~ msgid "" -#~ "In this tutorial we will learn, " -#~ "how to train a Convolutional Neural " -#~ "Network on MNIST using Flower and " -#~ "PyTorch." +#~ "It is recommended to create a " +#~ "virtual environment and run everything " +#~ "within this `virtualenv `_." #~ msgstr "" #~ msgid "" -#~ "Since we want to use PyTorch to" -#~ " solve a computer vision task, let's" -#~ " go ahead an install PyTorch and " -#~ "the **torchvision** library:" +#~ "First of all, it is recommended to" +#~ " create a virtual environment and run" +#~ " everything within a `virtualenv " +#~ "`_." #~ msgstr "" -#~ msgid "Ready... Set... Train!" +#~ msgid "Since we want to use scikt-learn, let's go ahead and install it:" #~ msgstr "" #~ msgid "" -#~ "Now that we have all our " -#~ "dependencies installed, let's run a " -#~ "simple distributed training with two " -#~ "clients and one server. Our training " -#~ "procedure and network architecture are " -#~ "based on PyTorch's `Basic MNIST Example" -#~ " `_. " -#~ "This will allow you see how easy" -#~ " it is to wrap your code with" -#~ " Flower and begin training in a " -#~ "federated way. We provide you with " -#~ "two helper scripts, namely *run-" -#~ "server.sh*, and *run-clients.sh*. Don't " -#~ "be afraid to look inside, they are" -#~ " simple enough =)." +#~ "We load the MNIST dataset from " +#~ "`OpenML `_, a popular" +#~ " image classification dataset of " +#~ "handwritten digits for machine learning. 
" +#~ "The utility :code:`utils.load_mnist()` downloads " +#~ "the training and test data. The " +#~ "training set is split afterwards into" +#~ " 10 partitions with :code:`utils.partition()`." #~ msgstr "" #~ msgid "" -#~ "Go ahead and launch on a terminal" -#~ " the *run-server.sh* script first as" -#~ " follows:" +#~ "Now that you have known how " +#~ "federated XGBoost work with Flower, it's" +#~ " time to run some more comprehensive" +#~ " experiments by customising the " +#~ "experimental settings. In the xgboost-" +#~ "comprehensive example (`full code " +#~ "`_), we provide more options " +#~ "to define various experimental setups, " +#~ "including aggregation strategies, data " +#~ "partitioning and centralised/distributed evaluation." +#~ " We also support `Flower simulation " +#~ "`_ making it easy to " +#~ "simulate large client cohorts in a " +#~ "resource-aware manner. Let's take a " +#~ "look!" #~ msgstr "" -#~ msgid "Now that the server is up and running, go ahead and launch the clients." +#~ msgid "|31e4b1afa87c4b968327bbeafbf184d4|" #~ msgstr "" -#~ msgid "" -#~ "Et voilà! You should be seeing the" -#~ " training procedure and, after a few" -#~ " iterations, the test accuracy for " -#~ "each client." +#~ msgid "|c9d935b4284e4c389a33d86b33e07c0a|" #~ msgstr "" -#~ msgid "Now, let's see what is really happening inside." +#~ msgid "|00727b5faffb468f84dd1b03ded88638|" #~ msgstr "" -#~ msgid "" -#~ "Inside the server helper script *run-" -#~ "server.sh* you will find the following" -#~ " code that basically runs the " -#~ ":code:`server.py`" +#~ msgid "|daf0cf0ff4c24fd29439af78416cf47b|" #~ msgstr "" -#~ msgid "" -#~ "We can go a bit deeper and " -#~ "see that :code:`server.py` simply launches " -#~ "a server that will coordinate three " -#~ "rounds of training. Flower Servers are" -#~ " very customizable, but for simple " -#~ "workloads, we can start a server " -#~ "using the :ref:`start_server ` function and leave " -#~ "all the configuration possibilities at " -#~ "their default values, as seen below." +#~ msgid "|9f093007080d471d94ca90d3e9fde9b6|" #~ msgstr "" - -#~ msgid "" -#~ "Next, let's take a look at the " -#~ "*run-clients.sh* file. You will see " -#~ "that it contains the main loop " -#~ "that starts a set of *clients*." + +#~ msgid "|46a26e6150e0479fbd3dfd655f36eb13|" #~ msgstr "" -#~ msgid "" -#~ "**cid**: is the client ID. It is" -#~ " an integer that uniquely identifies " -#~ "client identifier." +#~ msgid "|3daba297595c4c7fb845d90404a6179a|" #~ msgstr "" -#~ msgid "**sever_address**: String that identifies IP and port of the server." +#~ msgid "|5769874fa9c4455b80b2efda850d39d7|" #~ msgstr "" -#~ msgid "" -#~ "**nb_clients**: This defines the number " -#~ "of clients being created. This piece " -#~ "of information is not required by " -#~ "the client, but it helps us " -#~ "partition the original MNIST dataset to" -#~ " make sure that every client is " -#~ "working on unique subsets of both " -#~ "*training* and *test* sets." +#~ msgid "|ba47ffb421814b0f8f9fa5719093d839|" #~ msgstr "" -#~ msgid "" -#~ "Again, we can go deeper and look" -#~ " inside :code:`flwr_example/quickstart-" -#~ "pytorch/client.py`. After going through the" -#~ " argument parsing code at the " -#~ "beginning of our :code:`main` function, " -#~ "you will find a call to " -#~ ":code:`mnist.load_data`. 
This function is " -#~ "responsible for partitioning the original " -#~ "MNIST datasets (*training* and *test*) " -#~ "and returning a :code:`torch.utils.data.DataLoader`" -#~ " s for each of them. We then" -#~ " instantiate a :code:`PytorchMNISTClient` object" -#~ " with our client ID, our DataLoaders," -#~ " the number of epochs in each " -#~ "round, and which device we want to" -#~ " use for training (CPU or GPU)." +#~ msgid "|aeac5bf79cbf497082e979834717e01b|" #~ msgstr "" -#~ msgid "" -#~ "The :code:`PytorchMNISTClient` object when " -#~ "finally passed to :code:`fl.client.start_client` " -#~ "along with the server's address as " -#~ "the training process begins." +#~ msgid "|ce27ed4bbe95459dba016afc42486ba2|" #~ msgstr "" -#~ msgid "A Closer Look" +#~ msgid "|ae94a7f71dda443cbec2385751427d41|" #~ msgstr "" -#~ msgid "" -#~ "Now, let's look closely into the " -#~ ":code:`PytorchMNISTClient` inside :code:`flwr_example" -#~ ".quickstart-pytorch.mnist` and see what it" -#~ " is doing:" +#~ msgid "|e61fce4d43d243e7bb08bdde97d81ce6|" #~ msgstr "" -#~ msgid "" -#~ "The first thing to notice is that" -#~ " :code:`PytorchMNISTClient` instantiates a CNN" -#~ " model inside its constructor" +#~ msgid "|08cb60859b07461588fe44e55810b050|" #~ msgstr "" +#~ msgid "``BASE_IMAGE_TAG``" +#~ msgstr "``BASE_IMAGE_TAG``" + +#~ msgid "The image tag of the base image." +#~ msgstr "A tag da imagem da imagem base." + #~ msgid "" -#~ "The code for the CNN is available" -#~ " under :code:`quickstart-pytorch.mnist` and " -#~ "it is reproduced below. It is the" -#~ " same network found in `Basic MNIST" -#~ " Example " -#~ "`_." +#~ "Open the notebook ``doc/source/tutorial-" +#~ "get-started-with-flower-pytorch.ipynb``:" #~ msgstr "" #~ msgid "" -#~ "The second thing to notice is that" -#~ " :code:`PytorchMNISTClient` class inherits from" -#~ " the :code:`fl.client.Client`, and hence it" -#~ " must implement the following methods:" +#~ "https://colab.research.google.com/github/adap/flower/blob/main/doc/source" +#~ "/tutorial-get-started-with-flower-" +#~ "pytorch.ipynb" #~ msgstr "" #~ msgid "" -#~ "When comparing the abstract class to " -#~ "its derived class :code:`PytorchMNISTClient` " -#~ "you will notice that :code:`fit` calls" -#~ " a :code:`train` function and that " -#~ ":code:`evaluate` calls a :code:`test`: " -#~ "function." +#~ "https://colab.research.google.com/github/adap/flower/blob/branch-" +#~ "name/doc/source/tutorial-get-started-with-" +#~ "flower-pytorch.ipynb" #~ msgstr "" -#~ msgid "" -#~ "These functions can both be found " -#~ "inside the same :code:`quickstart-" -#~ "pytorch.mnist` module:" +#~ msgid "Virutualenv with Pyenv/Virtualenv" #~ msgstr "" #~ msgid "" -#~ "Observe that these functions encapsulate " -#~ "regular training and test loops and " -#~ "provide :code:`fit` and :code:`evaluate` with" -#~ " final statistics for each round. You" -#~ " could substitute them with your " -#~ "custom train and test loops and " -#~ "change the network architecture, and the" -#~ " entire example would still work " -#~ "flawlessly. As a matter of fact, " -#~ "why not try and modify the code" -#~ " to an example of your liking?" +#~ "It is important to follow the " +#~ "instructions described in comments. For " +#~ "instance, in order to not break " +#~ "how our changelog system works, you " +#~ "should read the information above the" +#~ " ``Changelog entry`` section carefully. You" +#~ " can also checkout some examples and" +#~ " details in the :ref:`changelogentry` " +#~ "appendix." 
#~ msgstr "" -#~ msgid "Give It a Try" +#~ msgid "Open a PR (as shown above)" #~ msgstr "" -#~ msgid "" -#~ "Looking through the quickstart code " -#~ "description above will have given a " -#~ "good understanding of how *clients* and" -#~ " *servers* work in Flower, how to " -#~ "run a simple experiment, and the " -#~ "internals of a client wrapper. Here " -#~ "are a few things you could try " -#~ "on your own and get more " -#~ "experience with Flower:" +#~ msgid "How to write a good PR title" #~ msgstr "" #~ msgid "" -#~ "Try and change :code:`PytorchMNISTClient` so" -#~ " it can accept different architectures." +#~ "A well-crafted PR title helps team" +#~ " members quickly understand the purpose " +#~ "and scope of the changes being " +#~ "proposed. Here's a guide to help " +#~ "you write a good GitHub PR title:" #~ msgstr "" #~ msgid "" -#~ "Modify the :code:`train` function so " -#~ "that it accepts different optimizers" +#~ "1. Be Clear and Concise: Provide a" +#~ " clear summary of the changes in " +#~ "a concise manner. 1. Use Actionable " +#~ "Verbs: Start with verbs like \"Add,\"" +#~ " \"Update,\" or \"Fix\" to indicate " +#~ "the purpose. 1. Include Relevant " +#~ "Information: Mention the affected feature " +#~ "or module for context. 1. Keep it" +#~ " Short: Avoid lengthy titles for easy" +#~ " readability. 1. Use Proper Capitalization" +#~ " and Punctuation: Follow grammar rules " +#~ "for clarity." #~ msgstr "" #~ msgid "" -#~ "Modify the :code:`test` function so that" -#~ " it proves not only the top-1 " -#~ "(regular accuracy) but also the top-5" -#~ " accuracy?" +#~ "Let's start with a few examples " +#~ "for titles that should be avoided " +#~ "because they do not provide meaningful" +#~ " information:" #~ msgstr "" -#~ msgid "" -#~ "Go larger! Try to adapt the code" -#~ " to larger images and datasets. Why" -#~ " not try training on ImageNet with" -#~ " a ResNet-50?" +#~ msgid "Implement Algorithm" #~ msgstr "" -#~ msgid "You are ready now. Enjoy learning in a federated way!" +#~ msgid "Database" #~ msgstr "" -#~ msgid "Differential privacy" +#~ msgid "Add my_new_file.py to codebase" #~ msgstr "" -#~ msgid "" -#~ "Flower provides differential privacy (DP) " -#~ "wrapper classes for the easy integration" -#~ " of the central DP guarantees " -#~ "provided by DP-FedAvg into training " -#~ "pipelines defined in any of the " -#~ "various ML frameworks that Flower is " -#~ "compatible with." +#~ msgid "Improve code in module" #~ msgstr "" -#~ msgid "" -#~ "Please note that these components are" -#~ " still experimental; the correct " -#~ "configuration of DP for a specific " -#~ "task is still an unsolved problem." +#~ msgid "Change SomeModule" #~ msgstr "" #~ msgid "" -#~ "The name DP-FedAvg is misleading " -#~ "since it can be applied on top " -#~ "of any FL algorithm that conforms " -#~ "to the general structure prescribed by" -#~ " the FedOpt family of algorithms." +#~ "Here are a few positive examples " +#~ "which provide helpful information without " +#~ "repeating how they do it, as that" +#~ " is already visible in the \"Files" +#~ " changed\" section of the PR:" #~ msgstr "" -#~ msgid "DP-FedAvg" +#~ msgid "Update docs banner to mention Flower Summit 2023" #~ msgstr "" -#~ msgid "" -#~ "DP-FedAvg, originally proposed by " -#~ "McMahan et al. [mcmahan]_ and extended" -#~ " by Andrew et al. [andrew]_, is " -#~ "essentially FedAvg with the following " -#~ "modifications." 
+#~ msgid "Remove unnecessary XGBoost dependency" #~ msgstr "" -#~ msgid "" -#~ "**Clipping** : The influence of each " -#~ "client's update is bounded by clipping" -#~ " it. This is achieved by enforcing" -#~ " a cap on the L2 norm of " -#~ "the update, scaling it down if " -#~ "needed." +#~ msgid "Remove redundant attributes in strategies subclassing FedAvg" #~ msgstr "" #~ msgid "" -#~ "**Noising** : Gaussian noise, calibrated " -#~ "to the clipping threshold, is added " -#~ "to the average computed at the " -#~ "server." +#~ "Add CI job to deploy the staging" +#~ " system when the ``main`` branch " +#~ "changes" #~ msgstr "" #~ msgid "" -#~ "The distribution of the update norm " -#~ "has been shown to vary from " -#~ "task-to-task and to evolve as " -#~ "training progresses. This variability is " -#~ "crucial in understanding its impact on" -#~ " differential privacy guarantees, emphasizing " -#~ "the need for an adaptive approach " -#~ "[andrew]_ that continuously adjusts the " -#~ "clipping threshold to track a " -#~ "prespecified quantile of the update norm" -#~ " distribution." +#~ "Add new amazing library which will " +#~ "be used to improve the simulation " +#~ "engine" #~ msgstr "" -#~ msgid "Simplifying Assumptions" +#~ msgid "Changelog entry" #~ msgstr "" #~ msgid "" -#~ "We make (and attempt to enforce) a" -#~ " number of assumptions that must be" -#~ " satisfied to ensure that the " -#~ "training process actually realizes the " -#~ ":math:`(\\epsilon, \\delta)` guarantees the " -#~ "user has in mind when configuring " -#~ "the setup." +#~ "When opening a new PR, inside its" +#~ " description, there should be a " +#~ "``Changelog entry`` header." #~ msgstr "" #~ msgid "" -#~ "**Fixed-size subsampling** :Fixed-size " -#~ "subsamples of the clients must be " -#~ "taken at each round, as opposed to" -#~ " variable-sized Poisson subsamples." +#~ "Above this header you should see " +#~ "the following comment that explains how" +#~ " to write your changelog entry:" #~ msgstr "" #~ msgid "" -#~ "**Unweighted averaging** : The contributions" -#~ " from all the clients must weighted" -#~ " equally in the aggregate to " -#~ "eliminate the requirement for the server" -#~ " to know in advance the sum of" -#~ " the weights of all clients available" -#~ " for selection." +#~ "Inside the following 'Changelog entry' " +#~ "section, you should put the description" +#~ " of your changes that will be " +#~ "added to the changelog alongside your" +#~ " PR title." #~ msgstr "" #~ msgid "" -#~ "**No client failures** : The set " -#~ "of available clients must stay constant" -#~ " across all rounds of training. In" -#~ " other words, clients cannot drop out" -#~ " or fail." +#~ "If the section is completely empty " +#~ "(without any token) or non-existent, " +#~ "the changelog will just contain the " +#~ "title of the PR for the changelog" +#~ " entry, without any description." #~ msgstr "" #~ msgid "" -#~ "The first two are useful for " -#~ "eliminating a multitude of complications " -#~ "associated with calibrating the noise to" -#~ " the clipping threshold, while the " -#~ "third one is required to comply " -#~ "with the assumptions of the privacy " -#~ "analysis." +#~ "If the section contains some text " +#~ "other than tokens, it will use it" +#~ " to add a description to the " +#~ "change." #~ msgstr "" #~ msgid "" -#~ "These restrictions are in line with " -#~ "constraints imposed by Andrew et al. " -#~ "[andrew]_." 
+#~ "If the section contains one of the" +#~ " following tokens it will ignore any" +#~ " other text and put the PR " +#~ "under the corresponding section of the" +#~ " changelog:" #~ msgstr "" -#~ msgid "Customizable Responsibility for Noise injection" +#~ msgid " is for classifying a PR as a general improvement." #~ msgstr "" -#~ msgid "" -#~ "In contrast to other implementations " -#~ "where the addition of noise is " -#~ "performed at the server, you can " -#~ "configure the site of noise injection" -#~ " to better match your threat model." -#~ " We provide users with the " -#~ "flexibility to set up the training " -#~ "such that each client independently adds" -#~ " a small amount of noise to the" -#~ " clipped update, with the result that" -#~ " simply aggregating the noisy updates " -#~ "is equivalent to the explicit addition" -#~ " of noise to the non-noisy " -#~ "aggregate at the server." +#~ msgid " is to not add the PR to the changelog" #~ msgstr "" -#~ msgid "" -#~ "To be precise, if we let :math:`m`" -#~ " be the number of clients sampled " -#~ "each round and :math:`\\sigma_\\Delta` be " -#~ "the scale of the total Gaussian " -#~ "noise that needs to be added to" -#~ " the sum of the model updates, " -#~ "we can use simple maths to show" -#~ " that this is equivalent to each " -#~ "client adding noise with scale " -#~ ":math:`\\sigma_\\Delta/\\sqrt{m}`." +#~ msgid " is to add a general baselines change to the PR" #~ msgstr "" -#~ msgid "Wrapper-based approach" +#~ msgid " is to add a general examples change to the PR" +#~ msgstr "" + +#~ msgid " is to add a general sdk change to the PR" +#~ msgstr "" + +#~ msgid " is to add a general simulations change to the PR" +#~ msgstr "" + +#~ msgid "Note that only one token should be used." #~ msgstr "" #~ msgid "" -#~ "Introducing DP to an existing workload" -#~ " can be thought of as adding an" -#~ " extra layer of security around it." -#~ " This inspired us to provide the " -#~ "additional server and client-side logic" -#~ " needed to make the training process" -#~ " differentially private as wrappers for " -#~ "instances of the :code:`Strategy` and " -#~ ":code:`NumPyClient` abstract classes respectively." -#~ " This wrapper-based approach has the" -#~ " advantage of being easily composable " -#~ "with other wrappers that someone might" -#~ " contribute to the Flower library in" -#~ " the future, e.g., for secure " -#~ "aggregation. Using Inheritance instead can " -#~ "be tedious because that would require" -#~ " the creation of new sub- classes " -#~ "every time a new class implementing " -#~ ":code:`Strategy` or :code:`NumPyClient` is " -#~ "defined." +#~ "Its content must have a specific " +#~ "format. We will break down what " +#~ "each possibility does:" +#~ msgstr "" + +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains nothing or doesn't exist, " +#~ "the following text will be added " +#~ "to the changelog::" #~ msgstr "" -#~ msgid "Server-side logic" +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains a description (and no " +#~ "token), the following text will be " +#~ "added to the changelog::" #~ msgstr "" #~ msgid "" -#~ "The first version of our solution " -#~ "was to define a decorator whose " -#~ "constructor accepted, among other things, " -#~ "a boolean-valued variable indicating " -#~ "whether adaptive clipping was to be " -#~ "enabled or not. 
We quickly realized " -#~ "that this would clutter its " -#~ ":code:`__init__()` function with variables " -#~ "corresponding to hyperparameters of adaptive" -#~ " clipping that would remain unused " -#~ "when it was disabled. A cleaner " -#~ "implementation could be achieved by " -#~ "splitting the functionality into two " -#~ "decorators, :code:`DPFedAvgFixed` and " -#~ ":code:`DPFedAvgAdaptive`, with the latter sub-" -#~ " classing the former. The constructors " -#~ "for both classes accept a boolean " -#~ "parameter :code:`server_side_noising`, which, as " -#~ "the name suggests, determines where " -#~ "noising is to be performed." +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, nothing will change" +#~ " in the changelog." #~ msgstr "" #~ msgid "" -#~ "The server-side capabilities required " -#~ "for the original version of DP-" -#~ "FedAvg, i.e., the one which performed" -#~ " fixed clipping, can be completely " -#~ "captured with the help of wrapper " -#~ "logic for just the following two " -#~ "methods of the :code:`Strategy` abstract " -#~ "class." +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following text" +#~ " will be added to the changelog::" #~ msgstr "" #~ msgid "" -#~ ":code:`configure_fit()` : The config " -#~ "dictionary being sent by the wrapped " -#~ ":code:`Strategy` to each client needs to" -#~ " be augmented with an additional " -#~ "value equal to the clipping threshold" -#~ " (keyed under :code:`dpfedavg_clip_norm`) and," -#~ " if :code:`server_side_noising=true`, another one" -#~ " equal to the scale of the " -#~ "Gaussian noise that needs to be " -#~ "added at the client (keyed under " -#~ ":code:`dpfedavg_noise_stddev`). This entails " -#~ "*post*-processing of the results returned " -#~ "by the wrappee's implementation of " -#~ ":code:`configure_fit()`." +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following " +#~ "text will be added to the " +#~ "changelog::" #~ msgstr "" #~ msgid "" -#~ ":code:`aggregate_fit()`: We check whether any" -#~ " of the sampled clients dropped out" -#~ " or failed to upload an update " -#~ "before the round timed out. In " -#~ "that case, we need to abort the" -#~ " current round, discarding any successful" -#~ " updates that were received, and move" -#~ " on to the next one. On the " -#~ "other hand, if all clients responded " -#~ "successfully, we must force the " -#~ "averaging of the updates to happen " -#~ "in an unweighted manner by intercepting" -#~ " the :code:`parameters` field of " -#~ ":code:`FitRes` for each received update " -#~ "and setting it to 1. Furthermore, " -#~ "if :code:`server_side_noising=true`, each update " -#~ "is perturbed with an amount of " -#~ "noise equal to what it would have" -#~ " been subjected to had client-side" -#~ " noising being enabled. This entails " -#~ "*pre*-processing of the arguments to " -#~ "this method before passing them on " -#~ "to the wrappee's implementation of " -#~ ":code:`aggregate_fit()`." +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following " +#~ "text will be added to the " +#~ "changelog::" #~ msgstr "" #~ msgid "" -#~ "We can't directly change the aggregation" -#~ " function of the wrapped strategy to" -#~ " force it to add noise to the" -#~ " aggregate, hence we simulate client-" -#~ "side noising to implement server-side" -#~ " noising." 
+#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following text " +#~ "will be added to the changelog::" #~ msgstr "" #~ msgid "" -#~ "These changes have been put together " -#~ "into a class called :code:`DPFedAvgFixed`, " -#~ "whose constructor accepts the strategy " -#~ "being decorated, the clipping threshold " -#~ "and the number of clients sampled " -#~ "every round as compulsory arguments. The" -#~ " user is expected to specify the " -#~ "clipping threshold since the order of" -#~ " magnitude of the update norms is " -#~ "highly dependent on the model being " -#~ "trained and providing a default value" -#~ " would be misleading. The number of" -#~ " clients sampled at every round is" -#~ " required to calculate the amount of" -#~ " noise that must be added to " -#~ "each individual update, either by the" -#~ " server or the clients." +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following " +#~ "text will be added to the " +#~ "changelog::" #~ msgstr "" #~ msgid "" -#~ "The additional functionality required to " -#~ "facilitate adaptive clipping has been " -#~ "provided in :code:`DPFedAvgAdaptive`, a " -#~ "subclass of :code:`DPFedAvgFixed`. It " -#~ "overrides the above-mentioned methods to" -#~ " do the following." +#~ "Note that only one token must be" +#~ " provided, otherwise, only the first " +#~ "action (in the order listed above), " +#~ "will be performed." +#~ msgstr "" + +#~ msgid "Example: MXNet - Run MXNet Federated" #~ msgstr "" #~ msgid "" -#~ ":code:`configure_fit()` : It intercepts the" -#~ " config dict returned by " -#~ ":code:`super.configure_fit()` to add the " -#~ "key-value pair " -#~ ":code:`dpfedavg_adaptive_clip_enabled:True` to it, " -#~ "which the client interprets as an " -#~ "instruction to include an indicator bit" -#~ " (1 if update norm <= clipping " -#~ "threshold, 0 otherwise) in the results" -#~ " returned by it." +#~ "This tutorial will show you how to" +#~ " use Flower to build a federated " +#~ "version of an existing MXNet workload." +#~ " We are using MXNet to train a" +#~ " Sequential model on the MNIST " +#~ "dataset. We will structure the example" +#~ " similar to our `PyTorch - From " +#~ "Centralized To Federated " +#~ "`_ walkthrough. " +#~ "MXNet and PyTorch are very similar " +#~ "and a very good comparison between " +#~ "MXNet and PyTorch is given `here " +#~ "`_. First, " +#~ "we build a centralized training approach" +#~ " based on the `Handwritten Digit " +#~ "Recognition " +#~ "`_" +#~ " tutorial. Then, we build upon the" +#~ " centralized training code to run the" +#~ " training in a federated fashion." #~ msgstr "" #~ msgid "" -#~ ":code:`aggregate_fit()` : It follows a " -#~ "call to :code:`super.aggregate_fit()` with one" -#~ " to :code:`__update_clip_norm__()`, a procedure" -#~ " which adjusts the clipping threshold " -#~ "on the basis of the indicator bits" -#~ " received from the sampled clients." +#~ "Before we start setting up our " +#~ "MXNet example, we install the " +#~ ":code:`mxnet` and :code:`flwr` packages:" #~ msgstr "" -#~ msgid "Client-side logic" +#~ msgid "MNIST Training with MXNet" #~ msgstr "" #~ msgid "" -#~ "The client-side capabilities required " -#~ "can be completely captured through " -#~ "wrapper logic for just the :code:`fit()`" -#~ " method of the :code:`NumPyClient` abstract" -#~ " class. 
To be precise, we need " -#~ "to *post-process* the update computed" -#~ " by the wrapped client to clip " -#~ "it, if necessary, to the threshold " -#~ "value supplied by the server as " -#~ "part of the config dictionary. In " -#~ "addition to this, it may need to" -#~ " perform some extra work if either" -#~ " (or both) of the following keys " -#~ "are also present in the dict." +#~ "We begin with a brief description " +#~ "of the centralized training code based" +#~ " on a :code:`Sequential` model. If " +#~ "you want a more in-depth " +#~ "explanation of what's going on then " +#~ "have a look at the official `MXNet" +#~ " tutorial " +#~ "`_." #~ msgstr "" #~ msgid "" -#~ ":code:`dpfedavg_noise_stddev` : Generate and " -#~ "add the specified amount of noise " -#~ "to the clipped update." +#~ "Let's create a new file " +#~ "called:code:`mxnet_mnist.py` with all the " +#~ "components required for a traditional " +#~ "(centralized) MNIST training. First, the " +#~ "MXNet package :code:`mxnet` needs to be" +#~ " imported. You can see that we " +#~ "do not yet import the :code:`flwr` " +#~ "package for federated learning. This " +#~ "will be done later." #~ msgstr "" #~ msgid "" -#~ ":code:`dpfedavg_adaptive_clip_enabled` : Augment the" -#~ " metrics dict in the :code:`FitRes` " -#~ "object being returned to the server " -#~ "with an indicator bit, calculated as " -#~ "described earlier." +#~ "The :code:`load_data()` function loads the " +#~ "MNIST training and test sets." #~ msgstr "" -#~ msgid "Performing the :math:`(\\epsilon, \\delta)` analysis" +#~ msgid "" +#~ "As already mentioned, we will use " +#~ "the MNIST dataset for this machine " +#~ "learning workload. The model architecture " +#~ "(a very simple :code:`Sequential` model) " +#~ "is defined in :code:`model()`." #~ msgstr "" #~ msgid "" -#~ "Assume you have trained for :math:`n`" -#~ " rounds with sampling fraction :math:`q`" -#~ " and noise multiplier :math:`z`. In " -#~ "order to calculate the :math:`\\epsilon` " -#~ "value this would result in for a" -#~ " particular :math:`\\delta`, the following " -#~ "script may be used." +#~ "We now need to define the training" +#~ " (function :code:`train()`) which loops " +#~ "over the training set and measures " +#~ "the loss for each batch of " +#~ "training examples." #~ msgstr "" #~ msgid "" -#~ "McMahan et al. \"Learning Differentially " -#~ "Private Recurrent Language Models.\" " -#~ "International Conference on Learning " -#~ "Representations (ICLR), 2017." +#~ "The evaluation of the model is " +#~ "defined in function :code:`test()`. The " +#~ "function loops over all test samples " +#~ "and measures the loss and accuracy " +#~ "of the model based on the test " +#~ "dataset." #~ msgstr "" #~ msgid "" -#~ "Andrew, Galen, et al. \"Differentially " -#~ "Private Learning with Adaptive Clipping.\" " -#~ "Advances in Neural Information Processing " -#~ "Systems (NeurIPS), 2021." +#~ "Having defined the data loading, model" +#~ " architecture, training, and evaluation we" +#~ " can put everything together and " +#~ "train our model on MNIST. Note " +#~ "that the GPU/CPU device for the " +#~ "training and testing is defined within" +#~ " the :code:`ctx` (context)." +#~ msgstr "" + +#~ msgid "You can now run your (centralized) MXNet machine learning workload:" #~ msgstr "" #~ msgid "" -#~ "This can be achieved by customizing " -#~ "an existing strategy or by `implementing" -#~ " a custom strategy from scratch " -#~ "`_. 
Here's a nonsensical " -#~ "example that customizes :code:`FedAvg` by " -#~ "adding a custom ``\"hello\": \"world\"`` " -#~ "configuration key/value pair to the " -#~ "config dict of a *single client* " -#~ "(only the first client in the " -#~ "list, the other clients in this " -#~ "round to not receive this \"special\"" -#~ " config value):" +#~ "So far this should all look fairly" +#~ " familiar if you've used MXNet (or" +#~ " even PyTorch) before. Let's take the" +#~ " next step and use what we've " +#~ "built to create a simple federated " +#~ "learning system consisting of one server" +#~ " and two clients." +#~ msgstr "" + +#~ msgid "MXNet meets Flower" #~ msgstr "" #~ msgid "" -#~ "More sophisticated implementations can use " -#~ ":code:`configure_fit` to implement custom " -#~ "client selection logic. A client will" -#~ " only participate in a round if " -#~ "the corresponding :code:`ClientProxy` is " -#~ "included in the the list returned " -#~ "from :code:`configure_fit`." +#~ "So far, it was not easily possible" +#~ " to use MXNet workloads for federated" +#~ " learning because federated learning is " +#~ "not supported in MXNet. Since Flower " +#~ "is fully agnostic towards the underlying" +#~ " machine learning framework, it can " +#~ "be used to federated arbitrary machine" +#~ " learning workloads. This section will " +#~ "show you how Flower can be used" +#~ " to federate our centralized MXNet " +#~ "workload." #~ msgstr "" #~ msgid "" -#~ "More sophisticated implementations can use " -#~ ":code:`configure_evaluate` to implement custom " -#~ "client selection logic. A client will" -#~ " only participate in a round if " -#~ "the corresponding :code:`ClientProxy` is " -#~ "included in the the list returned " -#~ "from :code:`configure_evaluate`." +#~ "The concept to federate an existing " +#~ "workload is always the same and " +#~ "easy to understand. We have to " +#~ "start a *server* and then use the" +#~ " code in :code:`mxnet_mnist.py` for the " +#~ "*clients* that are connected to the " +#~ "*server*. The *server* sends model " +#~ "parameters to the clients. The *clients*" +#~ " run the training and update the " +#~ "parameters. The updated parameters are " +#~ "sent back to the *server* which " +#~ "averages all received parameter updates. " +#~ "This describes one round of the " +#~ "federated learning process and we repeat" +#~ " this for multiple rounds." #~ msgstr "" #~ msgid "" -#~ "`How to run Flower using Docker " -#~ "`_" +#~ "Finally, we will define our *client* " +#~ "logic in :code:`client.py` and build " +#~ "upon the previously defined MXNet " +#~ "training in :code:`mxnet_mnist.py`. Our " +#~ "*client* needs to import :code:`flwr`, " +#~ "but also :code:`mxnet` to update the " +#~ "parameters on our MXNet model:" #~ msgstr "" #~ msgid "" -#~ "Ray Dashboard: ``_" +#~ "Implementing a Flower *client* basically " +#~ "means implementing a subclass of either" +#~ " :code:`flwr.client.Client` or " +#~ ":code:`flwr.client.NumPyClient`. Our implementation " +#~ "will be based on " +#~ ":code:`flwr.client.NumPyClient` and we'll call " +#~ "it :code:`MNISTClient`. :code:`NumPyClient` is " +#~ "slightly easier to implement than " +#~ ":code:`Client` if you use a framework" +#~ " with good NumPy interoperability (like " +#~ "PyTorch or MXNet) because it avoids " +#~ "some of the boilerplate that would " +#~ "otherwise be necessary. 
:code:`MNISTClient` " +#~ "needs to implement four methods, two " +#~ "methods for getting/setting model parameters," +#~ " one method for training the model," +#~ " and one method for testing the " +#~ "model:" +#~ msgstr "" + +#~ msgid "transform MXNet :code:`NDArray`'s to NumPy :code:`ndarray`'s" #~ msgstr "" #~ msgid "" -#~ "Ray Metrics: ``_" +#~ "The challenging part is to transform " +#~ "the MXNet parameters from :code:`NDArray` " +#~ "to :code:`NumPy Arrays` to make it " +#~ "readable for Flower." #~ msgstr "" -#~ msgid "Enjoy building more robust and flexible ``ClientApp``s with mods!" +#~ msgid "" +#~ "The two :code:`NumPyClient` methods " +#~ ":code:`fit` and :code:`evaluate` make use " +#~ "of the functions :code:`train()` and " +#~ ":code:`test()` previously defined in " +#~ ":code:`mxnet_mnist.py`. So what we really " +#~ "do here is we tell Flower through" +#~ " our :code:`NumPyClient` subclass which of" +#~ " our already defined functions to " +#~ "call for training and evaluation. We " +#~ "included type annotations to give you" +#~ " a better understanding of the data" +#~ " types that get passed around." #~ msgstr "" #~ msgid "" -#~ ":py:obj:`ClientApp `\\ " -#~ "\\(client\\_fn\\[\\, mods\\]\\)" +#~ "Having defined data loading, model " +#~ "architecture, training, and evaluation we " +#~ "can put everything together and train" +#~ " our :code:`Sequential` model on MNIST." #~ msgstr "" -#~ msgid ":py:obj:`flwr.server.driver `\\" +#~ msgid "" +#~ "in each window (make sure that the" +#~ " server is still running before you" +#~ " do so) and see your MXNet " +#~ "project run federated learning across " +#~ "two clients. Congratulations!" #~ msgstr "" -#~ msgid "Flower driver SDK." +#~ msgid "" +#~ "The full source code for this " +#~ "example: `MXNet: From Centralized To " +#~ "Federated (Code) " +#~ "`_. Our " +#~ "example is of course somewhat over-" +#~ "simplified because both clients load the" +#~ " exact same dataset, which isn't " +#~ "realistic. You're now prepared to " +#~ "explore this topic further. How about" +#~ " using a CNN or using a " +#~ "different dataset? How about adding more" +#~ " clients?" #~ msgstr "" -#~ msgid "driver" +#~ msgid "" +#~ "This guide describes how to a " +#~ "SSL-enabled secure Flower server can " +#~ "be started and how a Flower client" +#~ " can establish a secure connections " +#~ "to it." #~ msgstr "" #~ msgid "" -#~ ":py:obj:`start_driver `\\ " -#~ "\\(\\*\\[\\, server\\_address\\, server\\, ...\\]\\)" +#~ "The code example comes with a " +#~ "README.md file which will explain how" +#~ " to start it. Although it is " +#~ "already SSL-enabled, it might be " +#~ "less descriptive on how. Stick to " +#~ "this guide for a deeper introduction " +#~ "to the topic." #~ msgstr "" #~ msgid "" -#~ ":py:obj:`Driver `\\ " -#~ "\\(\\[driver\\_service\\_address\\, ...\\]\\)" +#~ "Using SSL-enabled connections requires " +#~ "certificates to be passed to the " +#~ "server and client. For the purpose " +#~ "of this guide we are going to " +#~ "generate self-signed certificates. 
As " +#~ "this can become quite complex we " +#~ "are going to ask you to run " +#~ "the script in :code:`examples/advanced-" +#~ "tensorflow/certificates/generate.sh`" +#~ msgstr "" + +#~ msgid "with the following command sequence:" #~ msgstr "" #~ msgid "" -#~ ":py:obj:`GrpcDriver `\\ " -#~ "\\(\\[driver\\_service\\_address\\, ...\\]\\)" +#~ "The approach how the SSL certificates" +#~ " are generated in this example can" +#~ " serve as an inspiration and starting" +#~ " point but should not be taken " +#~ "as complete for production environments. " +#~ "Please refer to other sources regarding" +#~ " the issue of correctly generating " +#~ "certificates for production environments." #~ msgstr "" -#~ msgid "`GrpcDriver` provides access to the gRPC Driver API/service." +#~ msgid "" +#~ "In case you are a researcher you" +#~ " might be just fine using the " +#~ "self-signed certificates generated using " +#~ "the scripts which are part of this" +#~ " guide." #~ msgstr "" -#~ msgid ":py:obj:`get_nodes `\\ \\(\\)" +#~ msgid "" +#~ "We are now going to show how " +#~ "to write a sever which uses the" +#~ " previously generated scripts." #~ msgstr "" #~ msgid "" -#~ ":py:obj:`pull_task_res " -#~ "`\\ \\(task\\_ids\\)" +#~ "When providing certificates, the server " +#~ "expects a tuple of three certificates." +#~ " :code:`Path` can be used to easily" +#~ " read the contents of those files " +#~ "into byte strings, which is the " +#~ "data type :code:`start_server` expects." #~ msgstr "" -#~ msgid "Get task results." +#~ msgid "" +#~ "We are now going to show how " +#~ "to write a client which uses the" +#~ " previously generated scripts:" #~ msgstr "" #~ msgid "" -#~ ":py:obj:`push_task_ins " -#~ "`\\ " -#~ "\\(task\\_ins\\_list\\)" +#~ "When setting :code:`root_certificates`, the " +#~ "client expects the PEM-encoded root " +#~ "certificates as a byte string. We " +#~ "are again using :code:`Path` to simplify" +#~ " reading those as byte strings." #~ msgstr "" -#~ msgid "Schedule tasks." +#~ msgid "" +#~ "You should now have learned how to" +#~ " generate self-signed certificates using" +#~ " the given script, start a SSL-" +#~ "enabled server, and have a client " +#~ "establish a secure connection to it." #~ msgstr "" -#~ msgid "GrpcDriver" +#~ msgid "" +#~ "The simplest way to get started " +#~ "with Flower is by using the " +#~ "pre-made Docker images, which you can" +#~ " find on `Docker Hub " +#~ "`_." #~ msgstr "" -#~ msgid ":py:obj:`connect `\\ \\(\\)" +#~ msgid "Flower server" #~ msgstr "" -#~ msgid "Connect to the Driver API." +#~ msgid "" +#~ "The command will pull the Docker " +#~ "image with the tag " +#~ "``1.7.0-py3.11-ubuntu22.04`` from Docker Hub. " +#~ "The tag contains the information which" +#~ " Flower, Python and Ubuntu is used." +#~ " In this case, it uses Flower " +#~ "1.7.0, Python 3.11 and Ubuntu 22.04. " +#~ "The ``--rm`` flag tells Docker to " +#~ "remove the container after it exits." #~ msgstr "" #~ msgid "" -#~ ":py:obj:`create_run " -#~ "`\\ \\(req\\)" +#~ "By default, the Flower server keeps " +#~ "state in-memory. When using the " +#~ "Docker flag ``--rm``, the state is " +#~ "not persisted between container starts. " +#~ "We will show below how to save " +#~ "the state in a file on your " +#~ "host system." #~ msgstr "" -#~ msgid "Request for run ID." 
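For reference, a minimal sketch of the certificate handling described in the SSL-related entries above, assuming the legacy ``flwr.server.start_server`` / ``flwr.client.start_numpy_client`` API that these strings document, with placeholder file names standing in for the output of the ``generate.sh`` script::

    from pathlib import Path

    import flwr as fl

    # Server side: ``start_server`` expects a tuple of three byte strings
    # (CA certificate, server certificate, server private key).
    fl.server.start_server(
        server_address="0.0.0.0:8080",
        certificates=(
            Path("certificates/ca.crt").read_bytes(),      # placeholder paths
            Path("certificates/server.pem").read_bytes(),
            Path("certificates/server.key").read_bytes(),
        ),
    )

    # Client side (run in a separate process): pass the PEM-encoded root
    # certificate as a byte string via ``root_certificates``.
    # fl.client.start_numpy_client(
    #     server_address="0.0.0.0:8080",
    #     client=my_client,  # hypothetical instance of a NumPyClient subclass
    #     root_certificates=Path("certificates/ca.crt").read_bytes(),
    # )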
+#~ msgid "" +#~ "The ``-p :`` flag tells " +#~ "Docker to map the ports " +#~ "``9091``/``9092`` of the host to " +#~ "``9091``/``9092`` of the container, allowing" +#~ " you to access the Driver API " +#~ "on ``http://localhost:9091`` and the Fleet " +#~ "API on ``http://localhost:9092``. Lastly, any" +#~ " flag that comes after the tag " +#~ "is passed to the Flower server. " +#~ "Here, we are passing the flag " +#~ "``--insecure``." #~ msgstr "" #~ msgid "" -#~ ":py:obj:`disconnect " -#~ "`\\ \\(\\)" +#~ "The ``--insecure`` flag enables insecure " +#~ "communication (using HTTP, not HTTPS) " +#~ "and should only be used for " +#~ "testing purposes. We strongly recommend " +#~ "enabling `SSL `_ when " +#~ "deploying to a production environment." #~ msgstr "" -#~ msgid "Disconnect from the Driver API." +#~ msgid "" +#~ "You can use ``--help`` to view all" +#~ " available flags that the server " +#~ "supports:" #~ msgstr "" #~ msgid "" -#~ ":py:obj:`get_nodes `\\" -#~ " \\(req\\)" +#~ "If you want to persist the state" +#~ " of the server on your host " +#~ "system, all you need to do is " +#~ "specify a path where you want to" +#~ " save the file on your host " +#~ "system and a name for the database" +#~ " file. In the example below, we " +#~ "tell Docker via the flag ``-v`` to" +#~ " mount the user's home directory " +#~ "(``~/`` on your host) into the " +#~ "``/app/`` directory of the container. " +#~ "Furthermore, we use the flag " +#~ "``--database`` to specify the name of" +#~ " the database file." #~ msgstr "" -#~ msgid "Get client IDs." +#~ msgid "" +#~ "As soon as the server starts, the" +#~ " file ``state.db`` is created in the" +#~ " user's home directory on your host" +#~ " system. If the file already exists," +#~ " the server tries to restore the " +#~ "state from the file. To start the" +#~ " server with an empty database, " +#~ "simply remove the ``state.db`` file." +#~ msgstr "" + +#~ msgid "" +#~ "To enable SSL, you will need a " +#~ "CA certificate, a server certificate and" +#~ " a server private key." +#~ msgstr "" + +#~ msgid "" +#~ "For testing purposes, you can generate" +#~ " your own self-signed certificates. " +#~ "The `Enable SSL connections " +#~ "`_ page contains " +#~ "a section that will guide you " +#~ "through the process." +#~ msgstr "" + +#~ msgid "" +#~ "Assuming all files we need are in" +#~ " the local ``certificates`` directory, we" +#~ " can use the flag ``-v`` to " +#~ "mount the local directory into the " +#~ "``/app/`` directory of the container. " +#~ "This allows the server to access " +#~ "the files within the container. Finally," +#~ " we pass the names of the " +#~ "certificates to the server with the " +#~ "``--certificates`` flag." +#~ msgstr "" + +#~ msgid "Using a different Flower or Python version" #~ msgstr "" #~ msgid "" -#~ ":py:obj:`pull_task_res " -#~ "`\\ \\(req\\)" +#~ "If you want to use a different " +#~ "version of Flower or Python, you " +#~ "can do so by changing the tag. " +#~ "All versions we provide are available" +#~ " on `Docker Hub " +#~ "`_." #~ msgstr "" #~ msgid "" -#~ ":py:obj:`push_task_ins " -#~ "`\\ \\(req\\)" +#~ "The following command returns the " +#~ "current image hash referenced by the " +#~ "``server:1.7.0-py3.11-ubuntu22.04`` tag:" #~ msgstr "" -#~ msgid "" -#~ "Optionally specify the type of actor " -#~ "to use. The actor object, which " -#~ "persists throughout the simulation, will " -#~ "be the process in charge of " -#~ "running the clients' jobs (i.e. their" -#~ " `fit()` method)." 
+#~ msgid "Next, we can pin the hash when running a new server container:" #~ msgstr "" #~ msgid "" -#~ "Much effort went into a completely " -#~ "restructured Flower docs experience. The " -#~ "documentation on [flower.ai/docs](flower.ai/docs) is" -#~ " now divided into Flower Framework, " -#~ "Flower Baselines, Flower Android SDK, " -#~ "Flower iOS SDK, and code example " -#~ "projects." +#~ "QUICKSTART TUTORIALS: :doc:`PyTorch ` | :doc:`TensorFlow " +#~ "` | :doc:`🤗 " +#~ "Transformers ` " +#~ "| :doc:`JAX ` |" +#~ " :doc:`Pandas ` " +#~ "| :doc:`fastai `" +#~ " | :doc:`PyTorch Lightning ` | :doc:`MXNet " +#~ "` | :doc" +#~ ":`scikit-learn `" +#~ " | :doc:`XGBoost ` | :doc:`Android ` | :doc:`iOS `" #~ msgstr "" -#~ msgid "" -#~ "The first preview release of Flower " -#~ "Baselines has arrived! We're kickstarting " -#~ "Flower Baselines with implementations of " -#~ "FedOpt (FedYogi, FedAdam, FedAdagrad), FedBN," -#~ " and FedAvgM. Check the documentation " -#~ "on how to use [Flower " -#~ "Baselines](https://flower.ai/docs/using-baselines.html). " -#~ "With this first preview release we're" -#~ " also inviting the community to " -#~ "[contribute their own " -#~ "baselines](https://flower.ai/docs/contributing-baselines.html)." +#~ msgid "flower-driver-api" #~ msgstr "" -#~ msgid "" -#~ "Flower usage examples used to be " -#~ "bundled with Flower in a package " -#~ "called ``flwr_example``. We are migrating " -#~ "those examples to standalone projects to" -#~ " make them easier to use. All " -#~ "new examples are based in the " -#~ "directory `examples " -#~ "`_." +#~ msgid "flower-fleet-api" #~ msgstr "" -#~ msgid "The following examples are available as standalone projects." +#~ msgid "" +#~ "Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +#~ "[:py:class:`str`, :py:obj:`~typing.Union`\\ " +#~ "[:py:class:`int`, :py:class:`float`, :py:class:`str`, " +#~ ":py:class:`bytes`, :py:class:`bool`, " +#~ ":py:class:`~typing.List`\\ [:py:class:`int`], " +#~ ":py:class:`~typing.List`\\ [:py:class:`float`], " +#~ ":py:class:`~typing.List`\\ [:py:class:`str`], " +#~ ":py:class:`~typing.List`\\ [:py:class:`bytes`], " +#~ ":py:class:`~typing.List`\\ [:py:class:`bool`]]]" #~ msgstr "" -#~ msgid "Quickstart TensorFlow/Keras" +#~ msgid "" +#~ ":py:obj:`create_error_reply " +#~ "`\\ \\(error\\, " +#~ "ttl\\)" #~ msgstr "" #~ msgid "" -#~ "`Quickstart TensorFlow (Tutorial) " -#~ "`_" +#~ ":py:obj:`create_reply `\\ " +#~ "\\(content\\, ttl\\)" #~ msgstr "" #~ msgid "" -#~ "`Quickstart PyTorch (Tutorial) " -#~ "`_" +#~ "Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +#~ "[:py:class:`str`, :py:obj:`~typing.Union`\\ " +#~ "[:py:class:`int`, :py:class:`float`, " +#~ ":py:class:`~typing.List`\\ [:py:class:`int`], " +#~ ":py:class:`~typing.List`\\ [:py:class:`float`]]]" +#~ msgstr "" + +#~ msgid "Run Flower server (Driver API and Fleet API)." #~ msgstr "" #~ msgid "" -#~ "`PyTorch: From Centralized To Federated " -#~ "(Tutorial) `_" +#~ ":py:obj:`start_driver `\\ " +#~ "\\(\\*\\[\\, server\\_address\\, server\\, ...\\]\\)" #~ msgstr "" -#~ msgid "Legacy Examples (`flwr_example`)" +#~ msgid "Start a Flower Driver API server." #~ msgstr "" #~ msgid "" -#~ "The useage examples in `flwr_example` " -#~ "are deprecated and will be removed " -#~ "in the future. New examples are " -#~ "provided as standalone projects in " -#~ "`examples `_." 
+#~ ":py:obj:`Driver `\\ " +#~ "\\(\\[driver\\_service\\_address\\, ...\\]\\)" #~ msgstr "" -#~ msgid "Extra Dependencies" +#~ msgid "`Driver` class provides an interface to the Driver API." #~ msgstr "" #~ msgid "" -#~ "The core Flower framework keeps a " -#~ "minimal set of dependencies. The " -#~ "examples demonstrate Flower in the " -#~ "context of different machine learning " -#~ "frameworks, so additional dependencies need" -#~ " to be installed before an example" -#~ " can be run." +#~ "The IPv4 or IPv6 address of the" +#~ " Driver API server. Defaults to " +#~ "`\"[::]:9091\"`." #~ msgstr "" -#~ msgid "For PyTorch examples::" +#~ msgid ":py:obj:`close `\\ \\(\\)" #~ msgstr "" -#~ msgid "For TensorFlow examples::" +#~ msgid "Disconnect from the SuperLink if connected." #~ msgstr "" -#~ msgid "For both PyTorch and TensorFlow examples::" +#~ msgid "" +#~ ":py:obj:`create_message `\\" +#~ " \\(content\\, message\\_type\\, ...\\)" #~ msgstr "" #~ msgid "" -#~ "Please consult :code:`pyproject.toml` for a" -#~ " full list of possible extras " -#~ "(section :code:`[tool.poetry.extras]`)." +#~ "Time-to-live for the round trip" +#~ " of this message, i.e., the time " +#~ "from sending this message to receiving" +#~ " a reply. It specifies the duration" +#~ " for which the message and its " +#~ "potential reply are considered valid." #~ msgstr "" -#~ msgid "PyTorch Examples" +#~ msgid "start\\_driver" #~ msgstr "" #~ msgid "" -#~ "Our PyTorch examples are based on " -#~ "PyTorch 1.7. They should work with " -#~ "other releases as well. So far, we" -#~ " provide the following examples." +#~ "The IPv4 or IPv6 address of the" +#~ " Driver API server. Defaults to " +#~ "`\"[::]:8080\"`." #~ msgstr "" -#~ msgid "CIFAR-10 Image Classification" +#~ msgid "" +#~ "A server implementation, either " +#~ "`flwr.server.Server` or a subclass thereof." +#~ " If no instance is provided, then " +#~ "`start_driver` will create one." #~ msgstr "" #~ msgid "" -#~ "`CIFAR-10 and CIFAR-100 " -#~ "`_ are " -#~ "popular RGB image datasets. The Flower" -#~ " CIFAR-10 example uses PyTorch to " -#~ "train a simple CNN classifier in a" -#~ " federated learning setup with two " -#~ "clients." +#~ "An implementation of the class " +#~ "`flwr.server.ClientManager`. If no implementation" +#~ " is provided, then `start_driver` will " +#~ "use `flwr.server.SimpleClientManager`." #~ msgstr "" -#~ msgid "First, start a Flower server:" +#~ msgid "The Driver object to use." #~ msgstr "" -#~ msgid "$ ./src/py/flwr_example/pytorch_cifar/run-server.sh" +#~ msgid "Starting a driver that connects to an insecure server:" #~ msgstr "" -#~ msgid "Then, start the two clients in a new terminal window:" +#~ msgid "Starting a driver that connects to an SSL-enabled server:" #~ msgstr "" -#~ msgid "$ ./src/py/flwr_example/pytorch_cifar/run-clients.sh" +#~ msgid "" +#~ ":py:obj:`run_simulation_from_cli " +#~ "`\\ \\(\\)" #~ msgstr "" -#~ msgid "For more details, see :code:`src/py/flwr_example/pytorch_cifar`." +#~ msgid "Run Simulation Engine from the CLI." #~ msgstr "" -#~ msgid "ImageNet-2012 Image Classification" +#~ msgid "run\\_simulation\\_from\\_cli" #~ msgstr "" #~ msgid "" -#~ "`ImageNet-2012 `_ is " -#~ "one of the major computer vision " -#~ "datasets. The Flower ImageNet example " -#~ "uses PyTorch to train a ResNet-18 " -#~ "classifier in a federated learning setup" -#~ " with ten clients." 
+#~ "Check out this Federated Learning " +#~ "quickstart tutorial for using Flower " +#~ "with MXNet to train a Sequential " +#~ "model on MNIST." #~ msgstr "" -#~ msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-server.sh" +#~ msgid "Quickstart MXNet" #~ msgstr "" -#~ msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-clients.sh" +#~ msgid "" +#~ "MXNet is no longer maintained and " +#~ "has been moved into `Attic " +#~ "`_. As a " +#~ "result, we would encourage you to " +#~ "use other ML frameworks alongside " +#~ "Flower, for example, PyTorch. This " +#~ "tutorial might be removed in future " +#~ "versions of Flower." #~ msgstr "" -#~ msgid "For more details, see :code:`src/py/flwr_example/pytorch_imagenet`." +#~ msgid "" +#~ "In this tutorial, we will learn " +#~ "how to train a :code:`Sequential` model" +#~ " on MNIST using Flower and MXNet." #~ msgstr "" -#~ msgid "TensorFlow Examples" +#~ msgid "Since we want to use MXNet, let's go ahead and install it:" #~ msgstr "" #~ msgid "" -#~ "Our TensorFlow examples are based on " -#~ "TensorFlow 2.0 or newer. So far, " -#~ "we provide the following examples." +#~ "Now that we have all our " +#~ "dependencies installed, let's run a " +#~ "simple distributed training with two " +#~ "clients and one server. Our training " +#~ "procedure and network architecture are " +#~ "based on MXNet´s `Hand-written Digit " +#~ "Recognition tutorial " +#~ "`_." #~ msgstr "" -#~ msgid "Fashion-MNIST Image Classification" +#~ msgid "" +#~ "In a file called :code:`client.py`, " +#~ "import Flower and MXNet related " +#~ "packages:" +#~ msgstr "" + +#~ msgid "In addition, define the device allocation in MXNet with:" #~ msgstr "" #~ msgid "" -#~ "`Fashion-MNIST `_ is often used as " -#~ "the \"Hello, world!\" of machine " -#~ "learning. We follow this tradition and" -#~ " provide an example which samples " -#~ "random local datasets from Fashion-MNIST" -#~ " and trains a simple image " -#~ "classification model over those partitions." +#~ "We use MXNet to load MNIST, a " +#~ "popular image classification dataset of " +#~ "handwritten digits for machine learning. " +#~ "The MXNet utility :code:`mx.test_utils.get_mnist()`" +#~ " downloads the training and test " +#~ "data." #~ msgstr "" -#~ msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-server.sh" +#~ msgid "" +#~ "Define the training and loss with " +#~ "MXNet. We train the model by " +#~ "looping over the dataset, measure the" +#~ " corresponding loss, and optimize it." #~ msgstr "" -#~ msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-clients.sh" +#~ msgid "" +#~ "Next, we define the validation of " +#~ "our machine learning model. We loop " +#~ "over the test set and measure both" +#~ " loss and accuracy on the test " +#~ "set." #~ msgstr "" #~ msgid "" -#~ "For more details, see " -#~ ":code:`src/py/flwr_example/tensorflow_fashion_mnist`." +#~ "After defining the training and testing" +#~ " of a MXNet machine learning model," +#~ " we use these functions to implement" +#~ " a Flower client." #~ msgstr "" -#~ msgid ":fa:`eye,mr-1` Can Flower run on Juptyter Notebooks / Google Colab?" +#~ msgid "Our Flower clients will use a simple :code:`Sequential` model:" #~ msgstr "" #~ msgid "" -#~ "`Flower meets KOSMoS `_." +#~ "After loading the dataset with " +#~ ":code:`load_data()` we perform one forward " +#~ "propagation to initialize the model and" +#~ " model parameters with :code:`model(init)`. " +#~ "Next, we implement a Flower client." 
#~ msgstr "" #~ msgid "" -#~ "If you want to check out " -#~ "everything put together, you should " -#~ "check out the full code example: " -#~ "[https://github.com/adap/flower/tree/main/examples/quickstart-" -#~ "huggingface](https://github.com/adap/flower/tree/main/examples" -#~ "/quickstart-huggingface)." +#~ "Flower provides a convenience class " +#~ "called :code:`NumPyClient` which makes it " +#~ "easier to implement the :code:`Client` " +#~ "interface when your workload uses MXNet." +#~ " Implementing :code:`NumPyClient` usually means" +#~ " defining the following methods " +#~ "(:code:`set_parameters` is optional though):" #~ msgstr "" -#~ msgid "" -#~ "First of all, for running the " -#~ "Flower Python server, it is recommended" -#~ " to create a virtual environment and" -#~ " run everything within a `virtualenv " -#~ "`_. " -#~ "For the Flower client implementation in" -#~ " iOS, it is recommended to use " -#~ "Xcode as our IDE." +#~ msgid "They can be implemented in the following way:" #~ msgstr "" #~ msgid "" -#~ "Since CoreML does not allow the " -#~ "model parameters to be seen before " -#~ "training, and accessing the model " -#~ "parameters during or after the training" -#~ " can only be done by specifying " -#~ "the layer name, we need to know" -#~ " this informations beforehand, through " -#~ "looking at the model specification, " -#~ "which are written as proto files. " -#~ "The implementation can be seen in " -#~ ":code:`MLModelInspect`." +#~ "We can now create an instance of" +#~ " our class :code:`MNISTClient` and add " +#~ "one line to actually run this " +#~ "client:" #~ msgstr "" #~ msgid "" -#~ "After we have all of the necessary" -#~ " informations, let's create our Flower " +#~ "That's it for the client. We only" +#~ " have to implement :code:`Client` or " +#~ ":code:`NumPyClient` and call " +#~ ":code:`fl.client.start_client()` or " +#~ ":code:`fl.client.start_numpy_client()`. The string " +#~ ":code:`\"0.0.0.0:8080\"` tells the client " +#~ "which server to connect to. In our" +#~ " case we can run the server and" +#~ " the client on the same machine, " +#~ "therefore we use :code:`\"0.0.0.0:8080\"`. If" +#~ " we run a truly federated workload" +#~ " with the server and clients running" +#~ " on different machines, all that " +#~ "needs to change is the " +#~ ":code:`server_address` we pass to the " #~ "client." #~ msgstr "" #~ msgid "" -#~ "MXNet is no longer maintained and " -#~ "has been moved into `Attic " -#~ "`_. As a " -#~ "result, we would encourage you to " -#~ "use other ML frameworks alongise Flower," -#~ " for example, PyTorch. This tutorial " -#~ "might be removed in future versions " -#~ "of Flower." +#~ "With both client and server ready, " +#~ "we can now run everything and see" +#~ " federated learning in action. Federated" +#~ " learning systems usually have a " +#~ "server and multiple clients. We " +#~ "therefore have to start the server " +#~ "first:" #~ msgstr "" #~ msgid "" -#~ "It is recommended to create a " -#~ "virtual environment and run everything " -#~ "within this `virtualenv `_." +#~ "Congratulations! You've successfully built and" +#~ " run your first federated learning " +#~ "system. The full `source code " +#~ "`_ for this example can " +#~ "be found in :code:`examples/quickstart-mxnet`." #~ msgstr "" -#~ msgid "" -#~ "First of all, it is recommended to" -#~ " create a virtual environment and run" -#~ " everything within a `virtualenv " -#~ "`_." 
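A condensed sketch of the ``MNISTClient`` flow that the MXNet quickstart entries above describe: ``get_parameters`` returns NumPy ``ndarray``\s, while ``fit`` and ``evaluate`` would delegate to the ``train()`` and ``test()`` helpers from ``mxnet_mnist.py``. The body below is a simplified, framework-agnostic stand-in (toy parameters instead of a real MXNet ``Sequential`` model)::

    import numpy as np

    import flwr as fl


    class MNISTClient(fl.client.NumPyClient):
        def __init__(self):
            # Toy parameters standing in for the Sequential model's weights;
            # the MXNet version converts NDArrays to ndarrays via ``.asnumpy()``.
            self.parameters = [np.zeros((784, 10)), np.zeros(10)]

        def get_parameters(self, config):
            return self.parameters

        def fit(self, parameters, config):
            self.parameters = parameters  # set model parameters
            # ... local training would run here (``train()`` in the original example) ...
            return self.parameters, 1, {}

        def evaluate(self, parameters, config):
            self.parameters = parameters
            # ... local evaluation would run here (``test()`` in the original example) ...
            return 0.0, 1, {"accuracy": 0.0}


    if __name__ == "__main__":
        # The one extra line that actually runs the client:
        fl.client.start_numpy_client(
            server_address="0.0.0.0:8080", client=MNISTClient()
        )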
+#~ msgid "Sets the parameters of a :code:`sklean` LogisticRegression model" #~ msgstr "" -#~ msgid "Since we want to use scikt-learn, let's go ahead and install it:" +#~ msgid ":code:`load_mnist()`" +#~ msgstr "" + +#~ msgid "Loads the MNIST dataset using OpenML" +#~ msgstr "" + +#~ msgid ":code:`shuffle()`" +#~ msgstr "" + +#~ msgid "Shuffles data and its label" +#~ msgstr "" + +#~ msgid ":code:`partition()`" +#~ msgstr "" + +#~ msgid "Splits datasets into a number of partitions" #~ msgstr "" #~ msgid "" #~ "We load the MNIST dataset from " -#~ "`OpenML `_, a popular" -#~ " image classification dataset of " -#~ "handwritten digits for machine learning. " -#~ "The utility :code:`utils.load_mnist()` downloads " -#~ "the training and test data. The " +#~ "`OpenML " +#~ "`_, a" +#~ " popular image classification dataset of" +#~ " handwritten digits for machine learning." +#~ " The utility :code:`utils.load_mnist()` downloads" +#~ " the training and test data. The " #~ "training set is split afterwards into" #~ " 10 partitions with :code:`utils.partition()`." #~ msgstr "" #~ msgid "" -#~ "Now that you have known how " -#~ "federated XGBoost work with Flower, it's" -#~ " time to run some more comprehensive" -#~ " experiments by customising the " -#~ "experimental settings. In the xgboost-" -#~ "comprehensive example (`full code " -#~ "`_), we provide more options " -#~ "to define various experimental setups, " -#~ "including aggregation strategies, data " -#~ "partitioning and centralised/distributed evaluation." -#~ " We also support `Flower simulation " -#~ "`_ making it easy to " -#~ "simulate large client cohorts in a " -#~ "resource-aware manner. Let's take a " -#~ "look!" +#~ "The number of federated learning rounds" +#~ " is set in :code:`fit_round()` and " +#~ "the evaluation is defined in " +#~ ":code:`get_evaluate_fn()`. The evaluation function" +#~ " is called after each federated " +#~ "learning round and gives you information" +#~ " about loss and accuracy." #~ msgstr "" -#~ msgid "|31e4b1afa87c4b968327bbeafbf184d4|" +#~ msgid "Let's get stated!" #~ msgstr "" -#~ msgid "|c9d935b4284e4c389a33d86b33e07c0a|" +#~ msgid "" +#~ "We now have a list of ten " +#~ "training sets and ten validation sets" +#~ " (``trainloaders`` and ``valloaders``) " +#~ "representing the data of ten different" +#~ " organizations. Each ``trainloader``/``valloader`` " +#~ "pair contains 4500 training examples and" +#~ " 500 validation examples. There's also " +#~ "a single ``testloader`` (we did not " +#~ "split the test set). Again, this " +#~ "is only necessary for building research" +#~ " or educational systems, actual federated" +#~ " learning systems have their data " +#~ "naturally distributed across multiple " +#~ "partitions." 
#~ msgstr "" -#~ msgid "|00727b5faffb468f84dd1b03ded88638|" +#~ msgid "|2b5c62c529f6416f840c594cce062fbb|" #~ msgstr "" -#~ msgid "|daf0cf0ff4c24fd29439af78416cf47b|" +#~ msgid "|90b334680cb7467d9a04d39b8e8dca9f|" #~ msgstr "" -#~ msgid "|9f093007080d471d94ca90d3e9fde9b6|" +#~ msgid "|65764ceee89f4335bfd93fd0b115e831|" #~ msgstr "" -#~ msgid "|46a26e6150e0479fbd3dfd655f36eb13|" +#~ msgid "|d97319ec28bb407ea0ab9705e38f3bcf|" #~ msgstr "" -#~ msgid "|3daba297595c4c7fb845d90404a6179a|" +#~ msgid "|11e95ac83a8548d8b3505b4663187d07|" #~ msgstr "" -#~ msgid "|5769874fa9c4455b80b2efda850d39d7|" +#~ msgid "|1dab2f3a23674abc8a6731f20fa10730|" #~ msgstr "" -#~ msgid "|ba47ffb421814b0f8f9fa5719093d839|" +#~ msgid "|7f0ee162da38450788493a21627306f7|" #~ msgstr "" -#~ msgid "|aeac5bf79cbf497082e979834717e01b|" +#~ msgid "|296a1fb72c514b23b3d8905ff0ff98c6|" #~ msgstr "" -#~ msgid "|ce27ed4bbe95459dba016afc42486ba2|" +#~ msgid "|5b1408eec0d746cdb91162a9107b6089|" #~ msgstr "" -#~ msgid "|ae94a7f71dda443cbec2385751427d41|" +#~ msgid "|aef19f4b122c4e8d9f4c57f99bcd5dd2|" #~ msgstr "" -#~ msgid "|e61fce4d43d243e7bb08bdde97d81ce6|" +#~ msgid "|2881a86d8fc54ba29d96b29fc2819f4a|" #~ msgstr "" -#~ msgid "|08cb60859b07461588fe44e55810b050|" +#~ msgid "|ec1fe880237247e0975f52766775ab84|" +#~ msgstr "" + +#~ msgid "|9fdf048ed58d4467b2718cdf4aaf1ec3|" +#~ msgstr "" + +#~ msgid "|ff726bc5505e432388ee2fdd6ef420b9|" #~ msgstr "" + diff --git a/doc/locales/pt_BR/LC_MESSAGES/sphinx.po b/doc/locales/pt_BR/LC_MESSAGES/sphinx.po index 79be6ffce14..45f362cca80 100644 --- a/doc/locales/pt_BR/LC_MESSAGES/sphinx.po +++ b/doc/locales/pt_BR/LC_MESSAGES/sphinx.po @@ -8,7 +8,7 @@ msgid "" msgstr "" "Project-Id-Version: Flower main\n" "Report-Msgid-Bugs-To: \n" -"POT-Creation-Date: 2023-10-19 07:55+0200\n" +"POT-Creation-Date: 2024-05-28 11:47+0200\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language: pt_BR\n" @@ -17,7 +17,7 @@ msgstr "" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" -"Generated-By: Babel 2.12.1\n" +"Generated-By: Babel 2.15.0\n" #: ../../source/_templates/base.html:18 msgid "About these documents" @@ -28,6 +28,7 @@ msgid "Index" msgstr "" #: ../../source/_templates/base.html:24 +#: ../../source/_templates/sidebar/search.html:2 msgid "Search" msgstr "" @@ -35,3 +36,7 @@ msgstr "" msgid "Copyright" msgstr "" +#: ../../source/_templates/sidebar/search.html:2 +msgid "Type / to search" +msgstr "" + diff --git a/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po b/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po index 253dfe0ccc8..47be2dfda76 100644 --- a/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po +++ b/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po @@ -7,18 +7,17 @@ msgid "" msgstr "" "Project-Id-Version: Flower main\n" "Report-Msgid-Bugs-To: \n" -"POT-Creation-Date: 2024-03-15 14:23+0000\n" +"POT-Creation-Date: 2024-05-28 11:47+0200\n" "PO-Revision-Date: 2024-05-10 06:59+0000\n" "Last-Translator: Yan Gao \n" -"Language-Team: Chinese (Simplified) \n" "Language: zh_Hans\n" +"Language-Team: Chinese (Simplified) \n" +"Plural-Forms: nplurals=1; plural=0;\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" -"Plural-Forms: nplurals=1; plural=0;\n" -"X-Generator: Weblate 5.5.4-rc\n" -"Generated-By: Babel 2.14.0\n" +"Generated-By: Babel 2.15.0\n" #: ../../source/contributor-explanation-architecture.rst:2 msgid "Flower Architecture" @@ -30,8 +29,8 @@ msgstr "边缘客户端引擎" #: 
../../source/contributor-explanation-architecture.rst:7 msgid "" -"`Flower `_ core framework architecture with Edge Client " -"Engine" +"`Flower `_ core framework architecture with Edge " +"Client Engine" msgstr "具有边缘客户端引擎的`Flower `核心架构" #: ../../source/contributor-explanation-architecture.rst:13 @@ -50,10 +49,9 @@ msgstr "可同步进行的虚拟客户端引擎和边缘客户端引擎" #: ../../source/contributor-explanation-architecture.rst:23 msgid "" -"`Flower `_ core framework architecture with both Virtual " -"Client Engine and Edge Client Engine" -msgstr "" -"具有虚拟客户端引擎和边缘客户端引擎的`Flower `核心架构" +"`Flower `_ core framework architecture with both " +"Virtual Client Engine and Edge Client Engine" +msgstr "具有虚拟客户端引擎和边缘客户端引擎的`Flower `核心架构" #: ../../source/contributor-how-to-build-docker-images.rst:2 msgid "How to build Docker Flower images locally" @@ -62,17 +60,16 @@ msgstr "如何在本地搭建Docker Flower images" #: ../../source/contributor-how-to-build-docker-images.rst:4 #, fuzzy msgid "" -"Flower provides pre-made docker images on `Docker Hub `_ that include all necessary dependencies for " -"running the server. You can also build your own custom docker images from " -"scratch with a different version of Python or Ubuntu if that is what you " -"need. In this guide, we will explain what images exist and how to build them " -"locally." +"Flower provides pre-made docker images on `Docker Hub " +"`_ that include all necessary dependencies" +" for running the SuperLink. You can also build your own custom docker " +"images from scratch with a different version of Python or Ubuntu if that " +"is what you need. In this guide, we will explain what images exist and " +"how to build them locally." msgstr "" -"Flower 在 `Docker Hub `_ 上提供了" -"预制的 docker 镜像,其中包括运行服务器所需的所有依赖项。如果你需要,也可以使" -"用不同版本的 Python 或 Ubuntu 从头开始构建自己的定制 docker 镜像。在本指南" -"中,我们将介绍有哪些镜像,以及如何在本地构建它们。" +"Flower 在 `Docker Hub `_ " +"上提供了预制的 docker 镜像,其中包括运行服务器所需的所有依赖项。如果你需要,也可以使用不同版本的 Python 或 Ubuntu " +"从头开始构建自己的定制 docker 镜像。在本指南中,我们将介绍有哪些镜像,以及如何在本地构建它们。" #: ../../source/contributor-how-to-build-docker-images.rst:9 #, fuzzy @@ -87,58 +84,56 @@ msgid "Clone the flower repository." msgstr "**叉花仓库**" #: ../../source/contributor-how-to-build-docker-images.rst:17 +#: ../../source/how-to-run-flower-using-docker.rst:144 #, fuzzy msgid "Verify the Docker daemon is running." msgstr "验证 Docker 守护进程是否正在运行。" #: ../../source/contributor-how-to-build-docker-images.rst:19 +#: ../../source/how-to-run-flower-using-docker.rst:146 #, fuzzy msgid "" -"Please follow the first section on :doc:`Run Flower using Docker ` which covers this step in more detail." +"Please follow the first section on :doc:`Run Flower using Docker ` which covers this step in more detail." msgstr "" -"请阅读 :doc:`Run Flower using Docker ` 的第一" -"节,其中更详细地介绍了这一步骤。" +"请阅读 :doc:`Run Flower using Docker ` " +"的第一节,其中更详细地介绍了这一步骤。" #: ../../source/contributor-how-to-build-docker-images.rst:23 #, fuzzy msgid "" -"Currently, Flower provides two images, a base image and a server image. " -"There will also be a client image soon. The base image, as the name " -"suggests, contains basic dependencies that both the server and the client " -"need. This includes system dependencies, Python and Python tools. The server " -"image is based on the base image, but it additionally installs the Flower " -"server using ``pip``." +"Currently, Flower provides two images, a ``base`` image and a " +"``superlink`` image. The base image, as the name suggests, contains basic" +" dependencies that the SuperLink needs. 
This includes system " +"dependencies, Python and Python tools. The SuperLink image is based on " +"the base image, but it additionally installs the SuperLink using ``pip``." msgstr "" -"目前,Flower 提供两个镜像,一个基础镜像和一个服务器镜像。不久还将推出客户端镜" -"像。基础镜像,顾名思义,包含服务器和客户端都需要的基本依赖项。其中包括系统依" -"赖项、Python 和 Python 工具。服务器镜像基于基础镜像,但它会使用 ``pip`` 额外" -"安装 Flower 服务器。" +"目前,Flower " +"提供两个镜像,一个基础镜像和一个服务器镜像。不久还将推出客户端镜像。基础镜像,顾名思义,包含服务器和客户端都需要的基本依赖项。其中包括系统依赖项、Python" +" 和 Python 工具。服务器镜像基于基础镜像,但它会使用 ``pip`` 额外安装 Flower 服务器。" #: ../../source/contributor-how-to-build-docker-images.rst:28 #, fuzzy msgid "" "The build instructions that assemble the images are located in the " -"respective Dockerfiles. You can find them in the subdirectories of ``src/" -"docker``." -msgstr "" -"组装镜像的构建说明位于各自的 Dockerfile 中。你可以在 ``src/docker`` 的子目录" -"中找到它们。" +"respective Dockerfiles. You can find them in the subdirectories of " +"``src/docker``." +msgstr "组装镜像的构建说明位于各自的 Dockerfile 中。你可以在 ``src/docker`` 的子目录中找到它们。" #: ../../source/contributor-how-to-build-docker-images.rst:31 #, fuzzy msgid "" -"Both, base and server image are configured via build arguments. Through " -"build arguments, we can make our build more flexible. For example, in the " -"base image, we can specify the version of Python to install using the " -"``PYTHON_VERSION`` build argument. Some of the build arguments have default " -"values, others must be specified when building the image. All available " -"build arguments for each image are listed in one of the tables below." +"Both, base and SuperLink image are configured via build arguments. " +"Through build arguments, we can make our build more flexible. For " +"example, in the base image, we can specify the version of Python to " +"install using the ``PYTHON_VERSION`` build argument. Some of the build " +"arguments have default values, others must be specified when building the" +" image. All available build arguments for each image are listed in one of" +" the tables below." 
msgstr "" -"基础镜像和服务器镜像都是通过构建参数配置的。通过联编参数,我们可以使联编更加" -"灵活。例如,在基础镜像中,我们可以使用 ``PYTHON_VERSION`` 联编参数指定要安装" -"的 Python 版本。有些联编参数有默认值,有些则必须在联编映像时指定。每个映像的" -"所有可用联编参数都列在下表中。" +"基础镜像和服务器镜像都是通过构建参数配置的。通过联编参数,我们可以使联编更加灵活。例如,在基础镜像中,我们可以使用 " +"``PYTHON_VERSION`` 联编参数指定要安装的 Python " +"版本。有些联编参数有默认值,有些则必须在联编映像时指定。每个映像的所有可用联编参数都列在下表中。" #: ../../source/contributor-how-to-build-docker-images.rst:38 #, fuzzy @@ -170,6 +165,7 @@ msgid "Example" msgstr "实例" #: ../../source/contributor-how-to-build-docker-images.rst:48 +#: ../../source/contributor-how-to-build-docker-images.rst:94 #, fuzzy msgid "``PYTHON_VERSION``" msgstr "Python 版本" @@ -182,7 +178,7 @@ msgstr "要安装的 ``python`` 版本。" #: ../../source/contributor-how-to-build-docker-images.rst:50 #: ../../source/contributor-how-to-build-docker-images.rst:54 #: ../../source/contributor-how-to-build-docker-images.rst:58 -#: ../../source/contributor-how-to-build-docker-images.rst:100 +#: ../../source/contributor-how-to-build-docker-images.rst:108 #, fuzzy msgid "Yes" msgstr "类型" @@ -223,6 +219,7 @@ msgid "``69.0.2``" msgstr "``1.0.0b0``" #: ../../source/contributor-how-to-build-docker-images.rst:60 +#: ../../source/contributor-how-to-build-docker-images.rst:98 #, fuzzy msgid "``UBUNTU_VERSION``" msgstr "``UBUNTU_VERSION``" @@ -240,25 +237,21 @@ msgstr "默认为 ``22.04``。" #: ../../source/contributor-how-to-build-docker-images.rst:65 #, fuzzy msgid "" -"The following example creates a base image with Python 3.11.0, pip 23.0.1 " -"and setuptools 69.0.2:" -msgstr "" -"下面的示例使用 Python 3.11.0、pip 23.0.1 和 setuptools 69.0.2 创建了基本映" -"像:" +"The following example creates a base image with Python 3.11.0, pip 23.0.1" +" and setuptools 69.0.2:" +msgstr "下面的示例使用 Python 3.11.0、pip 23.0.1 和 setuptools 69.0.2 创建了基本映像:" #: ../../source/contributor-how-to-build-docker-images.rst:76 #, fuzzy msgid "" -"The name of image is ``flwr_base`` and the tag ``0.1.0``. Remember that the " -"build arguments as well as the name and tag can be adapted to your needs. " -"These values serve as examples only." -msgstr "" -"图像名称为 ``flwr_base``,标记为 ``0.1.0``。请记住,编译参数以及名称和标记都" -"可以根据需要进行调整。这些值仅供参考。" +"The name of image is ``flwr_base`` and the tag ``0.1.0``. Remember that " +"the build arguments as well as the name and tag can be adapted to your " +"needs. These values serve as examples only." +msgstr "图像名称为 ``flwr_base``,标记为 ``0.1.0``。请记住,编译参数以及名称和标记都可以根据需要进行调整。这些值仅供参考。" #: ../../source/contributor-how-to-build-docker-images.rst:80 #, fuzzy -msgid "Building the server image" +msgid "Building the SuperLink image" msgstr "启动服务器" #: ../../source/contributor-how-to-build-docker-images.rst:90 @@ -273,72 +266,85 @@ msgstr "基础镜像的存储库名称。" #: ../../source/contributor-how-to-build-docker-images.rst:92 #, fuzzy -msgid "Defaults to ``flwr/server``." +msgid "Defaults to ``flwr/base``." msgstr "默认为 ``flwr/server``。" -#: ../../source/contributor-how-to-build-docker-images.rst:94 -#, fuzzy -msgid "``BASE_IMAGE_TAG``" -msgstr "基本图像标签" - #: ../../source/contributor-how-to-build-docker-images.rst:95 #, fuzzy -msgid "The image tag of the base image." -msgstr "基础图像的图像标记。" +msgid "The Python version of the base image." +msgstr "基础镜像的存储库名称。" #: ../../source/contributor-how-to-build-docker-images.rst:96 #, fuzzy -msgid "Defaults to ``py3.11-ubuntu22.04``." +msgid "Defaults to ``py3.11``." +msgstr "默认为 ``22.04``。" + +#: ../../source/contributor-how-to-build-docker-images.rst:99 +#, fuzzy +msgid "The Ubuntu version of the base image." 
+msgstr "基础镜像的存储库名称。" + +#: ../../source/contributor-how-to-build-docker-images.rst:100 +#, fuzzy +msgid "Defaults to ``ubuntu22.04``." msgstr "默认为 ``py3.11-ubuntu22.04``。" -#: ../../source/contributor-how-to-build-docker-images.rst:98 +#: ../../source/contributor-how-to-build-docker-images.rst:102 +#, fuzzy +msgid "``FLWR_PACKAGE``" +msgstr "``FLWR_VERSION``" + +#: ../../source/contributor-how-to-build-docker-images.rst:103 +msgid "The PyPI package to install." +msgstr "" + +#: ../../source/contributor-how-to-build-docker-images.rst:104 +#, fuzzy +msgid "Defaults to ``flwr``." +msgstr "默认为 ``flwr/server``。" + +#: ../../source/contributor-how-to-build-docker-images.rst:106 #, fuzzy msgid "``FLWR_VERSION``" msgstr "``FLWR_VERSION``" -#: ../../source/contributor-how-to-build-docker-images.rst:99 +#: ../../source/contributor-how-to-build-docker-images.rst:107 #, fuzzy msgid "Version of Flower to be installed." msgstr "要安装的 Flower 版本。" -#: ../../source/contributor-how-to-build-docker-images.rst:101 +#: ../../source/contributor-how-to-build-docker-images.rst:109 #, fuzzy -msgid "``1.7.0``" +msgid "``1.8.0``" msgstr "``1.0.0b0``" -#: ../../source/contributor-how-to-build-docker-images.rst:103 +#: ../../source/contributor-how-to-build-docker-images.rst:112 #, fuzzy msgid "" -"The following example creates a server image with the official Flower base " -"image py3.11-ubuntu22.04 and Flower 1.7.0:" -msgstr "" -"下面的示例使用官方的 Flower 基本镜像 py3.11-ubuntu22.04 和 Flower 1.7.0 创建" -"了一个服务器镜像:" +"The following example creates a SuperLink image with the official Flower " +"base image py3.11-ubuntu22.04 and Flower 1.8.0:" +msgstr "下面的示例使用官方的 Flower 基本镜像 py3.11-ubuntu22.04 和 Flower 1.7.0 创建了一个服务器镜像:" -#: ../../source/contributor-how-to-build-docker-images.rst:114 +#: ../../source/contributor-how-to-build-docker-images.rst:122 #, fuzzy msgid "" -"The name of image is ``flwr_server`` and the tag ``0.1.0``. Remember that " -"the build arguments as well as the name and tag can be adapted to your " -"needs. These values serve as examples only." -msgstr "" -"图像名称为 ``flwr_server``,标记为 ``0.1.0``。请记住,编译参数以及名称和标记" -"都可以根据需要进行调整。这些值仅供参考。" +"The name of image is ``flwr_superlink`` and the tag ``0.1.0``. Remember " +"that the build arguments as well as the name and tag can be adapted to " +"your needs. These values serve as examples only." +msgstr "图像名称为 ``flwr_server``,标记为 ``0.1.0``。请记住,编译参数以及名称和标记都可以根据需要进行调整。这些值仅供参考。" -#: ../../source/contributor-how-to-build-docker-images.rst:117 +#: ../../source/contributor-how-to-build-docker-images.rst:125 #, fuzzy msgid "" -"If you want to use your own base image instead of the official Flower base " -"image, all you need to do is set the ``BASE_REPOSITORY`` and " -"``BASE_IMAGE_TAG`` build arguments. The value of ``BASE_REPOSITORY`` must " -"match the name of your image and the value of ``BASE_IMAGE_TAG`` must match " -"the tag of your image." +"If you want to use your own base image instead of the official Flower " +"base image, all you need to do is set the ``BASE_REPOSITORY``, " +"``PYTHON_VERSION`` and ``UBUNTU_VERSION`` build arguments." 
msgstr "" -"如果您想使用自己的基础图片而不是 Flower 官方的基础图片,只需设置 " -"``BASE_REPOSITORY`` 和 ``BASE_IMAGE_TAG`` 联编参数即可。`BASE_REPOSITORY``的" -"值必须与您的图像名称一致,`BASE_IMAGE_TAG``的值必须与您的图像标签一致。" +"如果您想使用自己的基础图片而不是 Flower 官方的基础图片,只需设置 ``BASE_REPOSITORY`` 和 " +"``BASE_IMAGE_TAG`` " +"联编参数即可。`BASE_REPOSITORY``的值必须与您的图像名称一致,`BASE_IMAGE_TAG``的值必须与您的图像标签一致。" -#: ../../source/contributor-how-to-build-docker-images.rst:131 +#: ../../source/contributor-how-to-build-docker-images.rst:138 #, fuzzy msgid "After creating the image, we can test whether the image is working:" msgstr "创建图像后,我们可以测试图像是否正常工作:" @@ -350,26 +356,25 @@ msgstr "贡献译文" #: ../../source/contributor-how-to-contribute-translations.rst:4 #, fuzzy msgid "" -"Since `Flower 1.5 `_ we have introduced translations to our doc pages, " -"but, as you might have noticed, the translations are often imperfect. If you " -"speak languages other than English, you might be able to help us in our " -"effort to make Federated Learning accessible to as many people as possible " -"by contributing to those translations! This might also be a great " -"opportunity for those wanting to become open source contributors with little " -"prerequisites." +"Since `Flower 1.5 `_ we have introduced translations to " +"our doc pages, but, as you might have noticed, the translations are often" +" imperfect. If you speak languages other than English, you might be able " +"to help us in our effort to make Federated Learning accessible to as many" +" people as possible by contributing to those translations! This might " +"also be a great opportunity for those wanting to become open source " +"contributors with little prerequisites." msgstr "" -"从 `Flower 1.5 `_ 开始,我们在文档页面中引入了翻译,但正如你可能已经" -"注意到的,这些翻译往往并不完美。如果您会说英语以外的语言,也许您可以帮助我们" -"翻译这些文档,让更多的人了解 Federated Learning!对于那些想成为开源贡献者的人" -"来说,这也是一个很好的机会。" +"从 `Flower 1.5 `_ " +"开始,我们在文档页面中引入了翻译,但正如你可能已经注意到的,这些翻译往往并不完美。如果您会说英语以外的语言,也许您可以帮助我们翻译这些文档,让更多的人了解" +" Federated Learning!对于那些想成为开源贡献者的人来说,这也是一个很好的机会。" #: ../../source/contributor-how-to-contribute-translations.rst:13 msgid "" -"Our translation project is publicly available over on `Weblate `_, this where most of " -"the work will happen." +"Our translation project is publicly available over on `Weblate " +"`_, this " +"where most of the work will happen." msgstr "" "我们的翻译项目已在 \"Weblate `_\"上公开,大部分工作都将在这里进行。" @@ -380,41 +385,39 @@ msgstr "为现有语言作出贡献" #: ../../source/contributor-how-to-contribute-translations.rst:23 msgid "" -"The first thing you will need to do in order to contribute is to create a " -"free Weblate account on this `page `_. More information about profile settings can be found `here " +"The first thing you will need to do in order to contribute is to create a" +" free Weblate account on this `page " +"`_. More information about" +" profile settings can be found `here " "`_." msgstr "" -"您需要做的第一件事就是在本`网页`_上创建一个免费的Weblate帐户。有关个人资料设置的更多信息,请参阅`这里 " -"`_。" +"您需要做的第一件事就是在本`网页`_上创建一个免费的Weblate帐户。有关个人资料设置的更多信息,请参阅`这里" +" `_。" #: ../../source/contributor-how-to-contribute-translations.rst:29 msgid "" -"Once you are signed in to Weblate, you can navigate to the `Flower Framework " -"project `_. " -"Here, you should see the different existing languages that can be found on " -"the website." +"Once you are signed in to Weblate, you can navigate to the `Flower " +"Framework project `_. Here, you should see the different existing languages" +" that can be found on the website." 
msgstr "" -"登录到Weblate后,您可以导航到 \"Flower Framework \"项目`_。在这里,您可以看到网站上现有" -"的各种语言。" +"登录到Weblate后,您可以导航到 \"Flower Framework " +"\"项目`_。在这里,您可以看到网站上现有的各种语言。" #: ../../source/contributor-how-to-contribute-translations.rst:34 msgid "" -"Once you have selected the language you want to contribute to, you should " -"see a similar interface to this:" +"Once you have selected the language you want to contribute to, you should" +" see a similar interface to this:" msgstr "选择您要贡献的语言后,您应该会看到与此类似的界面:" #: ../../source/contributor-how-to-contribute-translations.rst:39 msgid "" "The most straight forward option here is to click on the ``Translate`` " -"button on the top right (in the ``Translation status`` section). This will " -"automatically bring you to the translation interface for untranslated " -"strings." -msgstr "" -"最简单的方法是点击右上角(\"翻译状态 \"部分)的 \"翻译 \"按钮。这将自动带您进" -"入未翻译字符串的翻译界面。" +"button on the top right (in the ``Translation status`` section). This " +"will automatically bring you to the translation interface for " +"untranslated strings." +msgstr "最简单的方法是点击右上角(\"翻译状态 \"部分)的 \"翻译 \"按钮。这将自动带您进入未翻译字符串的翻译界面。" #: ../../source/contributor-how-to-contribute-translations.rst:43 msgid "This is what the interface looks like:" @@ -423,46 +426,43 @@ msgstr "这就是界面的样子:" #: ../../source/contributor-how-to-contribute-translations.rst:47 #, fuzzy msgid "" -"You input your translation in the text box at the top and then, once you are " -"happy with it, you either press ``Save and continue`` (to save the " -"translation and go to the next untranslated string), ``Save and stay`` (to " -"save the translation and stay on the same page), ``Suggest`` (to add your " -"translation to suggestions for other users to view), or ``Skip`` (to go to " -"the next untranslated string without saving anything)." +"You input your translation in the text box at the top and then, once you " +"are happy with it, you either press ``Save and continue`` (to save the " +"translation and go to the next untranslated string), ``Save and stay`` " +"(to save the translation and stay on the same page), ``Suggest`` (to add " +"your translation to suggestions for other users to view), or ``Skip`` (to" +" go to the next untranslated string without saving anything)." msgstr "" -"您可以在顶部的文本框中输入翻译内容,满意后按 \"保存并继续\"(保存翻译内容并转" -"到下一个未翻译的字符串)、\"保存并停留\"(保存翻译内容并停留在同一页" -"面)、\"建议\"(将您的翻译添加到建议中供其他用户查看)或 \"跳过\"(转到下一个" -"未翻译的字符串而不保存任何内容)。" +"您可以在顶部的文本框中输入翻译内容,满意后按 " +"\"保存并继续\"(保存翻译内容并转到下一个未翻译的字符串)、\"保存并停留\"(保存翻译内容并停留在同一页面)、\"建议\"(将您的翻译添加到建议中供其他用户查看)或" +" \"跳过\"(转到下一个未翻译的字符串而不保存任何内容)。" #: ../../source/contributor-how-to-contribute-translations.rst:54 msgid "" "In order to help with the translations, you can see on the bottom the " "``Nearby strings``, the ``Comments`` (from other contributors), the " "``Automatic suggestions`` (from machine translation engines), the " -"translations in ``Other languages``, and the ``History`` of translations for " -"this string." +"translations in ``Other languages``, and the ``History`` of translations " +"for this string." msgstr "" -"为了帮助翻译,您可以在底部看到 \"邻近字符串\"、\"评论\"(来自其他贡献" -"者)、\"自动建议\"(来自机器翻译引擎)、\"其他语言 \"中的翻译以及该字符串的 " -"\"历史翻译\"。" +"为了帮助翻译,您可以在底部看到 \"邻近字符串\"、\"评论\"(来自其他贡献者)、\"自动建议\"(来自机器翻译引擎)、\"其他语言 " +"\"中的翻译以及该字符串的 \"历史翻译\"。" #: ../../source/contributor-how-to-contribute-translations.rst:59 msgid "" -"On the right, under the ``String information`` section, you can also click " -"the link under ``Source string location`` in order to view the source of the " -"doc file containing the string." 
-msgstr "" -"在右侧的 \"字符串信息 \"部分,您还可以单击 \"源字符串位置 \"下的链接,以查看" -"包含字符串的 doc 文件的源文件。" +"On the right, under the ``String information`` section, you can also " +"click the link under ``Source string location`` in order to view the " +"source of the doc file containing the string." +msgstr "在右侧的 \"字符串信息 \"部分,您还可以单击 \"源字符串位置 \"下的链接,以查看包含字符串的 doc 文件的源文件。" #: ../../source/contributor-how-to-contribute-translations.rst:63 msgid "" -"For more information about translating using Weblate, you can check out this " -"`in-depth guide `_." +"For more information about translating using Weblate, you can check out " +"this `in-depth guide " +"`_." msgstr "" -"有关使用 Weblate 进行翻译的更多信息,您可以查看本 \"深入指南 `_\"。" +"有关使用 Weblate 进行翻译的更多信息,您可以查看本 \"深入指南 " +"`_\"。" #: ../../source/contributor-how-to-contribute-translations.rst:67 msgid "Add new languages" @@ -470,13 +470,12 @@ msgstr "添加新语言" #: ../../source/contributor-how-to-contribute-translations.rst:69 msgid "" -"If you want to add a new language, you will first have to contact us, either " -"on `Slack `_, or by opening an issue on our " -"`GitHub repo `_." +"If you want to add a new language, you will first have to contact us, " +"either on `Slack `_, or by opening an issue" +" on our `GitHub repo `_." msgstr "" -"如果您想添加新语言,请先联系我们,可以在 `Slack `_ 上联系,也可以在我们的 `GitHub repo `_ 上提交问题。" +"如果您想添加新语言,请先联系我们,可以在 `Slack `_ 上联系,也可以在我们的 " +"`GitHub repo `_ 上提交问题。" #: ../../source/contributor-how-to-create-new-messages.rst:2 msgid "Creating New Messages" @@ -484,16 +483,15 @@ msgstr "创建新信息" #: ../../source/contributor-how-to-create-new-messages.rst:4 msgid "" -"This is a simple guide for creating a new type of message between the server " -"and clients in Flower." +"This is a simple guide for creating a new type of message between the " +"server and clients in Flower." msgstr "这是一个如何用Flower在服务器和客户端之间创建新类型的信息的简要指导。" #: ../../source/contributor-how-to-create-new-messages.rst:6 msgid "" -"Let's suppose we have the following example functions in :code:`server.py` " -"and :code:`numpy_client.py`..." -msgstr "" -"假设我们在脚本code:`server.py`和code:`numpy_client.py`中有以下的示例函数..." +"Let's suppose we have the following example functions in " +":code:`server.py` and :code:`numpy_client.py`..." +msgstr "假设我们在脚本code:`server.py`和code:`numpy_client.py`中有以下的示例函数..." #: ../../source/contributor-how-to-create-new-messages.rst:8 msgid "Server's side:" @@ -505,11 +503,9 @@ msgstr "在客户端:" #: ../../source/contributor-how-to-create-new-messages.rst:26 msgid "" -"Let's now see what we need to implement in order to get this simple function " -"between the server and client to work!" -msgstr "" -"现在让我们来看看,为了让服务器和客户端之间的这个简单的函数正常工作,我们需要" -"实现哪些功能!" +"Let's now see what we need to implement in order to get this simple " +"function between the server and client to work!" +msgstr "现在让我们来看看,为了让服务器和客户端之间的这个简单的函数正常工作,我们需要实现哪些功能!" #: ../../source/contributor-how-to-create-new-messages.rst:30 msgid "Message Types for Protocol Buffers" @@ -518,15 +514,15 @@ msgstr "协议缓冲区的信息类型" #: ../../source/contributor-how-to-create-new-messages.rst:32 #, fuzzy msgid "" -"The first thing we need to do is to define a message type for the RPC system " -"in :code:`transport.proto`. Note that we have to do it for both the request " -"and response messages. For more details on the syntax of proto3, please see " -"the `official documentation `_." +"The first thing we need to do is to define a message type for the RPC " +"system in :code:`transport.proto`. Note that we have to do it for both " +"the request and response messages. 
For more details on the syntax of " +"proto3, please see the `official documentation `_." msgstr "" -"我们需要做的第一件事是在脚本code:`transport.proto`中定义 RPC 系统的消息类型。" -"请注意,我们必须对请求信息和响应信息都这样做。有关 proto3 语法的更多详情,请" -"参阅官方文档 `_。" +"我们需要做的第一件事是在脚本code:`transport.proto`中定义 RPC " +"系统的消息类型。请注意,我们必须对请求信息和响应信息都这样做。有关 proto3 语法的更多详情,请参阅官方文档 " +"`_。" #: ../../source/contributor-how-to-create-new-messages.rst:35 msgid "Within the :code:`ServerMessage` block:" @@ -538,8 +534,8 @@ msgstr "在 ClientMessage 代码块中:" #: ../../source/contributor-how-to-create-new-messages.rst:70 msgid "" -"Make sure to also add a field of the newly created message type in :code:" -"`oneof msg`." +"Make sure to also add a field of the newly created message type in " +":code:`oneof msg`." msgstr "确保在 :code:`oneof msg` 中也添加一个新创建的消息类型字段。" #: ../../source/contributor-how-to-create-new-messages.rst:72 @@ -560,9 +556,8 @@ msgid "" "datatypes to or from our defined RPC message types. You should add these " "functions in :code:`serde.py`." msgstr "" -"下一步是添加函数,以便将 Python 数据类型序列化和反序列化为我们定义的 RPC 消息" -"类型或从我们定义的 RPC 消息类型反序列化和反序列化 Python 数据类型。您应该在 :" -"code:`serde.py` 中添加这些函数。" +"下一步是添加函数,以便将 Python 数据类型序列化和反序列化为我们定义的 RPC 消息类型或从我们定义的 RPC 消息类型反序列化和反序列化 " +"Python 数据类型。您应该在 :code:`serde.py` 中添加这些函数。" #: ../../source/contributor-how-to-create-new-messages.rst:91 msgid "The four functions:" @@ -574,11 +569,9 @@ msgstr "从服务器发送信息" #: ../../source/contributor-how-to-create-new-messages.rst:114 msgid "" -"Now write the request function in your Client Proxy class (e.g., :code:" -"`grpc_client_proxy.py`) using the serde functions you just created:" -msgstr "" -"现在,在客户端代理类(例如 :code:`grpc_client_proxy.py`)中使用刚才创建的 " -"serde 函数编写请求函数:" +"Now write the request function in your Client Proxy class (e.g., " +":code:`grpc_client_proxy.py`) using the serde functions you just created:" +msgstr "现在,在客户端代理类(例如 :code:`grpc_client_proxy.py`)中使用刚才创建的 serde 函数编写请求函数:" #: ../../source/contributor-how-to-create-new-messages.rst:128 msgid "Receiving the Message by the Client" @@ -586,12 +579,12 @@ msgstr "由客户端接收信息" #: ../../source/contributor-how-to-create-new-messages.rst:130 msgid "" -"Last step! Modify the code in :code:`message_handler.py` to check the field " -"of your message and call the :code:`example_response` function. Remember to " -"use the serde functions!" +"Last step! Modify the code in :code:`message_handler.py` to check the " +"field of your message and call the :code:`example_response` function. " +"Remember to use the serde functions!" msgstr "" -"最后一步 修改 :code:`message_handler.py` 中的代码,检查信息的字段并调用 :" -"code:`example_response` 函数。记住使用 serde 函数!" +"最后一步 修改 :code:`message_handler.py` 中的代码,检查信息的字段并调用 " +":code:`example_response` 函数。记住使用 serde 函数!" #: ../../source/contributor-how-to-create-new-messages.rst:132 msgid "Within the handle function:" @@ -611,51 +604,46 @@ msgstr "使用 VSCode Dev Containers 进行开发" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:4 msgid "" -"When working on the Flower framework we want to ensure that all contributors " -"use the same developer environment to format code or run tests. For this " -"purpose we are using the VSCode Remote Containers extension. What is it? " -"Read the following quote:" +"When working on the Flower framework we want to ensure that all " +"contributors use the same developer environment to format code or run " +"tests. For this purpose we are using the VSCode Remote Containers " +"extension. What is it? 
Read the following quote:" msgstr "" -"在开发 Flower 框架时,我们希望确保所有贡献者使用相同的开发环境来格式化代码或" -"运行测试。为此,我们使用了 VSCode 远程容器扩展。这是什么?请阅读下面这段话:" +"在开发 Flower 框架时,我们希望确保所有贡献者使用相同的开发环境来格式化代码或运行测试。为此,我们使用了 VSCode " +"远程容器扩展。这是什么?请阅读下面这段话:" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:7 msgid "" -"The Visual Studio Code Remote - Containers extension lets you use a Docker " -"container as a fully-featured development environment. It allows you to open " -"any folder inside (or mounted into) a container and take advantage of Visual " -"Studio Code's full feature set. A :code:`devcontainer.json` file in your " -"project tells VS Code how to access (or create) a development container with " -"a well-defined tool and runtime stack. This container can be used to run an " -"application or to separate tools, libraries, or runtimes needed for working " -"with a codebase." -msgstr "" -"Visual Studio Code Remote - Containers扩展可让你将Docker容器用作功能齐全的开" -"发环境。它允许你打开容器内(或挂载到容器内)的任何文件夹,并利用 Visual " -"Studio Code 的全部功能集。项目中的 :code:`devcontainer.json` 文件会告诉 VS " -"Code 如何访问(或创建)一个带有定义明确的工具和运行时栈的开发容器。该容器可用" -"于运行应用程序,也可用于分离处理代码库所需的工具、库或运行时。" +"The Visual Studio Code Remote - Containers extension lets you use a " +"Docker container as a fully-featured development environment. It allows " +"you to open any folder inside (or mounted into) a container and take " +"advantage of Visual Studio Code's full feature set. A " +":code:`devcontainer.json` file in your project tells VS Code how to " +"access (or create) a development container with a well-defined tool and " +"runtime stack. This container can be used to run an application or to " +"separate tools, libraries, or runtimes needed for working with a " +"codebase." +msgstr "" +"Visual Studio Code Remote - " +"Containers扩展可让你将Docker容器用作功能齐全的开发环境。它允许你打开容器内(或挂载到容器内)的任何文件夹,并利用 Visual " +"Studio Code 的全部功能集。项目中的 :code:`devcontainer.json` 文件会告诉 VS Code " +"如何访问(或创建)一个带有定义明确的工具和运行时栈的开发容器。该容器可用于运行应用程序,也可用于分离处理代码库所需的工具、库或运行时。" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:9 msgid "" -"Workspace files are mounted from the local file system or copied or cloned " -"into the container. Extensions are installed and run inside the container, " -"where they have full access to the tools, platform, and file system. This " -"means that you can seamlessly switch your entire development environment " -"just by connecting to a different container." -msgstr "" -"工作区文件从本地文件系统加载,或复制或克隆到容器中。扩展在容器内安装和运行," -"在容器内它们可以完全访问工具、平台和文件系统。这意味着,只需连接到不同的容" -"器,就能无缝切换整个开发环境。" +"Workspace files are mounted from the local file system or copied or " +"cloned into the container. Extensions are installed and run inside the " +"container, where they have full access to the tools, platform, and file " +"system. This means that you can seamlessly switch your entire development" +" environment just by connecting to a different container." 
+msgstr "工作区文件从本地文件系统加载,或复制或克隆到容器中。扩展在容器内安装和运行,在容器内它们可以完全访问工具、平台和文件系统。这意味着,只需连接到不同的容器,就能无缝切换整个开发环境。" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:11 #, fuzzy msgid "" -"Source: `Official VSCode documentation `_" -msgstr "" -"来源:`VSCode 官方文档 `_" +"Source: `Official VSCode documentation " +"`_" +msgstr "来源:`VSCode 官方文档 `_" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:15 msgid "Getting started" @@ -664,43 +652,42 @@ msgstr "开始" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:17 #, fuzzy msgid "" -"Configuring and setting up the :code:`Dockerfile` as well the configuration " -"for the devcontainer can be a bit more involved. The good thing is you don't " -"have to do it. Usually it should be enough to install `Docker `_ on your system and ensure its available on " -"your command line. Additionally, install the `VSCode Containers Extension " -"`_." +"Configuring and setting up the :code:`Dockerfile` as well the " +"configuration for the devcontainer can be a bit more involved. The good " +"thing is you don't have to do it. Usually it should be enough to install " +"`Docker `_ on your system and " +"ensure its available on your command line. Additionally, install the " +"`VSCode Containers Extension `_." msgstr "" -"配置和设置 :code:`Dockerfile` 以及 devcontainer 的配置可能比较复杂。好在你想" -"做就得做。通常只需在系统中安装 Docker 并确保其在命令行中可用即可。此外,请安" -"装 `VSCode Containers Extension `_。" +"配置和设置 :code:`Dockerfile` 以及 devcontainer 的配置可能比较复杂。好在你想做就得做。通常只需在系统中安装 " +"Docker 并确保其在命令行中可用即可。此外,请安装 `VSCode Containers Extension " +"`_。" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:19 msgid "" -"Now you should be good to go. When starting VSCode, it will ask you to run " -"in the container environment and - if you confirm - automatically build the " -"container and use it. To manually instruct VSCode to use the devcontainer, " -"you can, after installing the extension, click the green area in the bottom " -"left corner of your VSCode window and select the option *(Re)Open Folder in " -"Container*." +"Now you should be good to go. When starting VSCode, it will ask you to " +"run in the container environment and - if you confirm - automatically " +"build the container and use it. To manually instruct VSCode to use the " +"devcontainer, you can, after installing the extension, click the green " +"area in the bottom left corner of your VSCode window and select the " +"option *(Re)Open Folder in Container*." msgstr "" -"现在你应该可以开始了。启动 VSCode 时,它会要求你在容器环境中运行,如果你确" -"认,它会自动构建容器并使用它。要手动指示 VSCode 使用 devcontainer,可以在安装" -"扩展后,点击 VSCode 窗口左下角的绿色区域,然后选择 \"*(重新)在容器中打开文" -"件夹*\"选项。" +"现在你应该可以开始了。启动 VSCode 时,它会要求你在容器环境中运行,如果你确认,它会自动构建容器并使用它。要手动指示 VSCode 使用 " +"devcontainer,可以在安装扩展后,点击 VSCode 窗口左下角的绿色区域,然后选择 \"*(重新)在容器中打开文件夹*\"选项。" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:21 msgid "" -"In some cases your setup might be more involved. For those cases consult the " -"following sources:" +"In some cases your setup might be more involved. 
For those cases consult " +"the following sources:" msgstr "在某些情况下,您的设置可能更复杂。有关这些情况,请参考以下资料:" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:23 #, fuzzy msgid "" -"`Developing inside a Container `_" +"`Developing inside a Container " +"`_" msgstr "" "在容器内开发 `_" @@ -708,11 +695,9 @@ msgstr "" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:24 #, fuzzy msgid "" -"`Remote development in Containers `_" -msgstr "" -"容器中的远程开发 `_" +"`Remote development in Containers " +"`_" +msgstr "容器中的远程开发 `_" #: ../../source/contributor-how-to-install-development-versions.rst:2 msgid "Install development versions" @@ -728,21 +713,19 @@ msgstr "使用诗歌(推荐)" #: ../../source/contributor-how-to-install-development-versions.rst:10 msgid "" -"Install a ``flwr`` pre-release from PyPI: update the ``flwr`` dependency in " -"``pyproject.toml`` and then reinstall (don't forget to delete ``poetry." -"lock`` (``rm poetry.lock``) before running ``poetry install``)." +"Install a ``flwr`` pre-release from PyPI: update the ``flwr`` dependency " +"in ``pyproject.toml`` and then reinstall (don't forget to delete " +"``poetry.lock`` (``rm poetry.lock``) before running ``poetry install``)." msgstr "" "安装来自 PyPI 的 ``flwr`` 预发布版本:更新 ``pyproject.toml`` 中的 ``flwr`` " -"依赖关系,然后重新安装(运行 ``poetry install` 前,别忘了删除 ``poetry.lock` " -"(``rm poetry.lock`))。" +"依赖关系,然后重新安装(运行 ``poetry install` 前,别忘了删除 ``poetry.lock` (``rm " +"poetry.lock`))。" #: ../../source/contributor-how-to-install-development-versions.rst:12 msgid "" "``flwr = { version = \"1.0.0a0\", allow-prereleases = true }`` (without " "extras)" -msgstr "" -"``flwr = { version = \"1.0.0a0\", allow-prereleases = true }`` (不含额外内" -"容)" +msgstr "``flwr = { version = \"1.0.0a0\", allow-prereleases = true }`` (不含额外内容)" #: ../../source/contributor-how-to-install-development-versions.rst:13 msgid "" @@ -754,8 +737,8 @@ msgstr "" #: ../../source/contributor-how-to-install-development-versions.rst:15 msgid "" -"Install ``flwr`` from a local copy of the Flower source code via ``pyproject." 
-"toml``:" +"Install ``flwr`` from a local copy of the Flower source code via " +"``pyproject.toml``:" msgstr "通过 ``pyproject.toml`` 从 Flower 源代码的本地副本安装 ``flwr``:" #: ../../source/contributor-how-to-install-development-versions.rst:17 @@ -764,11 +747,11 @@ msgstr "``flwr = { path = \"../../\", develop = true }`` (不含额外内容 #: ../../source/contributor-how-to-install-development-versions.rst:18 msgid "" -"``flwr = { path = \"../../\", develop = true, extras = [\"simulation\"] }`` " -"(with extras)" +"``flwr = { path = \"../../\", develop = true, extras = [\"simulation\"] " +"}`` (with extras)" msgstr "" -"``flwr = { path = \"../../\", develop = true, extras = [\"simulation\"] }`` " -"(包含额外内容)" +"``flwr = { path = \"../../\", develop = true, extras = [\"simulation\"] " +"}`` (包含额外内容)" #: ../../source/contributor-how-to-install-development-versions.rst:20 msgid "Install ``flwr`` from a local wheel file via ``pyproject.toml``:" @@ -777,11 +760,9 @@ msgstr "通过 ``pyproject.toml`` 从本地轮子文件安装 ``flwr``:" #: ../../source/contributor-how-to-install-development-versions.rst:22 #, fuzzy msgid "" -"``flwr = { path = \"../../dist/flwr-1.8.0-py3-none-any.whl\" }`` (without " -"extras)" -msgstr "" -"``flwr = { path = \"../../dist/flwr-1.0.0-py3-none-any.whl\" }``(无额外内" -"容)" +"``flwr = { path = \"../../dist/flwr-1.8.0-py3-none-any.whl\" }`` (without" +" extras)" +msgstr "``flwr = { path = \"../../dist/flwr-1.0.0-py3-none-any.whl\" }``(无额外内容)" #: ../../source/contributor-how-to-install-development-versions.rst:23 #, fuzzy @@ -798,8 +779,8 @@ msgid "" "Dependency Specification `_" msgstr "" -"有关详细信息,请参阅 Poetry 文档: 诗歌依赖性规范 `_" +"有关详细信息,请参阅 Poetry 文档: 诗歌依赖性规范 `_" #: ../../source/contributor-how-to-install-development-versions.rst:28 msgid "Using pip (recommended on Colab)" @@ -821,8 +802,7 @@ msgstr "`pip install -U -pre flwr[simulation]``(包含额外功能)" msgid "" "Python packages can be installed from git repositories. Use one of the " "following commands to install the Flower directly from GitHub." 
-msgstr "" -"Python 软件包可以从 git 仓库安装。使用以下命令之一直接从 GitHub 安装 Flower。" +msgstr "Python 软件包可以从 git 仓库安装。使用以下命令之一直接从 GitHub 安装 Flower。" #: ../../source/contributor-how-to-install-development-versions.rst:37 msgid "Install ``flwr`` from the default GitHub branch (``main``):" @@ -830,17 +810,17 @@ msgstr "从 GitHub 的默认分支 (``main`) 安装 ``flwr``:" #: ../../source/contributor-how-to-install-development-versions.rst:39 msgid "" -"``pip install flwr@git+https://github.com/adap/flower.git`` (without extras)" -msgstr "" -"`pip install flwr@git+https://github.com/adap/flower.git`` (不含额外功能)" +"``pip install flwr@git+https://github.com/adap/flower.git`` (without " +"extras)" +msgstr "`pip install flwr@git+https://github.com/adap/flower.git`` (不含额外功能)" #: ../../source/contributor-how-to-install-development-versions.rst:40 msgid "" "``pip install flwr[simulation]@git+https://github.com/adap/flower.git`` " "(with extras)" msgstr "" -"`pip install flwr[simulation]@git+https://github.com/adap/flower.git``(带附" -"加功能)" +"`pip install " +"flwr[simulation]@git+https://github.com/adap/flower.git``(带附加功能)" #: ../../source/contributor-how-to-install-development-versions.rst:42 msgid "Install ``flwr`` from a specific GitHub branch (``branch-name``):" @@ -851,51 +831,51 @@ msgid "" "``pip install flwr@git+https://github.com/adap/flower.git@branch-name`` " "(without extras)" msgstr "" -"`pip install flwr@git+https://github.com/adap/flower.git@branch-name`` (不含" -"附加功能)" +"`pip install flwr@git+https://github.com/adap/flower.git@branch-name`` " +"(不含附加功能)" #: ../../source/contributor-how-to-install-development-versions.rst:45 msgid "" -"``pip install flwr[simulation]@git+https://github.com/adap/flower.git@branch-" -"name`` (with extras)" -msgstr "" -"`pip安装flwr[模拟]@git+https://github.com/adap/flower.git@分支名``(带附加功" -"能)" +"``pip install flwr[simulation]@git+https://github.com/adap/flower.git" +"@branch-name`` (with extras)" +msgstr "`pip安装flwr[模拟]@git+https://github.com/adap/flower.git@分支名``(带附加功能)" #: ../../source/contributor-how-to-install-development-versions.rst:49 msgid "Open Jupyter Notebooks on Google Colab" msgstr "在谷歌 Colab 上打开 Jupyter 笔记本" #: ../../source/contributor-how-to-install-development-versions.rst:51 +#, fuzzy msgid "" -"Open the notebook ``doc/source/tutorial-get-started-with-flower-pytorch." 
-"ipynb``:" -msgstr "" -"打开笔记本 ``doc/source/tutorial-get-started-with-flower-pytorch.ipynb``:" +"Open the notebook ``doc/source/tutorial-series-get-started-with-flower-" +"pytorch.ipynb``:" +msgstr "打开笔记本 ``doc/source/tutorial-get-started-with-flower-pytorch.ipynb``:" #: ../../source/contributor-how-to-install-development-versions.rst:53 +#, fuzzy msgid "" -"https://colab.research.google.com/github/adap/flower/blob/main/doc/source/" -"tutorial-get-started-with-flower-pytorch.ipynb" +"https://colab.research.google.com/github/adap/flower/blob/main/doc/source" +"/tutorial-series-get-started-with-flower-pytorch.ipynb" msgstr "" -"https://colab.research.google.com/github/adap/flower/blob/main/doc/source/" -"tutorial-get-started-with-flower-pytorch.ipynb" +"https://colab.research.google.com/github/adap/flower/blob/main/doc/source" +"/tutorial-get-started-with-flower-pytorch.ipynb" #: ../../source/contributor-how-to-install-development-versions.rst:55 msgid "" -"Open a development version of the same notebook from branch `branch-name` by " -"changing ``main`` to ``branch-name`` (right after ``blob``):" +"Open a development version of the same notebook from branch `branch-name`" +" by changing ``main`` to ``branch-name`` (right after ``blob``):" msgstr "" -"将 ``main`` 改为 ``branch-name``(紧跟在 ``blob``之后),从分支 `branch-" -"name` 打开同一笔记本的开发版本:" +"将 ``main`` 改为 ``branch-name``(紧跟在 ``blob``之后),从分支 `branch-name` " +"打开同一笔记本的开发版本:" #: ../../source/contributor-how-to-install-development-versions.rst:57 +#, fuzzy msgid "" -"https://colab.research.google.com/github/adap/flower/blob/branch-name/doc/" -"source/tutorial-get-started-with-flower-pytorch.ipynb" +"https://colab.research.google.com/github/adap/flower/blob/branch-" +"name/doc/source/tutorial-series-get-started-with-flower-pytorch.ipynb" msgstr "" -"https://colab.research.google.com/github/adap/flower/blob/branch-name/doc/" -"source/tutorial-get-started-with-flower-pytorch.ipynb" +"https://colab.research.google.com/github/adap/flower/blob/branch-" +"name/doc/source/tutorial-get-started-with-flower-pytorch.ipynb" #: ../../source/contributor-how-to-install-development-versions.rst:59 msgid "Install a `whl` on Google Colab:" @@ -903,8 +883,8 @@ msgstr "在 Google Colab 上安装 `whl`:" #: ../../source/contributor-how-to-install-development-versions.rst:61 msgid "" -"In the vertical icon grid on the left hand side, select ``Files`` > ``Upload " -"to session storage``" +"In the vertical icon grid on the left hand side, select ``Files`` > " +"``Upload to session storage``" msgstr "在左侧的垂直图标网格中,选择 \"文件\">\"上传到会话存储\"" #: ../../source/contributor-how-to-install-development-versions.rst:62 @@ -915,13 +895,13 @@ msgstr "更新 whl (e.g., ``flwr-1.7.0-py3-none-any.whl``)" #: ../../source/contributor-how-to-install-development-versions.rst:63 #, fuzzy msgid "" -"Change ``!pip install -q 'flwr[simulation]' torch torchvision matplotlib`` " -"to ``!pip install -q 'flwr-1.8.0-py3-none-any.whl[simulation]' torch " -"torchvision matplotlib``" +"Change ``!pip install -q 'flwr[simulation]' torch torchvision " +"matplotlib`` to ``!pip install -q 'flwr-1.8.0-py3-none-" +"any.whl[simulation]' torch torchvision matplotlib``" msgstr "" -"把``!pip install -q 'flwr[simulation]' torch torchvision matplotlib``变为``!" 
-"pip install -q 'flwr-1.7.0-py3-none-any.whl[simulation]' torch torchvision " -"matplotlib``" +"把``!pip install -q 'flwr[simulation]' torch torchvision " +"matplotlib``变为``!pip install -q 'flwr-1.7.0-py3-none-any.whl[simulation]'" +" torch torchvision matplotlib``" #: ../../source/contributor-how-to-release-flower.rst:2 msgid "Release Flower" @@ -939,11 +919,10 @@ msgstr "在发布期间" #: ../../source/contributor-how-to-release-flower.rst:9 msgid "" -"The version number of a release is stated in ``pyproject.toml``. To release " -"a new version of Flower, the following things need to happen (in that order):" -msgstr "" -"版本号在 ``pyproject.toml`` 中说明。要发布 Flower 的新版本,需要完成以下工作" -"(按顺序排列):" +"The version number of a release is stated in ``pyproject.toml``. To " +"release a new version of Flower, the following things need to happen (in " +"that order):" +msgstr "版本号在 ``pyproject.toml`` 中说明。要发布 Flower 的新版本,需要完成以下工作(按顺序排列):" #: ../../source/contributor-how-to-release-flower.rst:11 #, fuzzy @@ -952,40 +931,37 @@ msgid "" "order to add every new change to the changelog (feel free to make manual " "changes to the changelog afterwards until it looks good)." msgstr "" -"运行 ``python3 src/py/flwr_tool/update_changelog.py `` 以将每" -"项新更改添加到更新日志中(之后可对更新日志进行手动更改,直到看起来不错为" -"止)。" +"运行 ``python3 src/py/flwr_tool/update_changelog.py `` " +"以将每项新更改添加到更新日志中(之后可对更新日志进行手动更改,直到看起来不错为止)。" #: ../../source/contributor-how-to-release-flower.rst:12 #, fuzzy msgid "" -"Once the changelog has been updated with all the changes, run ``./dev/" -"prepare-release-changelog.sh v``, where ```` is " -"the version stated in ``pyproject.toml`` (notice the ``v`` added before it). " -"This will replace the ``Unreleased`` header of the changelog by the version " -"and current date, and it will add a thanking message for the contributors. " -"Open a pull request with those changes." +"Once the changelog has been updated with all the changes, run ``./dev" +"/prepare-release-changelog.sh v``, where ```` " +"is the version stated in ``pyproject.toml`` (notice the ``v`` added " +"before it). This will replace the ``Unreleased`` header of the changelog " +"by the version and current date, and it will add a thanking message for " +"the contributors. Open a pull request with those changes." msgstr "" -"更新更新日志后,运行``./dev/prepare-release-changelog.sh v``,其" -"中````是``pyproject.toml``中的版本(注意前面的``v``)。这将用版" -"本和当前日期替换更新日志中的 ``Unreleased`` 标头,并为贡献者添加一条感谢信" -"息。打开一个包含这些更改的拉取请求。" +"更新更新日志后,运行``./dev/prepare-release-changelog.sh " +"v``,其中````是``pyproject.toml``中的版本(注意前面的``v``)。这将用版本和当前日期替换更新日志中的" +" ``Unreleased`` 标头,并为贡献者添加一条感谢信息。打开一个包含这些更改的拉取请求。" #: ../../source/contributor-how-to-release-flower.rst:13 #, fuzzy msgid "" "Once the pull request is merged, tag the release commit with the version " -"number as soon as the PR is merged: ``git tag v`` (notice the " -"``v`` added before the version number), then ``git push --tags``. This will " -"create a draft release on GitHub containing the correct artifacts and the " -"relevant part of the changelog." +"number as soon as the PR is merged: ``git tag v`` (notice " +"the ``v`` added before the version number), then ``git push --tags``. " +"This will create a draft release on GitHub containing the correct " +"artifacts and the relevant part of the changelog." 
msgstr "" -"在 PR 合并后立即用版本号标记发布提交:``git tag v0.12.3``,然后``git push --" -"tags``。这将在 GitHub 上创建一个包含正确工件和更新日志相关部分的发布草案。" +"在 PR 合并后立即用版本号标记发布提交:``git tag v0.12.3``,然后``git push --tags``。这将在 GitHub" +" 上创建一个包含正确工件和更新日志相关部分的发布草案。" #: ../../source/contributor-how-to-release-flower.rst:14 -msgid "" -"Check the draft release on GitHub, and if everything is good, publish it." +msgid "Check the draft release on GitHub, and if everything is good, publish it." msgstr "检查 GitHub 上的发布稿,如果一切正常,就发布它。" #: ../../source/contributor-how-to-release-flower.rst:17 @@ -1010,8 +986,8 @@ msgstr "在 ``changelog.md`` 中添加新的 ``Unreleased`` 部分。" #: ../../source/contributor-how-to-release-flower.rst:25 msgid "" -"Merge the pull request on the same day (i.e., before a new nightly release " -"gets published to PyPI)." +"Merge the pull request on the same day (i.e., before a new nightly " +"release gets published to PyPI)." msgstr "在同一天合并拉取请求(即在新版本发布到 PyPI 之前)。" #: ../../source/contributor-how-to-release-flower.rst:28 @@ -1024,11 +1000,9 @@ msgstr "释放前命名" #: ../../source/contributor-how-to-release-flower.rst:33 msgid "" -"PyPI supports pre-releases (alpha, beta, release candidate). Pre-releases " -"MUST use one of the following naming patterns:" -msgstr "" -"PyPI 支持预发布版本(alpha、beta、release candidate)。预发布版本必须使用以下" -"命名模式之一:" +"PyPI supports pre-releases (alpha, beta, release candidate). Pre-releases" +" MUST use one of the following naming patterns:" +msgstr "PyPI 支持预发布版本(alpha、beta、release candidate)。预发布版本必须使用以下命名模式之一:" #: ../../source/contributor-how-to-release-flower.rst:35 msgid "Alpha: ``MAJOR.MINOR.PATCHaN``" @@ -1074,41 +1048,38 @@ msgstr "`PEP-440 `_" #: ../../source/contributor-how-to-release-flower.rst:50 msgid "" -"`PyPA Choosing a versioning scheme `_" +"`PyPA Choosing a versioning scheme " +"`_" msgstr "" -"`PyPA 选择版本控制方案 `_" +"`PyPA 选择版本控制方案 `_" #: ../../source/contributor-how-to-release-flower.rst:52 msgid "" -"Note that the approach defined by PyPA is not compatible with SemVer 2.0.0 " -"spec, for details consult the `Semantic Versioning Specification `_ (specifically item 11 on " -"precedence)." +"Note that the approach defined by PyPA is not compatible with SemVer " +"2.0.0 spec, for details consult the `Semantic Versioning Specification " +"`_ (specifically item " +"11 on precedence)." msgstr "" -"请注意,PyPA 所定义的方法与 SemVer 2.0.0 规范不兼容,详情请查阅《语义版本规" -"范》`_(特别是关于优先级的" -"第 11 项)。" +"请注意,PyPA 所定义的方法与 SemVer 2.0.0 " +"规范不兼容,详情请查阅《语义版本规范》`_(特别是关于优先级的第 11 项)。" #: ../../source/contributor-how-to-release-flower.rst:55 msgid "Pre-release classification" msgstr "发布前分类" #: ../../source/contributor-how-to-release-flower.rst:57 -msgid "" -"Should the next pre-release be called alpha, beta, or release candidate?" +msgid "Should the next pre-release be called alpha, beta, or release candidate?" msgstr "下一个预发布版应该叫阿尔法版、贝塔版还是候选发布版?" 
#: ../../source/contributor-how-to-release-flower.rst:59 msgid "" -"RC: feature complete, no known issues (apart from issues that are classified " -"as \"won't fix\" for the next stable release) - if no issues surface this " -"will become the next stable release" -msgstr "" -"RC:功能完整,无已知问题(除了下一个稳定版中被列为 \"不会修复 \"的问题)--如" -"果没有问题出现,这将成为下一个稳定版" +"RC: feature complete, no known issues (apart from issues that are " +"classified as \"won't fix\" for the next stable release) - if no issues " +"surface this will become the next stable release" +msgstr "RC:功能完整,无已知问题(除了下一个稳定版中被列为 \"不会修复 \"的问题)--如果没有问题出现,这将成为下一个稳定版" #: ../../source/contributor-how-to-release-flower.rst:60 msgid "Beta: feature complete, allowed to have known issues" @@ -1126,12 +1097,11 @@ msgstr "建立虚拟环境" msgid "" "It is recommended to run your Python setup within a virtual environment. " "This guide shows three different examples how to create a virtual " -"environment with pyenv virtualenv, poetry, or Anaconda. You can follow the " -"instructions or choose your preferred setup." +"environment with pyenv virtualenv, poetry, or Anaconda. You can follow " +"the instructions or choose your preferred setup." msgstr "" -"建议在虚拟环境中运行 Python 设置。本指南展示了如何使用 pyenv virtualenv、" -"poes 或 Anaconda 创建虚拟环境的三个不同示例。您可以按照说明或选择您喜欢的设" -"置。" +"建议在虚拟环境中运行 Python 设置。本指南展示了如何使用 pyenv virtualenv、poes 或 Anaconda " +"创建虚拟环境的三个不同示例。您可以按照说明或选择您喜欢的设置。" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:9 msgid "Python Version" @@ -1140,92 +1110,100 @@ msgstr "Python 版本" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:11 #: ../../source/how-to-install-flower.rst:8 msgid "" -"Flower requires at least `Python 3.8 `_, but " -"`Python 3.10 `_ or above is recommended." +"Flower requires at least `Python 3.8 `_, " +"but `Python 3.10 `_ or above is " +"recommended." msgstr "" -"Flower 至少需要 `Python 3.8 `_,但建议使用 " -"`Python 3.10 `_或更高版本。" +"Flower 至少需要 `Python 3.8 `_,但建议使用 `Python " +"3.10 `_或更高版本。" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:14 -msgid "Virutualenv with Pyenv/Virtualenv" +msgid "" +"Due to a known incompatibility with `ray " +"`_, we currently recommend utilizing at " +"most `Python 3.11 `_ for running Flower " +"simulations." +msgstr "" + +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:19 +#, fuzzy +msgid "Virtualenv with Pyenv/Virtualenv" msgstr "Virutualenv 和 Pyenv/Virtualenv" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:16 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:21 msgid "" -"One of the recommended virtual environment is `pyenv `_/`virtualenv `_. " -"Please see `Flower examples `_ for details." +"One of the recommended virtual environment is `pyenv " +"`_/`virtualenv `_. Please see `Flower examples " +"`_ for details." 
msgstr "" -"其中一个推荐的虚拟环境是 `pyenv `_/" -"`virtualenv `_。详情请参见 " -"`Flower 示例 `_。" +"其中一个推荐的虚拟环境是 `pyenv `_/`virtualenv " +"`_。详情请参见 `Flower 示例 " +"`_。" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:18 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:23 msgid "" "Once Pyenv is set up, you can use it to install `Python Version 3.10 " "`_ or above:" -msgstr "" -"一旦设置好 Pyenv,就可以用它来安装 `Python 3.10 `_ 或更高版本:" +msgstr "一旦设置好 Pyenv,就可以用它来安装 `Python 3.10 `_ 或更高版本:" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:24 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:29 msgid "Create the virtualenv with:" msgstr "创建虚拟环境:" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:31 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:36 msgid "Activate the virtualenv by running the following command:" msgstr "运行以下命令激活 virtualenv:" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:39 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:44 msgid "Virtualenv with Poetry" msgstr "有诗意的 Virtualenv" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:41 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:46 msgid "" -"The Flower examples are based on `Poetry `_ " -"to manage dependencies. After installing Poetry you simply create a virtual " -"environment with:" +"The Flower examples are based on `Poetry `_ to manage dependencies. After installing Poetry you " +"simply create a virtual environment with:" msgstr "" -"Flower 示例基于 `Poetry `_ 来管理依赖关系。" -"安装 Poetry 后,只需创建一个虚拟环境即可:" +"Flower 示例基于 `Poetry `_ 来管理依赖关系。安装 Poetry" +" 后,只需创建一个虚拟环境即可:" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:47 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:52 msgid "" -"If you open a new terminal you can activate the previously created virtual " -"environment with the following command:" +"If you open a new terminal you can activate the previously created " +"virtual environment with the following command:" msgstr "如果打开一个新终端,可以使用以下命令激活之前创建的虚拟环境:" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:55 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:60 msgid "Virtualenv with Anaconda" msgstr "使用 Anaconda 的 Virtualenv" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:57 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:62 msgid "" -"If you prefer to use Anaconda for your virtual environment then install and " -"setup the `conda `_ package. After setting it up you can create a virtual " -"environment with:" +"If you prefer to use Anaconda for your virtual environment then install " +"and setup the `conda `_ package. After setting it up you can " +"create a virtual environment with:" msgstr "" -"如果你更喜欢在虚拟环境中使用 Anaconda,那么请安装并设置 `conda `_ 软件包。设" -"置完成后,您就可以使用以下工具创建虚拟环境:" +"如果你更喜欢在虚拟环境中使用 Anaconda,那么请安装并设置 `conda " +"`_ 软件包。设置完成后,您就可以使用以下工具创建虚拟环境:" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:63 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:68 msgid "and activate the virtual environment with:" msgstr "并激活虚拟环境:" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:71 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:76 msgid "And then?" msgstr "然后呢?" -#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:73 +#: ../../source/contributor-how-to-set-up-a-virtual-env.rst:78 msgid "" -"As soon as you created your virtual environment you clone one of the `Flower " -"examples `_." 
+"As soon as you created your virtual environment you clone one of the " +"`Flower examples `_." msgstr "" -"创建虚拟环境后,您可以克隆一个 `Flower 示例 `_。" +"创建虚拟环境后,您可以克隆一个 `Flower 示例 " +"`_。" #: ../../source/contributor-how-to-write-documentation.rst:2 msgid "Write documentation" @@ -1237,23 +1215,24 @@ msgstr "项目布局" #: ../../source/contributor-how-to-write-documentation.rst:8 msgid "" -"The Flower documentation lives in the ``doc`` directory. The Sphinx-based " -"documentation system supports both reStructuredText (``.rst`` files) and " -"Markdown (``.md`` files)." +"The Flower documentation lives in the ``doc`` directory. The Sphinx-based" +" documentation system supports both reStructuredText (``.rst`` files) and" +" Markdown (``.md`` files)." msgstr "" -"Flower 文档位于 ``doc`` 目录中。基于 Sphinx 的文档系统支持 reStructuredText" -"(``.rst`` 文件)和 Markdown(``.md`` 文件)。" +"Flower 文档位于 ``doc`` 目录中。基于 Sphinx 的文档系统支持 reStructuredText(``.rst`` 文件)和 " +"Markdown(``.md`` 文件)。" #: ../../source/contributor-how-to-write-documentation.rst:10 -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:142 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:169 #, fuzzy msgid "" -"Note that, in order to build the documentation locally (with ``poetry run " -"make html``, like described below), `Pandoc `_ needs to be installed on the system." +"Note that, in order to build the documentation locally (with ``poetry run" +" make html``, like described below), `Pandoc " +"`_ needs to be installed on the " +"system." msgstr "" -"请注意,要在本地构建文档(使用 ``poetry run make html``,如下所述),系统上必" -"须安装 ``Pandoc _`。" +"请注意,要在本地构建文档(使用 ``poetry run make html``,如下所述),系统上必须安装 ``Pandoc " +"_`。" #: ../../source/contributor-how-to-write-documentation.rst:14 msgid "Edit an existing page" @@ -1295,13 +1274,13 @@ msgstr "首次代码贡献" #: ../../source/contributor-ref-good-first-contributions.rst:4 msgid "" -"We welcome contributions to Flower! However, it is not always easy to know " -"where to start. We therefore put together a few recommendations on where to " -"start to increase your chances of getting your PR accepted into the Flower " -"codebase." +"We welcome contributions to Flower! However, it is not always easy to " +"know where to start. We therefore put together a few recommendations on " +"where to start to increase your chances of getting your PR accepted into " +"the Flower codebase." msgstr "" -"我们欢迎为Flower做出代码贡献!然而,要知道从哪里开始并非易事。因此,我们提出" -"了一些建议,告诉您从哪里开始,以增加您的 PR 被 Flower 代码库接受的机会。" +"我们欢迎为Flower做出代码贡献!然而,要知道从哪里开始并非易事。因此,我们提出了一些建议,告诉您从哪里开始,以增加您的 PR 被 Flower" +" 代码库接受的机会。" #: ../../source/contributor-ref-good-first-contributions.rst:11 msgid "Where to start" @@ -1309,12 +1288,10 @@ msgstr "从哪里开始" #: ../../source/contributor-ref-good-first-contributions.rst:13 msgid "" -"Until the Flower core library matures it will be easier to get PR's accepted " -"if they only touch non-core areas of the codebase. Good candidates to get " -"started are:" -msgstr "" -"在 Flower 核心库成熟之前,如果 PR 只涉及代码库中的非核心区域,则会更容易被接" -"受。可以从以下方面入手:" +"Until the Flower core library matures it will be easier to get PR's " +"accepted if they only touch non-core areas of the codebase. Good " +"candidates to get started are:" +msgstr "在 Flower 核心库成熟之前,如果 PR 只涉及代码库中的非核心区域,则会更容易被接受。可以从以下方面入手:" #: ../../source/contributor-ref-good-first-contributions.rst:17 msgid "Documentation: What's missing? What could be expressed more clearly?" 
@@ -1335,9 +1312,9 @@ msgstr "Flower Baselines的申请" #: ../../source/contributor-ref-good-first-contributions.rst:25 #, fuzzy msgid "" -"If you are not familiar with Flower Baselines, you should probably check-out " -"our `contributing guide for baselines `_." +"If you are not familiar with Flower Baselines, you should probably check-" +"out our `contributing guide for baselines " +"`_." msgstr "" "如果您对 Flower Baselines 还不熟悉,也许可以看看我们的 `Baselines贡献指南 " "`_。" @@ -1345,22 +1322,21 @@ msgstr "" #: ../../source/contributor-ref-good-first-contributions.rst:27 #, fuzzy msgid "" -"You should then check out the open `issues `_ for baseline " -"requests. If you find a baseline that you'd like to work on and that has no " -"assignees, feel free to assign it to yourself and start working on it!" +"You should then check out the open `issues " +"`_" +" for baseline requests. If you find a baseline that you'd like to work on" +" and that has no assignees, feel free to assign it to yourself and start " +"working on it!" msgstr "" -"然后查看开放的 `issues `_ baseline请求。如果您发现" -"了自己想做的baseline,而它还没有被分配,请随时把它分配给自己,然后开始工作!" +"然后查看开放的 `issues " +"`_" +" baseline请求。如果您发现了自己想做的baseline,而它还没有被分配,请随时把它分配给自己,然后开始工作!" #: ../../source/contributor-ref-good-first-contributions.rst:31 msgid "" -"Otherwise, if you don't find a baseline you'd like to work on, be sure to " -"open a new issue with the baseline request template!" -msgstr "" -"如果您没有找到想要做的baseline,请务必使用baseline请求模板打开一个新问题" -"(GitHub issue)!" +"Otherwise, if you don't find a baseline you'd like to work on, be sure to" +" open a new issue with the baseline request template!" +msgstr "如果您没有找到想要做的baseline,请务必使用baseline请求模板打开一个新问题(GitHub issue)!" #: ../../source/contributor-ref-good-first-contributions.rst:34 msgid "Request for examples" @@ -1369,11 +1345,9 @@ msgstr "示例请求" #: ../../source/contributor-ref-good-first-contributions.rst:36 msgid "" "We wish we had more time to write usage examples because we believe they " -"help users to get started with building what they want to build. Here are a " -"few ideas where we'd be happy to accept a PR:" -msgstr "" -"我们希望有更多的时间来撰写使用示例,因为我们相信这些示例可以帮助用户开始构建" -"他们想要的东西。以下是我们乐意接受 PR 的几个想法:" +"help users to get started with building what they want to build. Here are" +" a few ideas where we'd be happy to accept a PR:" +msgstr "我们希望有更多的时间来撰写使用示例,因为我们相信这些示例可以帮助用户开始构建他们想要的东西。以下是我们乐意接受 PR 的几个想法:" #: ../../source/contributor-ref-good-first-contributions.rst:40 msgid "Llama 2 fine-tuning, with Hugging Face Transformers and PyTorch" @@ -1393,13 +1367,13 @@ msgstr "安全聚合协议" #: ../../source/contributor-ref-secure-aggregation-protocols.rst:4 msgid "" -"Include SecAgg, SecAgg+, and LightSecAgg protocol. The LightSecAgg protocol " -"has not been implemented yet, so its diagram and abstraction may not be " -"accurate in practice. The SecAgg protocol can be considered as a special " -"case of the SecAgg+ protocol." +"Include SecAgg, SecAgg+, and LightSecAgg protocol. The LightSecAgg " +"protocol has not been implemented yet, so its diagram and abstraction may" +" not be accurate in practice. The SecAgg protocol can be considered as a " +"special case of the SecAgg+ protocol." 
msgstr "" -"包括 SecAgg、SecAgg+ 和 LightSecAgg 协议。LightSecAgg 协议尚未实施,因此其图" -"表和抽象在实践中可能并不准确。SecAgg 协议可视为 SecAgg+ 协议的特例。" +"包括 SecAgg、SecAgg+ 和 LightSecAgg 协议。LightSecAgg " +"协议尚未实施,因此其图表和抽象在实践中可能并不准确。SecAgg 协议可视为 SecAgg+ 协议的特例。" #: ../../source/contributor-ref-secure-aggregation-protocols.rst:8 msgid "The :code:`SecAgg+` abstraction" @@ -1409,17 +1383,17 @@ msgstr "代码:`SecAgg+` 抽象" #: ../../source/contributor-ref-secure-aggregation-protocols.rst:161 msgid "" "In this implementation, each client will be assigned with a unique index " -"(int) for secure aggregation, and thus many python dictionaries used have " -"keys of int type rather than ClientProxy type." +"(int) for secure aggregation, and thus many python dictionaries used have" +" keys of int type rather than ClientProxy type." msgstr "" -"在此实现中,将为每个客户端分配一个唯一索引(int),以确保聚合的安全性,因此使" -"用的许多 python 字典的键都是 int 类型,而不是 ClientProxy 类型。" +"在此实现中,将为每个客户端分配一个唯一索引(int),以确保聚合的安全性,因此使用的许多 python 字典的键都是 int 类型,而不是 " +"ClientProxy 类型。" #: ../../source/contributor-ref-secure-aggregation-protocols.rst:65 #: ../../source/contributor-ref-secure-aggregation-protocols.rst:198 msgid "" -"The Flower server will execute and process received results in the following " -"order:" +"The Flower server will execute and process received results in the " +"following order:" msgstr "Flower 服务器将按以下顺序执行和处理收到的结果:" #: ../../source/contributor-ref-secure-aggregation-protocols.rst:159 @@ -1436,20 +1410,20 @@ msgstr "在 GitHub 上投稿" #: ../../source/contributor-tutorial-contribute-on-github.rst:4 msgid "" -"This guide is for people who want to get involved with Flower, but who are " -"not used to contributing to GitHub projects." +"This guide is for people who want to get involved with Flower, but who " +"are not used to contributing to GitHub projects." msgstr "本指南适用于想参与 Flower,但不习惯为 GitHub 项目贡献的人。" #: ../../source/contributor-tutorial-contribute-on-github.rst:6 #, fuzzy msgid "" -"If you're familiar with how contributing on GitHub works, you can directly " -"checkout our :doc:`getting started guide for contributors `." +"If you're familiar with how contributing on GitHub works, you can " +"directly checkout our :doc:`getting started guide for contributors " +"`." msgstr "" -"如果您熟悉如何在 GitHub 上贡献,可以直接查看我们的 \"贡献者入门指南\" " -"`_ 和 \"优秀的" -"首次贡献示例\" `_。" +"如果您熟悉如何在 GitHub 上贡献,可以直接查看我们的 \"贡献者入门指南\" `_ 和 \"优秀的首次贡献示例\" " +"`_。" #: ../../source/contributor-tutorial-contribute-on-github.rst:10 msgid "Setting up the repository" @@ -1465,38 +1439,34 @@ msgid "" "Git is a distributed version control tool. This allows for an entire " "codebase's history to be stored and every developer's machine. It is a " "software that will need to be installed on your local machine, you can " -"follow this `guide `_ to set it up." +"follow this `guide `_ to set it up." msgstr "" -"Git 是一种分布式版本控制工具。它可以将整个代码库的历史记录保存在每个开发人员" -"的机器上。您需要在本地计算机上安装该软件,可以按照本指南 `_ 进行设置。" +"Git 是一种分布式版本控制工具。它可以将整个代码库的历史记录保存在每个开发人员的机器上。您需要在本地计算机上安装该软件,可以按照本指南 " +"`_ 进行设置。" #: ../../source/contributor-tutorial-contribute-on-github.rst:16 msgid "" "GitHub, itself, is a code hosting platform for version control and " -"collaboration. It allows for everyone to collaborate and work from anywhere " -"on remote repositories." -msgstr "" -"GitHub 本身是一个用于版本控制和协作的代码托管平台。它允许每个人在任何地方对远" -"程仓库进行协作和工作。" +"collaboration. It allows for everyone to collaborate and work from " +"anywhere on remote repositories." 
+msgstr "GitHub 本身是一个用于版本控制和协作的代码托管平台。它允许每个人在任何地方对远程仓库进行协作和工作。" #: ../../source/contributor-tutorial-contribute-on-github.rst:18 msgid "" "If you haven't already, you will need to create an account on `GitHub " "`_." -msgstr "" -"如果还没有,您需要在 `GitHub `_ 上创建一个账户。" +msgstr "如果还没有,您需要在 `GitHub `_ 上创建一个账户。" #: ../../source/contributor-tutorial-contribute-on-github.rst:20 msgid "" -"The idea behind the generic Git and GitHub workflow boils down to this: you " -"download code from a remote repository on GitHub, make changes locally and " -"keep track of them using Git and then you upload your new history back to " -"GitHub." +"The idea behind the generic Git and GitHub workflow boils down to this: " +"you download code from a remote repository on GitHub, make changes " +"locally and keep track of them using Git and then you upload your new " +"history back to GitHub." msgstr "" -"通用的 Git 和 GitHub 工作流程背后的理念可以归结为:从 GitHub 上的远程仓库下载" -"代码,在本地进行修改并使用 Git 进行跟踪,然后将新的历史记录上传回 GitHub。" +"通用的 Git 和 GitHub 工作流程背后的理念可以归结为:从 GitHub 上的远程仓库下载代码,在本地进行修改并使用 Git " +"进行跟踪,然后将新的历史记录上传回 GitHub。" #: ../../source/contributor-tutorial-contribute-on-github.rst:32 msgid "**Forking the Flower repository**" @@ -1505,25 +1475,23 @@ msgstr "**叉花仓库**" #: ../../source/contributor-tutorial-contribute-on-github.rst:24 #, fuzzy msgid "" -"A fork is a personal copy of a GitHub repository. To create one for Flower, " -"you must navigate to ``_ (while connected to " -"your GitHub account) and click the ``Fork`` button situated on the top right " -"of the page." +"A fork is a personal copy of a GitHub repository. To create one for " +"Flower, you must navigate to ``_ (while " +"connected to your GitHub account) and click the ``Fork`` button situated " +"on the top right of the page." msgstr "" "fork 是 GitHub 仓库的个人副本。要为 Flower 创建一个 fork,您必须导航到 " -"https://github.com/adap/flower(同时连接到您的 GitHub 账户),然后点击页面右" -"上方的 ``Fork`` 按钮。" +"https://github.com/adap/flower(同时连接到您的 GitHub 账户),然后点击页面右上方的 ``Fork`` 按钮。" #: ../../source/contributor-tutorial-contribute-on-github.rst:29 msgid "" "You can change the name if you want, but this is not necessary as this " -"version of Flower will be yours and will sit inside your own account (i.e., " -"in your own list of repositories). Once created, you should see on the top " -"left corner that you are looking at your own version of Flower." +"version of Flower will be yours and will sit inside your own account " +"(i.e., in your own list of repositories). Once created, you should see on" +" the top left corner that you are looking at your own version of Flower." msgstr "" -"您可以更改名称,但没有必要,因为这个版本的 Flower 将是您自己的,并位于您自己" -"的账户中(即,在您自己的版本库列表中)。创建完成后,您会在左上角看到自己的 " -"Flower 版本。" +"您可以更改名称,但没有必要,因为这个版本的 Flower " +"将是您自己的,并位于您自己的账户中(即,在您自己的版本库列表中)。创建完成后,您会在左上角看到自己的 Flower 版本。" #: ../../source/contributor-tutorial-contribute-on-github.rst:47 msgid "**Cloning your forked repository**" @@ -1532,29 +1500,25 @@ msgstr "**克隆你的分叉仓库**" #: ../../source/contributor-tutorial-contribute-on-github.rst:35 msgid "" "The next step is to download the forked repository on your machine to be " -"able to make changes to it. On your forked repository page, you should first " -"click on the ``Code`` button on the right, this will give you the ability to " -"copy the HTTPS link of the repository." +"able to make changes to it. On your forked repository page, you should " +"first click on the ``Code`` button on the right, this will give you the " +"ability to copy the HTTPS link of the repository." 
msgstr "" -"下一步是在你的机器上下载分叉版本库,以便对其进行修改。在分叉版本库页面上,首" -"先点击右侧的 \"代码 \"按钮,这样就能复制版本库的 HTTPS 链接。" +"下一步是在你的机器上下载分叉版本库,以便对其进行修改。在分叉版本库页面上,首先点击右侧的 \"代码 \"按钮,这样就能复制版本库的 HTTPS " +"链接。" #: ../../source/contributor-tutorial-contribute-on-github.rst:41 msgid "" "Once you copied the \\, you can open a terminal on your machine, " "navigate to the place you want to download the repository to and type:" -msgstr "" -"一旦复制了 (),你就可以在你的机器上打开一个终端,导航到你想下载软件" -"源的地方,然后键入:" +msgstr "一旦复制了 (),你就可以在你的机器上打开一个终端,导航到你想下载软件源的地方,然后键入:" #: ../../source/contributor-tutorial-contribute-on-github.rst:47 #, fuzzy msgid "" -"This will create a ``flower/`` (or the name of your fork if you renamed it) " -"folder in the current working directory." -msgstr "" -"这将在当前工作目录下创建一个 `flower/`(如果重命名了,则使用 fork 的名称)文" -"件夹。" +"This will create a ``flower/`` (or the name of your fork if you renamed " +"it) folder in the current working directory." +msgstr "这将在当前工作目录下创建一个 `flower/`(如果重命名了,则使用 fork 的名称)文件夹。" #: ../../source/contributor-tutorial-contribute-on-github.rst:66 msgid "**Add origin**" @@ -1566,14 +1530,13 @@ msgstr "然后,您就可以进入存储库文件夹:" #: ../../source/contributor-tutorial-contribute-on-github.rst:56 msgid "" -"And here we will need to add an origin to our repository. The origin is the " -"\\ of the remote fork repository. To obtain it, we can do as " -"previously mentioned by going to our fork repository on our GitHub account " -"and copying the link." +"And here we will need to add an origin to our repository. The origin is " +"the \\ of the remote fork repository. To obtain it, we can do as " +"previously mentioned by going to our fork repository on our GitHub " +"account and copying the link." msgstr "" "在这里,我们需要为我们的版本库添加一个 origin。origin 是远程 fork 仓库的 " -"\\。要获得它,我们可以像前面提到的那样,访问 GitHub 账户上的分叉仓库并" -"复制链接。" +"\\。要获得它,我们可以像前面提到的那样,访问 GitHub 账户上的分叉仓库并复制链接。" #: ../../source/contributor-tutorial-contribute-on-github.rst:61 msgid "" @@ -1590,33 +1553,28 @@ msgstr "**增加上游**" msgid "" "Now we will add an upstream address to our repository. Still in the same " "directory, we must run the following command:" -msgstr "" -"现在,我们要为版本库添加一个上游地址。还是在同一目录下,我们必须运行以下命" -"令:" +msgstr "现在,我们要为版本库添加一个上游地址。还是在同一目录下,我们必须运行以下命令:" #: ../../source/contributor-tutorial-contribute-on-github.rst:76 -msgid "" -"The following diagram visually explains what we did in the previous steps:" +msgid "The following diagram visually explains what we did in the previous steps:" msgstr "下图直观地解释了我们在前面步骤中的操作:" #: ../../source/contributor-tutorial-contribute-on-github.rst:80 msgid "" -"The upstream is the GitHub remote address of the parent repository (in this " -"case Flower), i.e. the one we eventually want to contribute to and therefore " -"need an up-to-date history of. The origin is just the GitHub remote address " -"of the forked repository we created, i.e. the copy (fork) in our own account." +"The upstream is the GitHub remote address of the parent repository (in " +"this case Flower), i.e. the one we eventually want to contribute to and " +"therefore need an up-to-date history of. The origin is just the GitHub " +"remote address of the forked repository we created, i.e. the copy (fork) " +"in our own account." 
msgstr "" -"上游是父版本库(这里是 Flower)的 GitHub 远程地址,即我们最终要贡献的版本库," -"因此需要最新的历史记录。origin 只是我们创建的分叉仓库的 GitHub 远程地址,即我" -"们自己账户中的副本(分叉)。" +"上游是父版本库(这里是 Flower)的 GitHub 远程地址,即我们最终要贡献的版本库,因此需要最新的历史记录。origin " +"只是我们创建的分叉仓库的 GitHub 远程地址,即我们自己账户中的副本(分叉)。" #: ../../source/contributor-tutorial-contribute-on-github.rst:84 msgid "" "To make sure our local version of the fork is up-to-date with the latest " "changes from the Flower repository, we can execute the following command:" -msgstr "" -"为了确保本地版本的分叉程序与 Flower 代码库的最新更改保持一致,我们可以执行以" -"下命令:" +msgstr "为了确保本地版本的分叉程序与 Flower 代码库的最新更改保持一致,我们可以执行以下命令:" #: ../../source/contributor-tutorial-contribute-on-github.rst:93 msgid "Setting up the coding environment" @@ -1626,12 +1584,10 @@ msgstr "设置编码环境" #, fuzzy msgid "" "This can be achieved by following this :doc:`getting started guide for " -"contributors ` (note that " -"you won't need to clone the repository). Once you are able to write code and " -"test it, you can finally start making changes!" -msgstr "" -"您可以按照这份 \"贡献者入门指南\"__(注意,您不需要克隆版本库)来实现这一点。" -"一旦您能够编写代码并进行测试,您就可以开始修改了!" +"contributors ` (note " +"that you won't need to clone the repository). Once you are able to write " +"code and test it, you can finally start making changes!" +msgstr "您可以按照这份 \"贡献者入门指南\"__(注意,您不需要克隆版本库)来实现这一点。一旦您能够编写代码并进行测试,您就可以开始修改了!" #: ../../source/contributor-tutorial-contribute-on-github.rst:100 msgid "Making changes" @@ -1639,7 +1595,8 @@ msgstr "做出改变" #: ../../source/contributor-tutorial-contribute-on-github.rst:102 msgid "" -"Before making any changes make sure you are up-to-date with your repository:" +"Before making any changes make sure you are up-to-date with your " +"repository:" msgstr "在进行任何更改之前,请确保您的版本库是最新的:" #: ../../source/contributor-tutorial-contribute-on-github.rst:108 @@ -1652,15 +1609,15 @@ msgstr "**创建一个新分支**" #: ../../source/contributor-tutorial-contribute-on-github.rst:115 msgid "" -"To make the history cleaner and easier to work with, it is good practice to " -"create a new branch for each feature/project that needs to be implemented." -msgstr "" -"为了使历史记录更简洁、更易于操作,为每个需要实现的功能/项目创建一个新分支是个" -"不错的做法。" +"To make the history cleaner and easier to work with, it is good practice " +"to create a new branch for each feature/project that needs to be " +"implemented." +msgstr "为了使历史记录更简洁、更易于操作,为每个需要实现的功能/项目创建一个新分支是个不错的做法。" #: ../../source/contributor-tutorial-contribute-on-github.rst:118 msgid "" -"To do so, just run the following command inside the repository's directory:" +"To do so, just run the following command inside the repository's " +"directory:" msgstr "为此,只需在版本库目录下运行以下命令即可:" #: ../../source/contributor-tutorial-contribute-on-github.rst:125 @@ -1668,8 +1625,7 @@ msgid "**Make changes**" msgstr "**进行修改**" #: ../../source/contributor-tutorial-contribute-on-github.rst:125 -msgid "" -"Write great code and create wonderful changes using your favorite editor!" +msgid "Write great code and create wonderful changes using your favorite editor!" msgstr "使用您最喜欢的编辑器编写优秀的代码并创建精彩的更改!" #: ../../source/contributor-tutorial-contribute-on-github.rst:138 @@ -1678,12 +1634,10 @@ msgstr "**测试并格式化您的代码**" #: ../../source/contributor-tutorial-contribute-on-github.rst:128 msgid "" -"Don't forget to test and format your code! Otherwise your code won't be able " -"to be merged into the Flower repository. This is done so the codebase stays " -"consistent and easy to understand." -msgstr "" -"不要忘记测试和格式化您的代码!否则您的代码将无法并入 Flower 代码库。这样做是" -"为了使代码库保持一致并易于理解。" +"Don't forget to test and format your code! 
Otherwise your code won't be " +"able to be merged into the Flower repository. This is done so the " +"codebase stays consistent and easy to understand." +msgstr "不要忘记测试和格式化您的代码!否则您的代码将无法并入 Flower 代码库。这样做是为了使代码库保持一致并易于理解。" #: ../../source/contributor-tutorial-contribute-on-github.rst:131 msgid "To do so, we have written a few scripts that you can execute:" @@ -1695,8 +1649,8 @@ msgstr "**舞台变化**" #: ../../source/contributor-tutorial-contribute-on-github.rst:141 msgid "" -"Before creating a commit that will update your history, you must specify to " -"Git which files it needs to take into account." +"Before creating a commit that will update your history, you must specify " +"to Git which files it needs to take into account." msgstr "在创建更新历史记录的提交之前,必须向 Git 说明需要考虑哪些文件。" #: ../../source/contributor-tutorial-contribute-on-github.rst:143 @@ -1705,12 +1659,10 @@ msgstr "这可以通过:" #: ../../source/contributor-tutorial-contribute-on-github.rst:149 msgid "" -"To check which files have been modified compared to the last version (last " -"commit) and to see which files are staged for commit, you can use the :code:" -"`git status` command." -msgstr "" -"要查看与上一版本(上次提交)相比哪些文件已被修改,以及哪些文件处于提交阶段," -"可以使用 :code:`git status` 命令。" +"To check which files have been modified compared to the last version " +"(last commit) and to see which files are staged for commit, you can use " +"the :code:`git status` command." +msgstr "要查看与上一版本(上次提交)相比哪些文件已被修改,以及哪些文件处于提交阶段,可以使用 :code:`git status` 命令。" #: ../../source/contributor-tutorial-contribute-on-github.rst:160 msgid "**Commit changes**" @@ -1720,17 +1672,16 @@ msgstr "**提交更改**" msgid "" "Once you have added all the files you wanted to commit using :code:`git " "add`, you can finally create your commit using this command:" -msgstr "" -"使用 :code:`git add` 添加完所有要提交的文件后,就可以使用此命令创建提交了:" +msgstr "使用 :code:`git add` 添加完所有要提交的文件后,就可以使用此命令创建提交了:" #: ../../source/contributor-tutorial-contribute-on-github.rst:159 msgid "" -"The \\ is there to explain to others what the commit does. " -"It should be written in an imperative style and be concise. An example would " -"be :code:`git commit -m \"Add images to README\"`." +"The \\ is there to explain to others what the commit " +"does. It should be written in an imperative style and be concise. An " +"example would be :code:`git commit -m \"Add images to README\"`." msgstr "" -" 用于向他人解释提交的作用。它应该以命令式风格书写,并且简明" -"扼要。例如 :code:`git commit -m \"Add images to README\"`。" +" 用于向他人解释提交的作用。它应该以命令式风格书写,并且简明扼要。例如 :code:`git commit " +"-m \"Add images to README\"`。" #: ../../source/contributor-tutorial-contribute-on-github.rst:171 msgid "**Push the changes to the fork**" @@ -1738,32 +1689,29 @@ msgstr "**将更改推送到分叉**" #: ../../source/contributor-tutorial-contribute-on-github.rst:163 msgid "" -"Once we have committed our changes, we have effectively updated our local " -"history, but GitHub has no way of knowing this unless we push our changes to " -"our origin's remote address:" -msgstr "" -"一旦提交了修改,我们就有效地更新了本地历史记录,但除非我们将修改推送到原点的" -"远程地址,否则 GitHub 无法得知:" +"Once we have committed our changes, we have effectively updated our local" +" history, but GitHub has no way of knowing this unless we push our " +"changes to our origin's remote address:" +msgstr "一旦提交了修改,我们就有效地更新了本地历史记录,但除非我们将修改推送到原点的远程地址,否则 GitHub 无法得知:" #: ../../source/contributor-tutorial-contribute-on-github.rst:170 msgid "" "Once this is done, you will see on the GitHub that your forked repo was " "updated with the changes you have made." 
-msgstr "" -"完成此操作后,您将在 GitHub 上看到您的分叉仓库已根据您所做的更改进行了更新。" +msgstr "完成此操作后,您将在 GitHub 上看到您的分叉仓库已根据您所做的更改进行了更新。" #: ../../source/contributor-tutorial-contribute-on-github.rst:174 msgid "Creating and merging a pull request (PR)" msgstr "创建和合并拉取请求 (PR)" -#: ../../source/contributor-tutorial-contribute-on-github.rst:205 +#: ../../source/contributor-tutorial-contribute-on-github.rst:206 msgid "**Create the PR**" msgstr "**创建 PR**" #: ../../source/contributor-tutorial-contribute-on-github.rst:177 msgid "" -"Once you have pushed changes, on the GitHub webpage of your repository you " -"should see the following message:" +"Once you have pushed changes, on the GitHub webpage of your repository " +"you should see the following message:" msgstr "推送更改后,在仓库的 GitHub 网页上应该会看到以下信息:" #: ../../source/contributor-tutorial-contribute-on-github.rst:181 @@ -1779,393 +1727,282 @@ msgid "" msgstr "点击 \"比较和拉取请求 \"按钮后,您应该会看到类似下面的内容:" #: ../../source/contributor-tutorial-contribute-on-github.rst:187 -msgid "" -"At the top you have an explanation of which branch will be merged where:" +msgid "At the top you have an explanation of which branch will be merged where:" msgstr "在顶部,你可以看到关于哪个分支将被合并的说明:" #: ../../source/contributor-tutorial-contribute-on-github.rst:191 msgid "" -"In this example you can see that the request is to merge the branch ``doc-" -"fixes`` from my forked repository to branch ``main`` from the Flower " -"repository." -msgstr "" -"在这个例子中,你可以看到请求将我分叉的版本库中的分支 ``doc-fixes`` 合并到 " -"Flower 版本库中的分支 ``main``。" +"In this example you can see that the request is to merge the branch " +"``doc-fixes`` from my forked repository to branch ``main`` from the " +"Flower repository." +msgstr "在这个例子中,你可以看到请求将我分叉的版本库中的分支 ``doc-fixes`` 合并到 Flower 版本库中的分支 ``main``。" #: ../../source/contributor-tutorial-contribute-on-github.rst:193 msgid "" -"The input box in the middle is there for you to describe what your PR does " -"and to link it to existing issues. We have placed comments (that won't be " -"rendered once the PR is opened) to guide you through the process." +"The title should be changed to adhere to the :ref:`pr_title_format` " +"guidelines, otherwise it won't be possible to merge the PR. So in this " +"case, a correct title might be ``docs(framework:skip) Fix typos``." msgstr "" -"中间的输入框供您描述 PR 的作用,并将其与现有问题联系起来。我们在此放置了注释" -"(一旦 PR 打开,注释将不会显示),以指导您完成整个过程。" #: ../../source/contributor-tutorial-contribute-on-github.rst:196 -#, fuzzy msgid "" -"It is important to follow the instructions described in comments. For " -"instance, in order to not break how our changelog system works, you should " -"read the information above the ``Changelog entry`` section carefully. You " -"can also checkout some examples and details in the :ref:`changelogentry` " -"appendix." +"The input box in the middle is there for you to describe what your PR " +"does and to link it to existing issues. We have placed comments (that " +"won't be rendered once the PR is opened) to guide you through the " +"process." +msgstr "中间的输入框供您描述 PR 的作用,并将其与现有问题联系起来。我们在此放置了注释(一旦 PR 打开,注释将不会显示),以指导您完成整个过程。" + +#: ../../source/contributor-tutorial-contribute-on-github.rst:199 +msgid "It is important to follow the instructions described in comments." msgstr "" -"请务必遵守注释中的说明。例如,为了不破坏我们的更新日志系统,你应该仔细阅读\"`" -"更新日志条目``\"部分上面的信息。您还可以查看 :ref:`changelogentry` 附录中的一" -"些示例和细节。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:200 +#: ../../source/contributor-tutorial-contribute-on-github.rst:201 msgid "" "At the bottom you will find the button to open the PR. 
This will notify " -"reviewers that a new PR has been opened and that they should look over it to " -"merge or to request changes." -msgstr "" -"在底部,您可以找到打开 PR 的按钮。这将通知审核人员新的 PR 已经打开,他们应该" -"查看该 PR 以进行合并或要求修改。" +"reviewers that a new PR has been opened and that they should look over it" +" to merge or to request changes." +msgstr "在底部,您可以找到打开 PR 的按钮。这将通知审核人员新的 PR 已经打开,他们应该查看该 PR 以进行合并或要求修改。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:203 +#: ../../source/contributor-tutorial-contribute-on-github.rst:204 msgid "" -"If your PR is not yet ready for review, and you don't want to notify anyone, " -"you have the option to create a draft pull request:" -msgstr "" -"如果您的 PR 尚未准备好接受审核,而且您不想通知任何人,您可以选择创建一个草案" -"拉取请求:" +"If your PR is not yet ready for review, and you don't want to notify " +"anyone, you have the option to create a draft pull request:" +msgstr "如果您的 PR 尚未准备好接受审核,而且您不想通知任何人,您可以选择创建一个草案拉取请求:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:208 +#: ../../source/contributor-tutorial-contribute-on-github.rst:209 msgid "**Making new changes**" msgstr "**作出新的改变**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:208 +#: ../../source/contributor-tutorial-contribute-on-github.rst:209 msgid "" "Once the PR has been opened (as draft or not), you can still push new " -"commits to it the same way we did before, by making changes to the branch " -"associated with the PR." -msgstr "" -"一旦 PR 被打开(无论是否作为草案),你仍然可以像以前一样,通过修改与 PR 关联" -"的分支来推送新的提交。" +"commits to it the same way we did before, by making changes to the branch" +" associated with the PR." +msgstr "一旦 PR 被打开(无论是否作为草案),你仍然可以像以前一样,通过修改与 PR 关联的分支来推送新的提交。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:230 +#: ../../source/contributor-tutorial-contribute-on-github.rst:231 msgid "**Review the PR**" msgstr "**审查 PR**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:211 +#: ../../source/contributor-tutorial-contribute-on-github.rst:212 msgid "" -"Once the PR has been opened or once the draft PR has been marked as ready, a " -"review from code owners will be automatically requested:" +"Once the PR has been opened or once the draft PR has been marked as " +"ready, a review from code owners will be automatically requested:" msgstr "一旦 PR 被打开或 PR 草案被标记为就绪,就会自动要求代码所有者进行审核:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:215 +#: ../../source/contributor-tutorial-contribute-on-github.rst:216 msgid "" -"Code owners will then look into the code, ask questions, request changes or " -"validate the PR." +"Code owners will then look into the code, ask questions, request changes " +"or validate the PR." msgstr "然后,代码所有者会查看代码、提出问题、要求修改或验证 PR。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:217 +#: ../../source/contributor-tutorial-contribute-on-github.rst:218 msgid "Merging will be blocked if there are ongoing requested changes." 
msgstr "如果有正在进行的更改请求,合并将被阻止。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:221 +#: ../../source/contributor-tutorial-contribute-on-github.rst:222 msgid "" -"To resolve them, just push the necessary changes to the branch associated " -"with the PR:" +"To resolve them, just push the necessary changes to the branch associated" +" with the PR:" msgstr "要解决这些问题,只需将必要的更改推送到与 PR 关联的分支即可:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:225 +#: ../../source/contributor-tutorial-contribute-on-github.rst:226 msgid "And resolve the conversation:" msgstr "并解决对话:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:229 +#: ../../source/contributor-tutorial-contribute-on-github.rst:230 msgid "" -"Once all the conversations have been resolved, you can re-request a review." +"Once all the conversations have been resolved, you can re-request a " +"review." msgstr "一旦所有对话都得到解决,您就可以重新申请审核。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:250 +#: ../../source/contributor-tutorial-contribute-on-github.rst:251 msgid "**Once the PR is merged**" msgstr "**一旦 PR 被合并**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:233 +#: ../../source/contributor-tutorial-contribute-on-github.rst:234 msgid "" -"If all the automatic tests have passed and reviewers have no more changes to " -"request, they can approve the PR and merge it." -msgstr "" -"如果所有自动测试都已通过,且审核员不再需要修改,他们就可以批准 PR 并将其合" -"并。" +"If all the automatic tests have passed and reviewers have no more changes" +" to request, they can approve the PR and merge it." +msgstr "如果所有自动测试都已通过,且审核员不再需要修改,他们就可以批准 PR 并将其合并。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:237 +#: ../../source/contributor-tutorial-contribute-on-github.rst:238 msgid "" "Once it is merged, you can delete the branch on GitHub (a button should " "appear to do so) and also delete it locally by doing:" -msgstr "" -"合并后,您可以在 GitHub 上删除该分支(会出现一个删除按钮),也可以在本地删除" -"该分支:" +msgstr "合并后,您可以在 GitHub 上删除该分支(会出现一个删除按钮),也可以在本地删除该分支:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:244 +#: ../../source/contributor-tutorial-contribute-on-github.rst:245 msgid "Then you should update your forked repository by doing:" msgstr "然后,你应该更新你的分叉仓库:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:253 +#: ../../source/contributor-tutorial-contribute-on-github.rst:254 msgid "Example of first contribution" msgstr "首次捐款实例" -#: ../../source/contributor-tutorial-contribute-on-github.rst:256 +#: ../../source/contributor-tutorial-contribute-on-github.rst:257 msgid "Problem" msgstr "问题" -#: ../../source/contributor-tutorial-contribute-on-github.rst:258 +#: ../../source/contributor-tutorial-contribute-on-github.rst:259 #, fuzzy msgid "" -"For our documentation, we've started to use the `Diàtaxis framework `_." -msgstr "" -"对于我们的文档,我们已经开始使用 \"Diàtaxis 框架 `_\"。" +"For our documentation, we've started to use the `Diàtaxis framework " +"`_." +msgstr "对于我们的文档,我们已经开始使用 \"Diàtaxis 框架 `_\"。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:260 +#: ../../source/contributor-tutorial-contribute-on-github.rst:261 #, fuzzy msgid "" -"Our \"How to\" guides should have titles that continue the sentence \"How to " -"…\", for example, \"How to upgrade to Flower 1.0\"." -msgstr "" -"我们的 \"如何 \"指南的标题应延续 \"如何...... \"的句式,例如 \"如何升级到 " -"Flower 1.0\"。" +"Our \"How to\" guides should have titles that continue the sentence \"How" +" to …\", for example, \"How to upgrade to Flower 1.0\"." +msgstr "我们的 \"如何 \"指南的标题应延续 \"如何...... 
\"的句式,例如 \"如何升级到 Flower 1.0\"。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:262 +#: ../../source/contributor-tutorial-contribute-on-github.rst:263 msgid "" "Most of our guides do not follow this new format yet, and changing their " "title is (unfortunately) more involved than one might think." -msgstr "" -"我们的大多数指南还没有采用这种新格式,而更改其标题(不幸的是)比人们想象的要" -"复杂得多。" +msgstr "我们的大多数指南还没有采用这种新格式,而更改其标题(不幸的是)比人们想象的要复杂得多。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:264 +#: ../../source/contributor-tutorial-contribute-on-github.rst:265 #, fuzzy msgid "" -"This issue is about changing the title of a doc from present continuous to " -"present simple." +"This issue is about changing the title of a doc from present continuous " +"to present simple." msgstr "这个问题是关于将文档标题从现在进行时改为现在进行时。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:266 +#: ../../source/contributor-tutorial-contribute-on-github.rst:267 #, fuzzy msgid "" "Let's take the example of \"Saving Progress\" which we changed to \"Save " "Progress\". Does this pass our check?" -msgstr "" -"以 \"保存进度 \"为例,我们将其改为 \"保存进度\"。这是否通过了我们的检查?" +msgstr "以 \"保存进度 \"为例,我们将其改为 \"保存进度\"。这是否通过了我们的检查?" -#: ../../source/contributor-tutorial-contribute-on-github.rst:268 +#: ../../source/contributor-tutorial-contribute-on-github.rst:269 #, fuzzy msgid "Before: \"How to saving progress\" ❌" msgstr "之前: \"如何保存进度\" ❌" -#: ../../source/contributor-tutorial-contribute-on-github.rst:270 +#: ../../source/contributor-tutorial-contribute-on-github.rst:271 #, fuzzy msgid "After: \"How to save progress\" ✅" msgstr "之后: \"如何保存进度\"✅" -#: ../../source/contributor-tutorial-contribute-on-github.rst:273 +#: ../../source/contributor-tutorial-contribute-on-github.rst:274 msgid "Solution" msgstr "解决方案" -#: ../../source/contributor-tutorial-contribute-on-github.rst:275 +#: ../../source/contributor-tutorial-contribute-on-github.rst:276 #, fuzzy msgid "" "This is a tiny change, but it'll allow us to test your end-to-end setup. " "After cloning and setting up the Flower repo, here's what you should do:" -msgstr "" -"这只是一个很小的改动,但可以让我们测试你的端到端设置。克隆并设置好 Flower " -"repo 后,你应该这样做:" +msgstr "这只是一个很小的改动,但可以让我们测试你的端到端设置。克隆并设置好 Flower repo 后,你应该这样做:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:277 +#: ../../source/contributor-tutorial-contribute-on-github.rst:278 #, fuzzy msgid "Find the source file in ``doc/source``" msgstr "在 `doc/source` 中查找源文件" -#: ../../source/contributor-tutorial-contribute-on-github.rst:278 +#: ../../source/contributor-tutorial-contribute-on-github.rst:279 #, fuzzy msgid "" "Make the change in the ``.rst`` file (beware, the dashes under the title " "should be the same length as the title itself)" msgstr "在 `.rst` 文件中进行修改(注意,标题下的破折号应与标题本身的长度相同)" -#: ../../source/contributor-tutorial-contribute-on-github.rst:279 +#: ../../source/contributor-tutorial-contribute-on-github.rst:280 #, fuzzy msgid "" -"Build the docs and `check the result `_" +"Build the docs and `check the result `_" msgstr "" -"构建文档并检查结果: ``_" +"构建文档并检查结果: ``_" -#: ../../source/contributor-tutorial-contribute-on-github.rst:282 +#: ../../source/contributor-tutorial-contribute-on-github.rst:283 msgid "Rename file" msgstr "重命名文件" -#: ../../source/contributor-tutorial-contribute-on-github.rst:284 +#: ../../source/contributor-tutorial-contribute-on-github.rst:285 msgid "" -"You might have noticed that the file name still reflects the old wording. 
If " -"we just change the file, then we break all existing links to it - it is " -"**very important** to avoid that, breaking links can harm our search engine " -"ranking." +"You might have noticed that the file name still reflects the old wording." +" If we just change the file, then we break all existing links to it - it " +"is **very important** to avoid that, breaking links can harm our search " +"engine ranking." msgstr "" -"您可能已经注意到,文件名仍然反映了旧的措辞。如果我们只是更改文件,那么就会破" -"坏与该文件的所有现有链接--避免这种情况是***重要的,破坏链接会损害我们的搜索引" -"擎排名。" +"您可能已经注意到,文件名仍然反映了旧的措辞。如果我们只是更改文件,那么就会破坏与该文件的所有现有链接--" +"避免这种情况是***重要的,破坏链接会损害我们的搜索引擎排名。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:287 +#: ../../source/contributor-tutorial-contribute-on-github.rst:288 #, fuzzy msgid "Here's how to change the file name:" msgstr "下面是更改文件名的方法:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:289 +#: ../../source/contributor-tutorial-contribute-on-github.rst:290 #, fuzzy msgid "Change the file name to ``save-progress.rst``" msgstr "将文件名改为`save-progress.rst`" -#: ../../source/contributor-tutorial-contribute-on-github.rst:290 +#: ../../source/contributor-tutorial-contribute-on-github.rst:291 #, fuzzy msgid "Add a redirect rule to ``doc/source/conf.py``" msgstr "在 `doc/source/conf.py` 中添加重定向规则" -#: ../../source/contributor-tutorial-contribute-on-github.rst:292 +#: ../../source/contributor-tutorial-contribute-on-github.rst:293 #, fuzzy msgid "" -"This will cause a redirect from ``saving-progress.html`` to ``save-progress." -"html``, old links will continue to work." -msgstr "" -"这将导致从 `saving-progress.html` 重定向到 `save-progress.html`,旧链接将继续" -"工作。" +"This will cause a redirect from ``saving-progress.html`` to ``save-" +"progress.html``, old links will continue to work." +msgstr "这将导致从 `saving-progress.html` 重定向到 `save-progress.html`,旧链接将继续工作。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:295 +#: ../../source/contributor-tutorial-contribute-on-github.rst:296 msgid "Apply changes in the index file" msgstr "应用索引文件中的更改" -#: ../../source/contributor-tutorial-contribute-on-github.rst:297 +#: ../../source/contributor-tutorial-contribute-on-github.rst:298 #, fuzzy msgid "" "For the lateral navigation bar to work properly, it is very important to " "update the ``index.rst`` file as well. This is where we define the whole " "arborescence of the navbar." 
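The rename described in the strings above only avoids breaking old links if a redirect rule really is added to ``doc/source/conf.py``. As a minimal sketch only, assuming the ``sphinx-reredirects`` extension is what handles redirects (the actual mechanism used in Flower's ``conf.py`` may differ), such a rule could look like this::

    # Hypothetical excerpt for doc/source/conf.py, assuming sphinx-reredirects
    # is installed and listed among the Sphinx extensions.
    extensions = [
        # ... existing Sphinx extensions ...
        "sphinx_reredirects",
    ]

    # Map the old document name to the new HTML file, so that requests for
    # saving-progress.html keep resolving to save-progress.html.
    redirects = {
        "saving-progress": "save-progress.html",
    }

With such a mapping in place, old links keep working while the lateral navigation entry in ``index.rst`` points at the renamed file.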
-msgstr "" -"要使横向导航栏正常工作,更新 `index.rst` 文件也非常重要。我们就是在这里定义整" -"个导航栏的结构。" +msgstr "要使横向导航栏正常工作,更新 `index.rst` 文件也非常重要。我们就是在这里定义整个导航栏的结构。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:300 +#: ../../source/contributor-tutorial-contribute-on-github.rst:301 #, fuzzy msgid "Find and modify the file name in ``index.rst``" msgstr "查找并修改 `index.rst` 中的文件名" -#: ../../source/contributor-tutorial-contribute-on-github.rst:303 +#: ../../source/contributor-tutorial-contribute-on-github.rst:304 msgid "Open PR" msgstr "开放式 PR" -#: ../../source/contributor-tutorial-contribute-on-github.rst:305 +#: ../../source/contributor-tutorial-contribute-on-github.rst:306 #, fuzzy msgid "" -"Commit the changes (commit messages are always imperative: \"Do something\", " -"in this case \"Change …\")" +"Commit the changes (commit messages are always imperative: \"Do " +"something\", in this case \"Change …\")" msgstr "提交更改(提交信息总是命令式的:\"做某事\",这里是 \"更改......\")" -#: ../../source/contributor-tutorial-contribute-on-github.rst:306 +#: ../../source/contributor-tutorial-contribute-on-github.rst:307 msgid "Push the changes to your fork" msgstr "将更改推送到分叉" -#: ../../source/contributor-tutorial-contribute-on-github.rst:307 -msgid "Open a PR (as shown above)" -msgstr "打开 PR(如上图所示)" - #: ../../source/contributor-tutorial-contribute-on-github.rst:308 +msgid "" +"Open a PR (as shown above) with title ``docs(framework) Update how-to " +"guide title``" +msgstr "" + +#: ../../source/contributor-tutorial-contribute-on-github.rst:309 msgid "Wait for it to be approved!" msgstr "等待审批!" -#: ../../source/contributor-tutorial-contribute-on-github.rst:309 +#: ../../source/contributor-tutorial-contribute-on-github.rst:310 msgid "Congrats! 🥳 You're now officially a Flower contributor!" msgstr "祝贺你 🥳 您现在正式成为 \"Flower \"贡献者!" -#: ../../source/contributor-tutorial-contribute-on-github.rst:313 -msgid "How to write a good PR title" -msgstr "如何撰写好的公关标题" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:315 -msgid "" -"A well-crafted PR title helps team members quickly understand the purpose " -"and scope of the changes being proposed. Here's a guide to help you write a " -"good GitHub PR title:" -msgstr "" -"一个精心撰写的公关标题能帮助团队成员迅速了解所提修改的目的和范围。以下指南可" -"帮助您撰写一个好的 GitHub PR 标题:" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:317 -msgid "" -"1. Be Clear and Concise: Provide a clear summary of the changes in a concise " -"manner. 1. Use Actionable Verbs: Start with verbs like \"Add,\" \"Update,\" " -"or \"Fix\" to indicate the purpose. 1. Include Relevant Information: Mention " -"the affected feature or module for context. 1. Keep it Short: Avoid lengthy " -"titles for easy readability. 1. Use Proper Capitalization and Punctuation: " -"Follow grammar rules for clarity." -msgstr "" -"1. 简明扼要: 以简明扼要的方式清楚地概述变化。1. 使用可操作的动词: 使用 \"添" -"加\"、\"更新 \"或 \"修复 \"等动词来表明目的。1. 包含相关信息: 提及受影响的功" -"能或模块以了解上下文。1. 简短:避免冗长的标题,以方便阅读。1. 
使用正确的大小" -"写和标点符号: 遵守语法规则,以确保清晰。" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:323 -msgid "" -"Let's start with a few examples for titles that should be avoided because " -"they do not provide meaningful information:" -msgstr "让我们先举例说明几个应该避免使用的标题,因为它们不能提供有意义的信息:" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:325 -msgid "Implement Algorithm" -msgstr "执行算法" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:326 -msgid "Database" -msgstr "数据库" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:327 -msgid "Add my_new_file.py to codebase" -msgstr "在代码库中添加 my_new_file.py" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:328 -msgid "Improve code in module" -msgstr "改进模块中的代码" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:329 -msgid "Change SomeModule" -msgstr "更改 SomeModule" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:331 -msgid "" -"Here are a few positive examples which provide helpful information without " -"repeating how they do it, as that is already visible in the \"Files " -"changed\" section of the PR:" -msgstr "" -"这里有几个正面的例子,提供了有用的信息,但没有重复他们是如何做的,因为在 PR " -"的 \"已更改文件 \"部分已经可以看到:" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:333 -msgid "Update docs banner to mention Flower Summit 2023" -msgstr "更新文件横幅,提及 2023 年 Flower 峰会" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:334 -msgid "Remove unnecessary XGBoost dependency" -msgstr "移除不必要的 XGBoost 依赖性" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:335 -msgid "Remove redundant attributes in strategies subclassing FedAvg" -msgstr "删除 FedAvg 子类化策略中的多余属性" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:336 -#, fuzzy -msgid "" -"Add CI job to deploy the staging system when the ``main`` branch changes" -msgstr "添加 CI 作业,以便在 \"主 \"分支发生变化时部署暂存系统" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:337 -msgid "" -"Add new amazing library which will be used to improve the simulation engine" -msgstr "添加新的惊人库,用于改进模拟引擎" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:341 +#: ../../source/contributor-tutorial-contribute-on-github.rst:314 #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:548 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:946 #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:727 @@ -2174,205 +2011,104 @@ msgstr "添加新的惊人库,用于改进模拟引擎" msgid "Next steps" msgstr "接下来的步骤" -#: ../../source/contributor-tutorial-contribute-on-github.rst:343 +#: ../../source/contributor-tutorial-contribute-on-github.rst:316 msgid "" -"Once you have made your first PR, and want to contribute more, be sure to " -"check out the following :" +"Once you have made your first PR, and want to contribute more, be sure to" +" check out the following :" msgstr "一旦您完成了第一份 PR,并希望做出更多贡献,请务必查看以下内容:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:345 +#: ../../source/contributor-tutorial-contribute-on-github.rst:318 #, fuzzy msgid "" -":doc:`Good first contributions `, " -"where you should particularly look into the :code:`baselines` contributions." +":doc:`Good first contributions `, where you should particularly look into the " +":code:`baselines` contributions." 
msgstr "" "`优秀的首次贡献 `_,在这里你应该特别看看 :code:`baselines` 的贡献。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:349 +#: ../../source/contributor-tutorial-contribute-on-github.rst:322 #: ../../source/fed/0000-20200102-fed-template.md:60 msgid "Appendix" msgstr "附录" -#: ../../source/contributor-tutorial-contribute-on-github.rst:354 -#, fuzzy -msgid "Changelog entry" -msgstr "更新日志" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:356 -#, fuzzy -msgid "" -"When opening a new PR, inside its description, there should be a ``Changelog " -"entry`` header." -msgstr "打开一个新 PR 时,在其描述中应有一个 ``Changelog entry`` 标头。" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:358 -#, fuzzy -msgid "" -"Above this header you should see the following comment that explains how to " -"write your changelog entry:" -msgstr "在页眉上方,你会看到以下注释,说明如何编写更新日志条目:" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:360 -#, fuzzy -msgid "" -"Inside the following 'Changelog entry' section, you should put the " -"description of your changes that will be added to the changelog alongside " -"your PR title." +#: ../../source/contributor-tutorial-contribute-on-github.rst:327 +msgid "PR title format" msgstr "" -"在下面的 \"更新日志条目 \"部分中,您应该在 PR 标题旁边写上将添加到更新日志中" -"的更改描述。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:363 -#, fuzzy -msgid "" -"If the section is completely empty (without any token) or non-existent, the " -"changelog will just contain the title of the PR for the changelog entry, " -"without any description." +#: ../../source/contributor-tutorial-contribute-on-github.rst:329 +msgid "We enforce the following PR title format:" msgstr "" -"如果该部分完全为空(没有任何标记)或不存在,更新日志将只包含更新日志条目的 " -"PR 标题,而不包含任何描述。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:366 -#, fuzzy +#: ../../source/contributor-tutorial-contribute-on-github.rst:335 msgid "" -"If the section contains some text other than tokens, it will use it to add a " -"description to the change." -msgstr "如果该部分包含标记以外的文本,它将使用这些文本为更改添加说明。" +"(or ``(:skip) `` to ignore the PR in the " +"changelog)" +msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:368 -#, fuzzy +#: ../../source/contributor-tutorial-contribute-on-github.rst:337 msgid "" -"If the section contains one of the following tokens it will ignore any other " -"text and put the PR under the corresponding section of the changelog:" +"Where ```` needs to be in ``{ci, fix, feat, docs, refactor, " +"break}``, ```` should be in ``{framework, baselines, datasets, " +"examples, or '*' when modifying multiple projects which requires the " +"':skip' flag to be used}``, and ```` starts with a capitalised " +"verb in the imperative mood." msgstr "" -"如果该部分包含以下标记之一,它将忽略任何其他文本,并将 PR 放在更新日志的相应" -"部分下:" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:370 -#, fuzzy -msgid " is for classifying a PR as a general improvement." 
-msgstr " 用于将 PR 划分为一般改进。" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:372 -#, fuzzy -msgid " is to not add the PR to the changelog" -msgstr "表示不将 PR 添加到更新日志中" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:374 -#, fuzzy -msgid " is to add a general baselines change to the PR" -msgstr " 是指在 PR 中添加一般基线更改" -#: ../../source/contributor-tutorial-contribute-on-github.rst:376 -#, fuzzy -msgid " is to add a general examples change to the PR" -msgstr " 是在 PR 中添加对一般示例的修改" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:378 -#, fuzzy -msgid " is to add a general sdk change to the PR" -msgstr " 是指在 PR 中添加一般的 sdk 更改" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:380 -#, fuzzy -msgid " is to add a general simulations change to the PR" -msgstr "(模拟)是在 PR 中添加一般模拟变更" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:382 -#, fuzzy -msgid "Note that only one token should be used." -msgstr "请注意,只能使用一个标记。" - -#: ../../source/contributor-tutorial-contribute-on-github.rst:384 +#: ../../source/contributor-tutorial-contribute-on-github.rst:341 #, fuzzy -msgid "" -"Its content must have a specific format. We will break down what each " -"possibility does:" -msgstr "其内容必须有特定的格式。我们将分析每种可能性的作用:" +msgid "Valid examples:" +msgstr "实例" -#: ../../source/contributor-tutorial-contribute-on-github.rst:386 -#, fuzzy -msgid "" -"If the ``### Changelog entry`` section contains nothing or doesn't exist, " -"the following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:343 +msgid "``feat(framework) Add flwr build CLI command``" msgstr "" -"如果 ``#### Changelog entry`` 部分不包含任何内容或不存在,则会在更新日志中添" -"加以下文本::" -#: ../../source/contributor-tutorial-contribute-on-github.rst:390 -#, fuzzy -msgid "" -"If the ``### Changelog entry`` section contains a description (and no " -"token), the following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:344 +msgid "``refactor(examples:skip) Improve quickstart-pytorch logging``" msgstr "" -"如果 ``#### Changelog entry`` 部分包含描述(但没有标记),则会在更新日志中添" -"加以下文本::" -#: ../../source/contributor-tutorial-contribute-on-github.rst:396 -#, fuzzy -msgid "" -"If the ``### Changelog entry`` section contains ````, nothing will " -"change in the changelog." 
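To make the enforced ``<type>(<project>) <subject>`` convention described in the surrounding entries more concrete, here is a small sketch of how such titles could be validated in Python. It is only an illustration of the stated rules, not the actual check run in Flower's CI, and it can only verify capitalisation, not that the subject truly starts with an imperative verb::

    import re

    # Hypothetical validator for the documented PR title format.
    TITLE_RE = re.compile(
        r"^(ci|fix|feat|docs|refactor|break)"                       # <type>
        r"\((framework|baselines|datasets|examples|\*)(:skip)?\) "  # <project> with optional :skip
        r"[A-Z].*[^.\s]$"                                           # capitalised subject, no trailing dot
    )

    def is_valid_pr_title(title: str) -> bool:
        match = TITLE_RE.match(title)
        if match is None:
            return False
        # '*' as the project is only allowed together with the ':skip' flag.
        return not (match.group(2) == "*" and match.group(3) is None)

    assert is_valid_pr_title("feat(framework) Add flwr build CLI command")
    assert is_valid_pr_title("ci(*:skip) Enforce PR title format")
    assert not is_valid_pr_title("feat(framework): Add flwr build CLI command")
    assert not is_valid_pr_title("feat(framework) Add flwr build CLI command.")
    assert not is_valid_pr_title("feat(*) Add flwr build CLI command")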
+#: ../../source/contributor-tutorial-contribute-on-github.rst:345 +msgid "``ci(*:skip) Enforce PR title format``" msgstr "" -"如果 ``#### Changelog entry`` 部分包含 ````,更新日志中将不会有任何更" -"改。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:398 +#: ../../source/contributor-tutorial-contribute-on-github.rst:347 #, fuzzy -msgid "" -"If the ``### Changelog entry`` section contains ````, the following " -"text will be added to the changelog::" +msgid "Invalid examples:" +msgstr "模拟示例" + +#: ../../source/contributor-tutorial-contribute-on-github.rst:349 +msgid "``feat(framework): Add flwr build CLI command`` (extra ``:``)" msgstr "" -"如果 ``### Changelog entry`` 部分包含 ````,则会在更新日志中添加以下" -"文本::" -#: ../../source/contributor-tutorial-contribute-on-github.rst:402 -#, fuzzy +#: ../../source/contributor-tutorial-contribute-on-github.rst:350 msgid "" -"If the ``### Changelog entry`` section contains ````, the " -"following text will be added to the changelog::" +"``feat(*) Add flwr build CLI command`` (missing ``skip`` flag along with " +"``*``)" msgstr "" -"如果``### 更新日志条目``部分包含``<基准线>``,则会在更新日志中添加以下文" -"本::" -#: ../../source/contributor-tutorial-contribute-on-github.rst:406 -#, fuzzy -msgid "" -"If the ``### Changelog entry`` section contains ````, the " -"following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:351 +msgid "``feat(skip) Add flwr build CLI command`` (missing ````)" msgstr "" -"如果``### 更新日志条目``部分包含``<示例>``,则会在更新日志中添加以下文本::" -#: ../../source/contributor-tutorial-contribute-on-github.rst:410 -#, fuzzy -msgid "" -"If the ``### Changelog entry`` section contains ````, the following " -"text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:352 +msgid "``feat(framework) add flwr build CLI command`` (non capitalised verb)" msgstr "" -"如果``### 更新日志条目``部分包含````,则会在更新日志中添加以下文本::" -#: ../../source/contributor-tutorial-contribute-on-github.rst:414 -#, fuzzy -msgid "" -"If the ``### Changelog entry`` section contains ````, the " -"following text will be added to the changelog::" +#: ../../source/contributor-tutorial-contribute-on-github.rst:353 +msgid "``feat(framework) Add flwr build CLI command.`` (dot at the end)" msgstr "" -"如果 ``### Changelog entry`` 部分包含 ````,则会在更新日志中添加" -"以下文本::" -#: ../../source/contributor-tutorial-contribute-on-github.rst:418 -#, fuzzy -msgid "" -"Note that only one token must be provided, otherwise, only the first action " -"(in the order listed above), will be performed." -msgstr "请注意,必须只提供一个标记,否则将只执行第一个操作(按上述顺序)。" +#: ../../source/contributor-tutorial-contribute-on-github.rst:354 +msgid "``Add flwr build CLI command.`` (missing ``()``)" +msgstr "" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:2 msgid "Get started as a contributor" msgstr "成为贡献者" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:5 +#: ../../source/how-to-run-flower-using-docker.rst:132 msgid "Prerequisites" msgstr "先决条件" @@ -2390,21 +2126,18 @@ msgid "(Optional) `pyenv `_" msgstr "(可选) `pyenv `_" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:10 -msgid "" -"(Optional) `pyenv-virtualenv `_" -msgstr "" -"(可选) `pyenv-virtualenv `_" +msgid "(Optional) `pyenv-virtualenv `_" +msgstr "(可选) `pyenv-virtualenv `_" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:12 #, fuzzy msgid "" "Flower uses :code:`pyproject.toml` to manage dependencies and configure " -"development tools (the ones which support it). 
Poetry is a build tool which " -"supports `PEP 517 `_." +"development tools (the ones which support it). Poetry is a build tool " +"which supports `PEP 517 `_." msgstr "" -"Flower 使用 :code:`pyproject.toml` 来管理依赖关系和配置开发工具(支持它的)。" -"Poetry 是一种支持 `PEP 517 `_ 的构" -"建工具。" +"Flower 使用 :code:`pyproject.toml` 来管理依赖关系和配置开发工具(支持它的)。Poetry 是一种支持 `PEP " +"517 `_ 的构建工具。" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:18 msgid "Developer Machine Setup" @@ -2428,17 +2161,15 @@ msgstr "适用于 macOS" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:27 #, fuzzy msgid "" -"Install `homebrew `_. Don't forget the post-installation " -"actions to add `brew` to your PATH." -msgstr "" -"安装 `homebrew `_。别忘了安装后的操作,将 `brew` 添加到你" -"的 PATH。" +"Install `homebrew `_. Don't forget the post-" +"installation actions to add `brew` to your PATH." +msgstr "安装 `homebrew `_。别忘了安装后的操作,将 `brew` 添加到你的 PATH。" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:28 #, fuzzy msgid "" -"Install `xz` (to install different Python versions) and `pandoc` to build " -"the docs::" +"Install `xz` (to install different Python versions) and `pandoc` to build" +" the docs::" msgstr "安装 `xz`(用于安装不同的 Python 版本)和 `pandoc` 以构建文档::" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:34 @@ -2449,8 +2180,8 @@ msgstr "针对 Ubuntu" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:35 #, fuzzy msgid "" -"Ensure you system (Ubuntu 22.04+) is up-to-date, and you have all necessary " -"packages::" +"Ensure you system (Ubuntu 22.04+) is up-to-date, and you have all " +"necessary packages::" msgstr "确保您的系统(Ubuntu 22.04+)为最新版本,并安装了所有必要的软件包::" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:44 @@ -2463,50 +2194,46 @@ msgstr "创建/删除虚拟环境" msgid "" "1. Clone the `Flower repository `_ from " "GitHub::" -msgstr "" -"首先,从 GitHub 克隆 \"Flower 存储库 `_\":" +msgstr "首先,从 GitHub 克隆 \"Flower 存储库 `_\":" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:52 #, fuzzy msgid "" -"Let's create the Python environment for all-things Flower. If you wish to " -"use :code:`pyenv`, we provide two convenience scripts that you can use. If " -"you prefer using something else than :code:`pyenv`, create a new " +"Let's create the Python environment for all-things Flower. If you wish to" +" use :code:`pyenv`, we provide two convenience scripts that you can use. " +"If you prefer using something else than :code:`pyenv`, create a new " "environment, activate and skip to the last point where all packages are " "installed." 
msgstr "" -"让我们为 Flower 创建一个 Python 环境。如果您想使用 :code:`pyenv`,我们提供了" -"两个方便的脚本供您使用。如果你不喜欢使用 :code:`pyenv`,请创建一个新环境,激" -"活并跳到最后一点,即安装所有软件包。" +"让我们为 Flower 创建一个 Python 环境。如果您想使用 :code:`pyenv`,我们提供了两个方便的脚本供您使用。如果你不喜欢使用" +" :code:`pyenv`,请创建一个新环境,激活并跳到最后一点,即安装所有软件包。" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:54 #, fuzzy msgid "" -"If you don't have :code:`pyenv` installed, the following script that will " -"install it, set it up, and create the virtual environment (with :code:" -"`Python 3.8.17` by default)::" +"If you don't have :code:`pyenv` installed, the following script that will" +" install it, set it up, and create the virtual environment (with " +":code:`Python 3.8.17` by default)::" msgstr "" -"如果没有安装 :code:`pyenv`,可以使用以下脚本安装 pyenv、设置并创建虚拟环境" -"(默认使用 :code:`Python3.8.17)::" +"如果没有安装 :code:`pyenv`,可以使用以下脚本安装 pyenv、设置并创建虚拟环境(默认使用 " +":code:`Python3.8.17)::" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:58 #, fuzzy msgid "" "If you already have :code:`pyenv` installed (along with the :code:`pyenv-" -"virtualenv` plugin), you can use the following convenience script (with :" -"code:`Python 3.8.17` by default)::" +"virtualenv` plugin), you can use the following convenience script (with " +":code:`Python 3.8.17` by default)::" msgstr "" -"如果没有安装 :code:`pyenv`,可以使用以下脚本安装 pyenv、设置并创建虚拟环境" -"(默认使用 :code:`Python3.8.17)::" +"如果没有安装 :code:`pyenv`,可以使用以下脚本安装 pyenv、设置并创建虚拟环境(默认使用 " +":code:`Python3.8.17)::" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:62 #, fuzzy msgid "" -"3. Install the Flower package in development mode (think :code:`pip install -" -"e`) along with all necessary dependencies::" -msgstr "" -"第三,在开发模式下安装 Flower 软件包(想想 :code:`pip install -e`)以及所有必" -"要的依赖项::" +"3. Install the Flower package in development mode (think :code:`pip " +"install -e`) along with all necessary dependencies::" +msgstr "第三,在开发模式下安装 Flower 软件包(想想 :code:`pip install -e`)以及所有必要的依赖项::" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:69 msgid "Convenience Scripts" @@ -2515,12 +2242,10 @@ msgstr "便捷脚本" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:71 msgid "" "The Flower repository contains a number of convenience scripts to make " -"recurring development tasks easier and less error-prone. See the :code:`/" -"dev` subdirectory for a full list. The following scripts are amongst the " -"most important ones:" -msgstr "" -"Flower 软件仓库包含大量便捷脚本,可使重复性开发任务更轻松、更不易出错。完整列" -"表请参见 :code:`/dev` 子目录。以下是最重要的脚本:" +"recurring development tasks easier and less error-prone. See the " +":code:`/dev` subdirectory for a full list. The following scripts are " +"amongst the most important ones:" +msgstr "Flower 软件仓库包含大量便捷脚本,可使重复性开发任务更轻松、更不易出错。完整列表请参见 :code:`/dev` 子目录。以下是最重要的脚本:" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:77 msgid "Create/Delete Virtual Environment" @@ -2539,58 +2264,101 @@ msgid "Run Linters and Tests" msgstr "运行分类器和测试" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:106 +msgid "Add a pre-commit hook" +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:108 +msgid "" +"Developers may integrate a pre-commit hook into their workflow utilizing " +"the `pre-commit `_ library. The pre-" +"commit hook is configured to execute two primary operations: " +"``./dev/format.sh`` and ``./dev/test.sh`` scripts." 
+msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:110 +msgid "There are multiple ways developers can use this:" +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:112 +msgid "Install the pre-commit hook to your local git directory by simply running:" +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:118 +msgid "" +"Each ``git commit`` will trigger the execution of formatting and " +"linting/test scripts." +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:119 +msgid "" +"If in a hurry, bypass the hook using ``--no-verify`` with the ``git " +"commit`` command. ::" +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:124 +msgid "" +"For developers who prefer not to install the hook permanently, it is " +"possible to execute a one-time check prior to committing changes by using" +" the following command:" +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:130 +msgid "" +"This executes the formatting and linting checks/tests on all the files " +"without modifying the default behavior of ``git commit``." +msgstr "" + +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:133 msgid "Run Github Actions (CI) locally" msgstr "在本地运行 Github 操作 (CI)" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:108 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:135 #, fuzzy msgid "" -"Developers could run the full set of Github Actions workflows under their " -"local environment by using `Act `_. Please " -"refer to the installation instructions under the linked repository and run " -"the next command under Flower main cloned repository folder::" +"Developers could run the full set of Github Actions workflows under their" +" local environment by using `Act `_. " +"Please refer to the installation instructions under the linked repository" +" and run the next command under Flower main cloned repository folder::" msgstr "" -"开发人员可以使用 `Act _` 在本地环境下运行全套 " -"Github Actions 工作流程。请参考链接仓库下的安装说明,并在 Flower 主克隆仓库文" -"件夹下运行下一条命令::" +"开发人员可以使用 `Act _` 在本地环境下运行全套 Github Actions" +" 工作流程。请参考链接仓库下的安装说明,并在 Flower 主克隆仓库文件夹下运行下一条命令::" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:115 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:142 msgid "" "The Flower default workflow would run by setting up the required Docker " "machines underneath." msgstr "Flower 默认工作流程将通过在下面设置所需的 Docker 机器来运行。" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:120 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:147 msgid "Build Release" msgstr "版本发布" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:122 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:149 msgid "" -"Flower uses Poetry to build releases. The necessary command is wrapped in a " -"simple script::" +"Flower uses Poetry to build releases. The necessary command is wrapped in" +" a simple script::" msgstr "Flower 使用 Poetry 创建发布版本。必要的命令封装在一个简单的脚本中::" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:127 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:154 msgid "" -"The resulting :code:`.whl` and :code:`.tar.gz` releases will be stored in " -"the :code:`/dist` subdirectory." 
-msgstr "" -"生成的 :code:`.whl` 和 :code:`.tar.gz` 版本将存储在 :code:`/dist` 子目录中。" +"The resulting :code:`.whl` and :code:`.tar.gz` releases will be stored in" +" the :code:`/dist` subdirectory." +msgstr "生成的 :code:`.whl` 和 :code:`.tar.gz` 版本将存储在 :code:`/dist` 子目录中。" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:132 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:159 msgid "Build Documentation" msgstr "构建文档" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:134 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:161 msgid "" -"Flower's documentation uses `Sphinx `_. There's " -"no convenience script to re-build the documentation yet, but it's pretty " -"easy::" +"Flower's documentation uses `Sphinx `_. " +"There's no convenience script to re-build the documentation yet, but it's" +" pretty easy::" msgstr "" -"Flower 的文档使用 `Sphinx `_。目前还没有很方便的" -"脚本来重新构建文档,不过这很容易::" +"Flower 的文档使用 `Sphinx `_。目前还没有很方便的脚本来重新构建文档,不过这很容易::" -#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:140 +#: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:167 msgid "This will generate HTML documentation in ``doc/build/html``." msgstr "这将在 ``doc/build/html`` 中生成 HTML 文档。" @@ -2601,19 +2369,19 @@ msgstr "示例: PyTorch 中的 FedBN - 从集中式到联邦式" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:4 #, fuzzy msgid "" -"This tutorial will show you how to use Flower to build a federated version " -"of an existing machine learning workload with `FedBN `_, a federated training strategy designed for non-iid data. We " -"are using PyTorch to train a Convolutional Neural Network(with Batch " -"Normalization layers) on the CIFAR-10 dataset. When applying FedBN, only few " -"changes needed compared to :doc:`Example: PyTorch - From Centralized To " -"Federated `." +"This tutorial will show you how to use Flower to build a federated " +"version of an existing machine learning workload with `FedBN " +"`_, a federated training strategy " +"designed for non-iid data. We are using PyTorch to train a Convolutional " +"Neural Network(with Batch Normalization layers) on the CIFAR-10 dataset. " +"When applying FedBN, only few changes needed compared to :doc:`Example: " +"PyTorch - From Centralized To Federated `." msgstr "" -"本教程将向您展示如何使用 Flower 为现有的机器学习框架构建一个联邦学习的版本," -"并使用 \"FedBN `_\"(一种针对非 iid 数据设" -"计的联邦训练策略)。我们使用 PyTorch 在 CIFAR-10 数据集上训练一个卷积神经网络" -"(带有Batch Normalization层)。在应用 FedBN 时,只需对 `示例: PyTorch - 从集" -"中式到联邦式 `_\"(一种针对非 iid 数据设计的联邦训练策略)。我们使用 PyTorch 在 CIFAR-10 " +"数据集上训练一个卷积神经网络(带有Batch Normalization层)。在应用 FedBN 时,只需对 `示例: PyTorch - " +"从集中式到联邦式 `_ 做少量改动。" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:9 @@ -2624,14 +2392,14 @@ msgstr "集中式训练" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:10 #, fuzzy msgid "" -"All files are revised based on :doc:`Example: PyTorch - From Centralized To " -"Federated `. The only thing " -"to do is modifying the file called :code:`cifar.py`, revised part is shown " -"below:" +"All files are revised based on :doc:`Example: PyTorch - From Centralized " +"To Federated `. 
The only " +"thing to do is modifying the file called :code:`cifar.py`, revised part " +"is shown below:" msgstr "" -"所有文件均根据 `示例: PyTorch -从集中式到联邦式 `_。唯一要做的就是修改名" -"为 :code:`cifar.py` 的文件,修改部分如下所示:" +"所有文件均根据 `示例: PyTorch -从集中式到联邦式 `_。唯一要做的就是修改名为 :code:`cifar.py` " +"的文件,修改部分如下所示:" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:13 msgid "" @@ -2647,14 +2415,13 @@ msgstr "现在,您可以运行您的机器学习工作了:" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:47 #, fuzzy msgid "" -"So far this should all look fairly familiar if you've used PyTorch before. " -"Let's take the next step and use what we've built to create a federated " -"learning system within FedBN, the system consists of one server and two " -"clients." +"So far this should all look fairly familiar if you've used PyTorch " +"before. Let's take the next step and use what we've built to create a " +"federated learning system within FedBN, the system consists of one server" +" and two clients." msgstr "" -"到目前为止,如果您以前使用过 PyTorch,这一切看起来应该相当熟悉。让我们进行下" -"一步,使用我们所构建的内容在 FedBN 中创建一个联邦学习系统,该系统由一个服务器" -"和两个客户端组成。" +"到目前为止,如果您以前使用过 PyTorch,这一切看起来应该相当熟悉。让我们进行下一步,使用我们所构建的内容在 FedBN " +"中创建一个联邦学习系统,该系统由一个服务器和两个客户端组成。" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:51 #: ../../source/example-pytorch-from-centralized-to-federated.rst:167 @@ -2665,37 +2432,34 @@ msgstr "联邦培训" #, fuzzy msgid "" "If you have read :doc:`Example: PyTorch - From Centralized To Federated " -"`, the following parts are " -"easy to follow, only :code:`get_parameters` and :code:`set_parameters` " -"function in :code:`client.py` needed to revise. If not, please read the :doc:" -"`Example: PyTorch - From Centralized To Federated `. first." -msgstr "" -"如果你读过 `示例: PyTorch - 从集中式到联邦式 `_,下面的部分就很容易理" -"解了,只需要修改 :code:`get_parameters` 和 :code:`set_parameters` 中的 :code:" -"`client.py` 函数。如果没有,请阅读 `示例: PyTorch - 从集中式到联邦式 " -"`_。" +"`, the following parts are" +" easy to follow, only :code:`get_parameters` and :code:`set_parameters` " +"function in :code:`client.py` needed to revise. If not, please read the " +":doc:`Example: PyTorch - From Centralized To Federated `. first." +msgstr "" +"如果你读过 `示例: PyTorch - 从集中式到联邦式 `_,下面的部分就很容易理解了,只需要修改 " +":code:`get_parameters` 和 :code:`set_parameters` 中的 :code:`client.py` " +"函数。如果没有,请阅读 `示例: PyTorch - 从集中式到联邦式 `_。" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:56 msgid "" -"Our example consists of one *server* and two *clients*. In FedBN, :code:" -"`server.py` keeps unchanged, we can start the server directly." -msgstr "" -"我们的示例包括一个*服务器*和两个*客户端*。在 FedBN 中,:code:`server.py` 保持" -"不变,我们可以直接启动服务器。" +"Our example consists of one *server* and two *clients*. In FedBN, " +":code:`server.py` keeps unchanged, we can start the server directly." +msgstr "我们的示例包括一个*服务器*和两个*客户端*。在 FedBN 中,:code:`server.py` 保持不变,我们可以直接启动服务器。" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:62 msgid "" -"Finally, we will revise our *client* logic by changing :code:" -"`get_parameters` and :code:`set_parameters` in :code:`client.py`, we will " -"exclude batch normalization parameters from model parameter list when " -"sending to or receiving from the server." +"Finally, we will revise our *client* logic by changing " +":code:`get_parameters` and :code:`set_parameters` in :code:`client.py`, " +"we will exclude batch normalization parameters from model parameter list " +"when sending to or receiving from the server." 
msgstr "" -"最后,我们将修改 *client* 的逻辑,修改 :code:`client.py` 中的 :code:" -"`get_parameters` 和 :code:`set_parameters`,在向服务器发送或从服务器接收时," -"我们将从模型参数列表中排除batch normalization层的参数。" +"最后,我们将修改 *client* 的逻辑,修改 :code:`client.py` 中的 :code:`get_parameters` 和 " +":code:`set_parameters`,在向服务器发送或从服务器接收时,我们将从模型参数列表中排除batch " +"normalization层的参数。" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:85 msgid "Now, you can now open two additional terminal windows and run" @@ -2703,16 +2467,13 @@ msgstr "现在,您可以打开另外两个终端窗口并运行程序" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:91 msgid "" -"in each window (make sure that the server is still running before you do so) " -"and see your (previously centralized) PyTorch project run federated learning " -"with FedBN strategy across two clients. Congratulations!" -msgstr "" -"确保服务器仍在运行后,然后您就能看到您的 PyTorch 项目(之前是集中式的)通过 " -"FedBN 策略在两个客户端上运行联合学习。祝贺!" +"in each window (make sure that the server is still running before you do " +"so) and see your (previously centralized) PyTorch project run federated " +"learning with FedBN strategy across two clients. Congratulations!" +msgstr "确保服务器仍在运行后,然后您就能看到您的 PyTorch 项目(之前是集中式的)通过 FedBN 策略在两个客户端上运行联合学习。祝贺!" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:94 #: ../../source/example-jax-from-centralized-to-federated.rst:277 -#: ../../source/example-mxnet-walk-through.rst:356 #: ../../source/example-pytorch-from-centralized-to-federated.rst:310 #: ../../source/tutorial-quickstart-jax.rst:283 msgid "Next Steps" @@ -2720,17 +2481,18 @@ msgstr "下一步工作" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:96 msgid "" -"The full source code for this example can be found `here `_. Our " -"example is of course somewhat over-simplified because both clients load the " -"exact same dataset, which isn't realistic. You're now prepared to explore " -"this topic further. How about using different subsets of CIFAR-10 on each " -"client? How about adding more clients?" +"The full source code for this example can be found `here " +"`_. Our example is of course somewhat over-" +"simplified because both clients load the exact same dataset, which isn't " +"realistic. You're now prepared to explore this topic further. How about " +"using different subsets of CIFAR-10 on each client? How about adding more" +" clients?" msgstr "" -"本示例的完整源代码可在 `_ 找到。当然,我们的示例有些过于简单," -"因为两个客户端都加载了完全相同的数据集,这并不真实。让我们准备好进一步探讨这" -"一主题。如在每个客户端使用不同的 CIFAR-10 子集,或者增加客户端的数量。" +"本示例的完整源代码可在 `_ " +"找到。当然,我们的示例有些过于简单,因为两个客户端都加载了完全相同的数据集,这并不真实。让我们准备好进一步探讨这一主题。如在每个客户端使用不同的 " +"CIFAR-10 子集,或者增加客户端的数量。" #: ../../source/example-jax-from-centralized-to-federated.rst:2 msgid "Example: JAX - Run JAX Federated" @@ -2739,32 +2501,32 @@ msgstr "示例: JAX - 运行联邦式 JAX" #: ../../source/example-jax-from-centralized-to-federated.rst:4 #: ../../source/tutorial-quickstart-jax.rst:10 msgid "" -"This tutorial will show you how to use Flower to build a federated version " -"of an existing JAX workload. We are using JAX to train a linear regression " -"model on a scikit-learn dataset. We will structure the example similar to " -"our `PyTorch - From Centralized To Federated `_ walkthrough. " -"First, we build a centralized training approach based on the `Linear " -"Regression with JAX `_ tutorial`. Then, we build upon the centralized " -"training code to run the training in a federated fashion." 
-msgstr "" -"本教程将向您展示如何使用 Flower 构建现有 JAX 的联邦学习版本。我们将使用 JAX " -"在 scikit-learn 数据集上训练线性回归模型。我们将采用与 `PyTorch - 从集中式到" -"联邦式 `_ 教程中类似的示例结构。首先,我们根据 `JAX 的线性" -"回归 `_ 教程构建集中式训练方法。然后,我们在集中式训练代码的基础上以联邦方式" -"运行训练。" +"This tutorial will show you how to use Flower to build a federated " +"version of an existing JAX workload. We are using JAX to train a linear " +"regression model on a scikit-learn dataset. We will structure the example" +" similar to our `PyTorch - From Centralized To Federated " +"`_ walkthrough. First, we build a centralized " +"training approach based on the `Linear Regression with JAX " +"`_" +" tutorial`. Then, we build upon the centralized training code to run the " +"training in a federated fashion." +msgstr "" +"本教程将向您展示如何使用 Flower 构建现有 JAX 的联邦学习版本。我们将使用 JAX 在 scikit-learn " +"数据集上训练线性回归模型。我们将采用与 `PyTorch - 从集中式到联邦式 " +"`_ 教程中类似的示例结构。首先,我们根据 `JAX 的线性回归 " +"`_" +" 教程构建集中式训练方法。然后,我们在集中式训练代码的基础上以联邦方式运行训练。" #: ../../source/example-jax-from-centralized-to-federated.rst:10 #: ../../source/tutorial-quickstart-jax.rst:16 msgid "" -"Before we start building our JAX example, we need install the packages :code:" -"`jax`, :code:`jaxlib`, :code:`scikit-learn`, and :code:`flwr`:" +"Before we start building our JAX example, we need install the packages " +":code:`jax`, :code:`jaxlib`, :code:`scikit-learn`, and :code:`flwr`:" msgstr "" -"在开始构建 JAX 示例之前,我们需要安装软件包 :code:`jax`、:code:`jaxlib`、:" -"code:`scikit-learn` 和 :code:`flwr`:" +"在开始构建 JAX 示例之前,我们需要安装软件包 :code:`jax`、:code:`jaxlib`、:code:`scikit-learn` " +"和 :code:`flwr`:" #: ../../source/example-jax-from-centralized-to-federated.rst:18 #: ../../source/tutorial-quickstart-jax.rst:24 @@ -2774,83 +2536,77 @@ msgstr "使用 JAX 进行线性回归" #: ../../source/example-jax-from-centralized-to-federated.rst:20 #: ../../source/tutorial-quickstart-jax.rst:26 msgid "" -"We begin with a brief description of the centralized training code based on " -"a :code:`Linear Regression` model. If you want a more in-depth explanation " -"of what's going on then have a look at the official `JAX documentation " -"`_." +"We begin with a brief description of the centralized training code based " +"on a :code:`Linear Regression` model. If you want a more in-depth " +"explanation of what's going on then have a look at the official `JAX " +"documentation `_." msgstr "" -"首先,我们将简要介绍基于 :code:`Linear Regression` 模型的集中式训练代码。如果" -"您想获得更深入的解释,请参阅官方的 `JAX 文档 `_。" +"首先,我们将简要介绍基于 :code:`Linear Regression` 模型的集中式训练代码。如果您想获得更深入的解释,请参阅官方的 " +"`JAX 文档 `_。" #: ../../source/example-jax-from-centralized-to-federated.rst:23 #: ../../source/tutorial-quickstart-jax.rst:29 msgid "" "Let's create a new file called :code:`jax_training.py` with all the " "components required for a traditional (centralized) linear regression " -"training. First, the JAX packages :code:`jax` and :code:`jaxlib` need to be " -"imported. In addition, we need to import :code:`sklearn` since we use :code:" -"`make_regression` for the dataset and :code:`train_test_split` to split the " -"dataset into a training and test set. You can see that we do not yet import " -"the :code:`flwr` package for federated learning. This will be done later." -msgstr "" -"让我们创建一个名为 :code:`jax_training.py` 的新文件,其中包含传统(集中式)线" -"性回归训练所需的所有组件。首先,需要导入 JAX 包 :code:`jax` 和 :code:" -"`jaxlib`。此外,我们还需要导入 :code:`sklearn`,因为我们使用 :code:" -"`make_regression` 创建数据集,并使用 :code:`train_test_split` 将数据集拆分成" -"训练集和测试集。您可以看到,我们还没有导入用于联邦学习的 :code:`flwr` 软件" -"包,这将在稍后完成。" +"training. First, the JAX packages :code:`jax` and :code:`jaxlib` need to " +"be imported. 
In addition, we need to import :code:`sklearn` since we use " +":code:`make_regression` for the dataset and :code:`train_test_split` to " +"split the dataset into a training and test set. You can see that we do " +"not yet import the :code:`flwr` package for federated learning. This will" +" be done later." +msgstr "" +"让我们创建一个名为 :code:`jax_training.py` 的新文件,其中包含传统(集中式)线性回归训练所需的所有组件。首先,需要导入 " +"JAX 包 :code:`jax` 和 :code:`jaxlib`。此外,我们还需要导入 :code:`sklearn`,因为我们使用 " +":code:`make_regression` 创建数据集,并使用 :code:`train_test_split` " +"将数据集拆分成训练集和测试集。您可以看到,我们还没有导入用于联邦学习的 :code:`flwr` 软件包,这将在稍后完成。" #: ../../source/example-jax-from-centralized-to-federated.rst:37 #: ../../source/tutorial-quickstart-jax.rst:43 msgid "" -"The :code:`load_data()` function loads the mentioned training and test sets." +"The :code:`load_data()` function loads the mentioned training and test " +"sets." msgstr ":code:`load_data()` 函数会加载上述训练集和测试集。" #: ../../source/example-jax-from-centralized-to-federated.rst:47 #: ../../source/tutorial-quickstart-jax.rst:53 msgid "" -"The model architecture (a very simple :code:`Linear Regression` model) is " -"defined in :code:`load_model()`." -msgstr "" -"模型结构(一个非常简单的 :code:`Linear Regression` 线性回归模型)在 :code:" -"`load_model()` 中定义。" +"The model architecture (a very simple :code:`Linear Regression` model) is" +" defined in :code:`load_model()`." +msgstr "模型结构(一个非常简单的 :code:`Linear Regression` 线性回归模型)在 :code:`load_model()` 中定义。" #: ../../source/example-jax-from-centralized-to-federated.rst:59 #: ../../source/tutorial-quickstart-jax.rst:65 msgid "" -"We now need to define the training (function :code:`train()`), which loops " -"over the training set and measures the loss (function :code:`loss_fn()`) for " -"each batch of training examples. The loss function is separate since JAX " -"takes derivatives with a :code:`grad()` function (defined in the :code:" -"`main()` function and called in :code:`train()`)." +"We now need to define the training (function :code:`train()`), which " +"loops over the training set and measures the loss (function " +":code:`loss_fn()`) for each batch of training examples. The loss function" +" is separate since JAX takes derivatives with a :code:`grad()` function " +"(defined in the :code:`main()` function and called in :code:`train()`)." msgstr "" -"现在,我们需要定义训练函数( :code:`train()`)。它循环遍历训练集,并计算每批" -"训练数据的损失值(函数 :code:`loss_fn()`)。由于 JAX 使用 :code:`grad()` 函数" -"提取导数(在 :code:`main()` 函数中定义,并在 :code:`train()` 中调用),因此损" -"失函数是独立的。" +"现在,我们需要定义训练函数( :code:`train()`)。它循环遍历训练集,并计算每批训练数据的损失值(函数 " +":code:`loss_fn()`)。由于 JAX 使用 :code:`grad()` 函数提取导数(在 :code:`main()` " +"函数中定义,并在 :code:`train()` 中调用),因此损失函数是独立的。" #: ../../source/example-jax-from-centralized-to-federated.rst:77 #: ../../source/tutorial-quickstart-jax.rst:83 msgid "" -"The evaluation of the model is defined in the function :code:`evaluation()`. " -"The function takes all test examples and measures the loss of the linear " -"regression model." -msgstr "" -"模型的评估在函数 :code:`evaluation()` 中定义。该函数获取所有测试数据,并计算" -"线性回归模型的损失值。" +"The evaluation of the model is defined in the function " +":code:`evaluation()`. The function takes all test examples and measures " +"the loss of the linear regression model." 
+msgstr "模型的评估在函数 :code:`evaluation()` 中定义。该函数获取所有测试数据,并计算线性回归模型的损失值。" #: ../../source/example-jax-from-centralized-to-federated.rst:88 #: ../../source/tutorial-quickstart-jax.rst:94 msgid "" "Having defined the data loading, model architecture, training, and " -"evaluation we can put everything together and train our model using JAX. As " -"already mentioned, the :code:`jax.grad()` function is defined in :code:" -"`main()` and passed to :code:`train()`." +"evaluation we can put everything together and train our model using JAX. " +"As already mentioned, the :code:`jax.grad()` function is defined in " +":code:`main()` and passed to :code:`train()`." msgstr "" -"在定义了数据加载、模型架构、训练和评估之后,我们就可以把这些放在一起,使用 " -"JAX 训练我们的模型了。如前所述,:code:`jax.grad()` 函数在 :code:`main()` 中定" -"义,并传递给 :code:`train()`。" +"在定义了数据加载、模型架构、训练和评估之后,我们就可以把这些放在一起,使用 JAX " +"训练我们的模型了。如前所述,:code:`jax.grad()` 函数在 :code:`main()` 中定义,并传递给 " +":code:`train()`。" #: ../../source/example-jax-from-centralized-to-federated.rst:105 #: ../../source/tutorial-quickstart-jax.rst:111 @@ -2860,12 +2616,10 @@ msgstr "现在您可以运行(集中式)JAX 线性回归工作了:" #: ../../source/example-jax-from-centralized-to-federated.rst:111 #: ../../source/tutorial-quickstart-jax.rst:117 msgid "" -"So far this should all look fairly familiar if you've used JAX before. Let's " -"take the next step and use what we've built to create a simple federated " -"learning system consisting of one server and two clients." -msgstr "" -"到目前为止,如果你以前使用过 JAX,就会对这一切感到很熟悉。下一步,让我们利用" -"已构建的代码创建一个简单的联邦学习系统(一个服务器和两个客户端)。" +"So far this should all look fairly familiar if you've used JAX before. " +"Let's take the next step and use what we've built to create a simple " +"federated learning system consisting of one server and two clients." +msgstr "到目前为止,如果你以前使用过 JAX,就会对这一切感到很熟悉。下一步,让我们利用已构建的代码创建一个简单的联邦学习系统(一个服务器和两个客户端)。" #: ../../source/example-jax-from-centralized-to-federated.rst:115 #: ../../source/tutorial-quickstart-jax.rst:121 @@ -2875,37 +2629,31 @@ msgstr "JAX 结合 Flower" #: ../../source/example-jax-from-centralized-to-federated.rst:117 #: ../../source/tutorial-quickstart-jax.rst:123 msgid "" -"The concept of federating an existing workload is always the same and easy " -"to understand. We have to start a *server* and then use the code in :code:" -"`jax_training.py` for the *clients* that are connected to the *server*. The " -"*server* sends model parameters to the clients. The *clients* run the " -"training and update the parameters. The updated parameters are sent back to " -"the *server*, which averages all received parameter updates. This describes " -"one round of the federated learning process, and we repeat this for multiple " -"rounds." -msgstr "" -"把现有工作联邦化的概念始终是相同的,也很容易理解。我们要启动一个*服务器*,然" -"后对连接到*服务器*的*客户端*运行 :code:`jax_training.py`中的代码。*服务器*向" -"客户端发送模型参数,*客户端*运行训练并更新参数。更新后的参数被发回*服务器*," -"然后服务器对所有收到的参数进行平均聚合。以上的描述构成了一轮联邦学习,我们将" -"重复进行多轮学习。" +"The concept of federating an existing workload is always the same and " +"easy to understand. We have to start a *server* and then use the code in " +":code:`jax_training.py` for the *clients* that are connected to the " +"*server*. The *server* sends model parameters to the clients. The " +"*clients* run the training and update the parameters. The updated " +"parameters are sent back to the *server*, which averages all received " +"parameter updates. This describes one round of the federated learning " +"process, and we repeat this for multiple rounds." 
+msgstr "" +"把现有工作联邦化的概念始终是相同的,也很容易理解。我们要启动一个*服务器*,然后对连接到*服务器*的*客户端*运行 " +":code:`jax_training.py`中的代码。*服务器*向客户端发送模型参数,*客户端*运行训练并更新参数。更新后的参数被发回*服务器*,然后服务器对所有收到的参数进行平均聚合。以上的描述构成了一轮联邦学习,我们将重复进行多轮学习。" #: ../../source/example-jax-from-centralized-to-federated.rst:123 -#: ../../source/example-mxnet-walk-through.rst:204 #: ../../source/example-pytorch-from-centralized-to-federated.rst:181 #: ../../source/tutorial-quickstart-jax.rst:129 msgid "" -"Our example consists of one *server* and two *clients*. Let's set up :code:" -"`server.py` first. The *server* needs to import the Flower package :code:" -"`flwr`. Next, we use the :code:`start_server` function to start a server and " -"tell it to perform three rounds of federated learning." +"Our example consists of one *server* and two *clients*. Let's set up " +":code:`server.py` first. The *server* needs to import the Flower package " +":code:`flwr`. Next, we use the :code:`start_server` function to start a " +"server and tell it to perform three rounds of federated learning." msgstr "" -"我们的示例包括一个*服务器*和两个*客户端*。让我们先设置 :code:`server.py`。*服" -"务器*需要导入 Flower 软件包 :code:`flwr`。接下来,我们使用 :code:" -"`start_server` 函数启动服务器,并让它执行三轮联邦学习。" +"我们的示例包括一个*服务器*和两个*客户端*。让我们先设置 :code:`server.py`。*服务器*需要导入 Flower 软件包 " +":code:`flwr`。接下来,我们使用 :code:`start_server` 函数启动服务器,并让它执行三轮联邦学习。" #: ../../source/example-jax-from-centralized-to-federated.rst:133 -#: ../../source/example-mxnet-walk-through.rst:214 #: ../../source/example-pytorch-from-centralized-to-federated.rst:191 #: ../../source/tutorial-quickstart-jax.rst:139 msgid "We can already start the *server*:" @@ -2914,47 +2662,47 @@ msgstr "我们已经可以启动*服务器*了:" #: ../../source/example-jax-from-centralized-to-federated.rst:139 #: ../../source/tutorial-quickstart-jax.rst:145 msgid "" -"Finally, we will define our *client* logic in :code:`client.py` and build " -"upon the previously defined JAX training in :code:`jax_training.py`. Our " -"*client* needs to import :code:`flwr`, but also :code:`jax` and :code:" -"`jaxlib` to update the parameters on our JAX model:" +"Finally, we will define our *client* logic in :code:`client.py` and build" +" upon the previously defined JAX training in :code:`jax_training.py`. Our" +" *client* needs to import :code:`flwr`, but also :code:`jax` and " +":code:`jaxlib` to update the parameters on our JAX model:" msgstr "" -"最后,我们将在 :code:`client.py` 中定义我们的 *client* 逻辑,并以之前在 :" -"code:`jax_training.py` 中定义的 JAX 训练为基础。我们的 *client* 需要导入 :" -"code:`flwr`,还需要导入 :code:`jax` 和 :code:`jaxlib` 以更新 JAX 模型的参数:" +"最后,我们将在 :code:`client.py` 中定义我们的 *client* 逻辑,并以之前在 " +":code:`jax_training.py` 中定义的 JAX 训练为基础。我们的 *client* 需要导入 " +":code:`flwr`,还需要导入 :code:`jax` 和 :code:`jaxlib` 以更新 JAX 模型的参数:" #: ../../source/example-jax-from-centralized-to-federated.rst:154 #: ../../source/tutorial-quickstart-jax.rst:160 msgid "" -"Implementing a Flower *client* basically means implementing a subclass of " -"either :code:`flwr.client.Client` or :code:`flwr.client.NumPyClient`. Our " -"implementation will be based on :code:`flwr.client.NumPyClient` and we'll " -"call it :code:`FlowerClient`. :code:`NumPyClient` is slightly easier to " -"implement than :code:`Client` if you use a framework with good NumPy " -"interoperability (like JAX) because it avoids some of the boilerplate that " -"would otherwise be necessary. 
:code:`FlowerClient` needs to implement four " -"methods, two methods for getting/setting model parameters, one method for " -"training the model, and one method for testing the model:" -msgstr "" -"实现一个 Flower *client*基本上意味着去实现一个 :code:`flwr.client.Client` " -"或 :code:`flwr.client.NumPyClient` 的子类。我们的代码实现将基于 :code:`flwr." -"client.NumPyClient`,并将其命名为 :code:`FlowerClient`。如果使用具有良好 " -"NumPy 互操作性的框架(如 JAX),:code:`NumPyClient` 比 :code:`Client`更容易实" -"现,因为它避免了一些不必要的操作。:code:`FlowerClient` 需要实现四个方法,两个" -"用于获取/设置模型参数,一个用于训练模型,一个用于测试模型:" +"Implementing a Flower *client* basically means implementing a subclass of" +" either :code:`flwr.client.Client` or :code:`flwr.client.NumPyClient`. " +"Our implementation will be based on :code:`flwr.client.NumPyClient` and " +"we'll call it :code:`FlowerClient`. :code:`NumPyClient` is slightly " +"easier to implement than :code:`Client` if you use a framework with good " +"NumPy interoperability (like JAX) because it avoids some of the " +"boilerplate that would otherwise be necessary. :code:`FlowerClient` needs" +" to implement four methods, two methods for getting/setting model " +"parameters, one method for training the model, and one method for testing" +" the model:" +msgstr "" +"实现一个 Flower *client*基本上意味着去实现一个 :code:`flwr.client.Client` 或 " +":code:`flwr.client.NumPyClient` 的子类。我们的代码实现将基于 " +":code:`flwr.client.NumPyClient`,并将其命名为 :code:`FlowerClient`。如果使用具有良好 " +"NumPy 互操作性的框架(如 JAX),:code:`NumPyClient` 比 " +":code:`Client`更容易实现,因为它避免了一些不必要的操作。:code:`FlowerClient` " +"需要实现四个方法,两个用于获取/设置模型参数,一个用于训练模型,一个用于测试模型:" #: ../../source/example-jax-from-centralized-to-federated.rst:161 -#: ../../source/example-mxnet-walk-through.rst:242 #: ../../source/tutorial-quickstart-jax.rst:167 msgid ":code:`set_parameters (optional)`" msgstr ":code:`set_parameters (可选)`" #: ../../source/example-jax-from-centralized-to-federated.rst:160 -#: ../../source/example-mxnet-walk-through.rst:241 #: ../../source/example-pytorch-from-centralized-to-federated.rst:219 #: ../../source/tutorial-quickstart-jax.rst:166 msgid "" -"set the model parameters on the local model that are received from the server" +"set the model parameters on the local model that are received from the " +"server" msgstr "在本地模型上设置从服务器接收的模型参数" #: ../../source/example-jax-from-centralized-to-federated.rst:161 @@ -2963,62 +2711,51 @@ msgid "transform parameters to NumPy :code:`ndarray`'s" msgstr "将参数转换为 NumPy :code:`ndarray`格式" #: ../../source/example-jax-from-centralized-to-federated.rst:162 -#: ../../source/example-mxnet-walk-through.rst:243 #: ../../source/example-pytorch-from-centralized-to-federated.rst:220 #: ../../source/tutorial-quickstart-jax.rst:168 msgid "" -"loop over the list of model parameters received as NumPy :code:`ndarray`'s " -"(think list of neural network layers)" -msgstr "" -"循环遍历以 NumPy :code:`ndarray` 形式接收的模型参数列表(可以看作神经网络的列" -"表)" +"loop over the list of model parameters received as NumPy " +":code:`ndarray`'s (think list of neural network layers)" +msgstr "循环遍历以 NumPy :code:`ndarray` 形式接收的模型参数列表(可以看作神经网络的列表)" #: ../../source/example-jax-from-centralized-to-federated.rst:163 -#: ../../source/example-mxnet-walk-through.rst:244 #: ../../source/example-pytorch-from-centralized-to-federated.rst:221 #: ../../source/tutorial-quickstart-jax.rst:169 -#: ../../source/tutorial-quickstart-mxnet.rst:171 #: ../../source/tutorial-quickstart-pytorch.rst:155 -#: ../../source/tutorial-quickstart-scikitlearn.rst:108 +#: ../../source/tutorial-quickstart-scikitlearn.rst:118 msgid ":code:`get_parameters`" msgstr 
":code:`get_parameters`" #: ../../source/example-jax-from-centralized-to-federated.rst:164 -#: ../../source/example-mxnet-walk-through.rst:245 #: ../../source/example-pytorch-from-centralized-to-federated.rst:222 #: ../../source/tutorial-quickstart-jax.rst:170 msgid "" -"get the model parameters and return them as a list of NumPy :code:" -"`ndarray`'s (which is what :code:`flwr.client.NumPyClient` expects)" +"get the model parameters and return them as a list of NumPy " +":code:`ndarray`'s (which is what :code:`flwr.client.NumPyClient` expects)" msgstr "" -"获取模型参数,并以 NumPy :code:`ndarray`的列表形式返回(这正是 :code:`flwr." -"client.NumPyClient`所匹配的格式)" +"获取模型参数,并以 NumPy :code:`ndarray`的列表形式返回(这正是 " +":code:`flwr.client.NumPyClient`所匹配的格式)" #: ../../source/example-jax-from-centralized-to-federated.rst:167 -#: ../../source/example-mxnet-walk-through.rst:248 #: ../../source/example-pytorch-from-centralized-to-federated.rst:225 #: ../../source/tutorial-quickstart-jax.rst:173 -#: ../../source/tutorial-quickstart-mxnet.rst:177 #: ../../source/tutorial-quickstart-pytorch.rst:161 -#: ../../source/tutorial-quickstart-scikitlearn.rst:115 +#: ../../source/tutorial-quickstart-scikitlearn.rst:125 msgid ":code:`fit`" msgstr ":code:`fit`" #: ../../source/example-jax-from-centralized-to-federated.rst:166 #: ../../source/example-jax-from-centralized-to-federated.rst:170 -#: ../../source/example-mxnet-walk-through.rst:247 -#: ../../source/example-mxnet-walk-through.rst:251 #: ../../source/example-pytorch-from-centralized-to-federated.rst:224 #: ../../source/example-pytorch-from-centralized-to-federated.rst:228 #: ../../source/tutorial-quickstart-jax.rst:172 #: ../../source/tutorial-quickstart-jax.rst:176 msgid "" -"update the parameters of the local model with the parameters received from " -"the server" +"update the parameters of the local model with the parameters received " +"from the server" msgstr "用从服务器接收到的参数更新本地模型的参数" #: ../../source/example-jax-from-centralized-to-federated.rst:167 -#: ../../source/example-mxnet-walk-through.rst:248 #: ../../source/example-pytorch-from-centralized-to-federated.rst:225 #: ../../source/tutorial-quickstart-jax.rst:173 msgid "train the model on the local training set" @@ -3030,17 +2767,14 @@ msgid "get the updated local model parameters and return them to the server" msgstr "获取更新后的本地模型参数并返回服务器" #: ../../source/example-jax-from-centralized-to-federated.rst:172 -#: ../../source/example-mxnet-walk-through.rst:253 #: ../../source/example-pytorch-from-centralized-to-federated.rst:230 #: ../../source/tutorial-quickstart-jax.rst:178 -#: ../../source/tutorial-quickstart-mxnet.rst:180 #: ../../source/tutorial-quickstart-pytorch.rst:164 -#: ../../source/tutorial-quickstart-scikitlearn.rst:118 +#: ../../source/tutorial-quickstart-scikitlearn.rst:128 msgid ":code:`evaluate`" msgstr ":code:`evaluate`" #: ../../source/example-jax-from-centralized-to-federated.rst:171 -#: ../../source/example-mxnet-walk-through.rst:252 #: ../../source/example-pytorch-from-centralized-to-federated.rst:229 #: ../../source/tutorial-quickstart-jax.rst:177 msgid "evaluate the updated model on the local test set" @@ -3054,28 +2788,28 @@ msgstr "向服务器返回本地损失值" #: ../../source/example-jax-from-centralized-to-federated.rst:174 #: ../../source/tutorial-quickstart-jax.rst:180 msgid "" -"The challenging part is to transform the JAX model parameters from :code:" -"`DeviceArray` to :code:`NumPy ndarray` to make them compatible with " -"`NumPyClient`." 
+"The challenging part is to transform the JAX model parameters from " +":code:`DeviceArray` to :code:`NumPy ndarray` to make them compatible with" +" `NumPyClient`." msgstr "" -"具有挑战性的部分是将 JAX 模型参数从 :code:`DeviceArray` 转换为 :code:`NumPy " -"ndarray`,使其与 `NumPyClient` 兼容。" +"具有挑战性的部分是将 JAX 模型参数从 :code:`DeviceArray` 转换为 :code:`NumPy ndarray`,使其与 " +"`NumPyClient` 兼容。" #: ../../source/example-jax-from-centralized-to-federated.rst:176 #: ../../source/tutorial-quickstart-jax.rst:182 msgid "" -"The two :code:`NumPyClient` methods :code:`fit` and :code:`evaluate` make " -"use of the functions :code:`train()` and :code:`evaluate()` previously " +"The two :code:`NumPyClient` methods :code:`fit` and :code:`evaluate` make" +" use of the functions :code:`train()` and :code:`evaluate()` previously " "defined in :code:`jax_training.py`. So what we really do here is we tell " -"Flower through our :code:`NumPyClient` subclass which of our already defined " -"functions to call for training and evaluation. We included type annotations " -"to give you a better understanding of the data types that get passed around." +"Flower through our :code:`NumPyClient` subclass which of our already " +"defined functions to call for training and evaluation. We included type " +"annotations to give you a better understanding of the data types that get" +" passed around." msgstr "" -"这两个 :code:`NumPyClient` 方法 :code:`fit` 和 :code:`evaluate` 使用了之前" -"在 :code:`jax_training.py` 中定义的函数 :code:`train()` 和 :code:" -"`evaluate()`。因此,我们在这里要做的就是通过 :code:`NumPyClient` 子类告知 " -"Flower 在训练和评估时要调用哪些已定义的函数。我们加入了类型注解,以便让您更好" -"地理解传递的数据类型。" +"这两个 :code:`NumPyClient` 方法 :code:`fit` 和 :code:`evaluate` 使用了之前在 " +":code:`jax_training.py` 中定义的函数 :code:`train()` 和 " +":code:`evaluate()`。因此,我们在这里要做的就是通过 :code:`NumPyClient` 子类告知 Flower " +"在训练和评估时要调用哪些已定义的函数。我们加入了类型注解,以便让您更好地理解传递的数据类型。" #: ../../source/example-jax-from-centralized-to-federated.rst:245 #: ../../source/tutorial-quickstart-jax.rst:251 @@ -3083,7 +2817,6 @@ msgid "Having defined the federation process, we can run it." msgstr "定义了联邦进程后,我们就可以运行它了。" #: ../../source/example-jax-from-centralized-to-federated.rst:268 -#: ../../source/example-mxnet-walk-through.rst:347 #: ../../source/example-pytorch-from-centralized-to-federated.rst:301 #: ../../source/tutorial-quickstart-jax.rst:274 msgid "And that's it. You can now open two additional terminal windows and run" @@ -3092,286 +2825,30 @@ msgstr "就是这样,现在你可以打开另外两个终端窗口,然后运 #: ../../source/example-jax-from-centralized-to-federated.rst:274 #: ../../source/tutorial-quickstart-jax.rst:280 msgid "" -"in each window (make sure that the server is still running before you do so) " -"and see your JAX project run federated learning across two clients. " +"in each window (make sure that the server is still running before you do " +"so) and see your JAX project run federated learning across two clients. " "Congratulations!" -msgstr "" -"确保服务器仍在运行,然后在每个客户端窗口就能看到你的 JAX 项目在两个客户端上运" -"行联邦学习了。祝贺!" +msgstr "确保服务器仍在运行,然后在每个客户端窗口就能看到你的 JAX 项目在两个客户端上运行联邦学习了。祝贺!" #: ../../source/example-jax-from-centralized-to-federated.rst:279 #: ../../source/tutorial-quickstart-jax.rst:285 msgid "" "The source code of this example was improved over time and can be found " -"here: `Quickstart JAX `_. Our example is somewhat over-simplified because both " +"here: `Quickstart JAX `_. Our example is somewhat over-simplified because both " "clients load the same dataset." 
msgstr "" -"此示例的源代码经过长期改进,可在此处找到: `Quickstart JAX `_。我们的示例有些过于简" -"单,因为两个客户端都加载了相同的数据集。" +"此示例的源代码经过长期改进,可在此处找到: `Quickstart JAX " +"`_。我们的示例有些过于简单,因为两个客户端都加载了相同的数据集。" #: ../../source/example-jax-from-centralized-to-federated.rst:282 #: ../../source/tutorial-quickstart-jax.rst:288 msgid "" -"You're now prepared to explore this topic further. How about using a more " -"sophisticated model or using a different dataset? How about adding more " +"You're now prepared to explore this topic further. How about using a more" +" sophisticated model or using a different dataset? How about adding more " "clients?" -msgstr "" -"现在,您已准备好进行更深一步探索了。例如使用更复杂的模型或使用不同的数据集会" -"如何?增加更多客户端会如何?" - -#: ../../source/example-mxnet-walk-through.rst:2 -msgid "Example: MXNet - Run MXNet Federated" -msgstr "示例: MXNet - 运行联邦式 MXNet" - -#: ../../source/example-mxnet-walk-through.rst:4 -msgid "" -"This tutorial will show you how to use Flower to build a federated version " -"of an existing MXNet workload. We are using MXNet to train a Sequential " -"model on the MNIST dataset. We will structure the example similar to our " -"`PyTorch - From Centralized To Federated `_ walkthrough. " -"MXNet and PyTorch are very similar and a very good comparison between MXNet " -"and PyTorch is given `here `_. First, we " -"build a centralized training approach based on the `Handwritten Digit " -"Recognition `_ tutorial. Then, we build upon " -"the centralized training code to run the training in a federated fashion." -msgstr "" -"本教程将向您展示如何使用 Flower 构建现有 MXNet 的联学习版本。我们将使用 " -"MXNet 在 MNIST 数据集上训练一个序列模型。另外,我们将采用与我们的 `PyTorch - " -"从集中式到联邦式 `_ 教程类似的示例结构。MXNet 和 PyTorch 非常相" -"似,参考 `此处 `_对 MXNet 和 PyTorch 进行了" -"详细的比较。首先,我们根据 `手写数字识别 ` 教" -"程 建立了集中式训练方法。然后,我们在集中式训练代码的基础上,以联邦方式运行训" -"练。" - -#: ../../source/example-mxnet-walk-through.rst:10 -msgid "" -"Before we start setting up our MXNet example, we install the :code:`mxnet` " -"and :code:`flwr` packages:" -msgstr "" -"在开始设置 MXNet 示例之前,我们先安装 :code:`mxnet` 和 :code:`flwr` 软件包:" - -#: ../../source/example-mxnet-walk-through.rst:19 -msgid "MNIST Training with MXNet" -msgstr "使用 MXNet 进行 MNIST 训练" - -#: ../../source/example-mxnet-walk-through.rst:21 -msgid "" -"We begin with a brief description of the centralized training code based on " -"a :code:`Sequential` model. If you want a more in-depth explanation of " -"what's going on then have a look at the official `MXNet tutorial `_." -msgstr "" -"首先,我们将简要介绍基于 :code:`Sequential` 模型的集中式训练代码。如果您想获" -"得更深入的解释,请参阅官方的 `MXNet教程 `_。" - -#: ../../source/example-mxnet-walk-through.rst:24 -msgid "" -"Let's create a new file called:code:`mxnet_mnist.py` with all the components " -"required for a traditional (centralized) MNIST training. First, the MXNet " -"package :code:`mxnet` needs to be imported. You can see that we do not yet " -"import the :code:`flwr` package for federated learning. This will be done " -"later." -msgstr "" -"让我们创建一个名为:code:`mxnet_mnist.py`的新文件,其中包含传统(集中式)" -"MNIST 训练所需的所有组件。首先,需要导入 MXNet 包 :code:`mxnet`。您可以看到," -"我们尚未导入用于联合学习的 :code:`flwr` 包,这将在稍后完成。" - -#: ../../source/example-mxnet-walk-through.rst:42 -msgid "" -"The :code:`load_data()` function loads the MNIST training and test sets." -msgstr ":code:`load_data()` 函数加载 MNIST 训练集和测试集。" - -#: ../../source/example-mxnet-walk-through.rst:57 -msgid "" -"As already mentioned, we will use the MNIST dataset for this machine " -"learning workload. The model architecture (a very simple :code:`Sequential` " -"model) is defined in :code:`model()`." 
-msgstr "" -"如前所述,我们将使用 MNIST 数据集进行机器学习。模型架构(一个非常简单的 :" -"code:`Sequential` 模型)在 :code:`model()` 中定义。" - -#: ../../source/example-mxnet-walk-through.rst:70 -msgid "" -"We now need to define the training (function :code:`train()`) which loops " -"over the training set and measures the loss for each batch of training " -"examples." -msgstr "" -"现在,我们需要定义训练函数( :code:`train()`),该函数在训练集上循环训练,并" -"计算每批训练示例的损失值。" - -#: ../../source/example-mxnet-walk-through.rst:123 -msgid "" -"The evaluation of the model is defined in function :code:`test()`. The " -"function loops over all test samples and measures the loss and accuracy of " -"the model based on the test dataset." -msgstr "" -"模型的评估在函数 :code:`test()` 中定义。该函数循环遍历所有测试样本,并根据测" -"试数据集计算模型的损失值和准确度。" - -#: ../../source/example-mxnet-walk-through.rst:158 -msgid "" -"Having defined the data loading, model architecture, training, and " -"evaluation we can put everything together and train our model on MNIST. Note " -"that the GPU/CPU device for the training and testing is defined within the :" -"code:`ctx` (context)." -msgstr "" -"在定义了数据加载、模型架构、训练和评估之后,我们就可以把所有放在一起,在 " -"MNIST 上训练我们的模型了。请注意,用于训练和测试的 GPU/CPU 设备是在 :code:" -"`ctx`中定义的。" - -#: ../../source/example-mxnet-walk-through.rst:184 -msgid "You can now run your (centralized) MXNet machine learning workload:" -msgstr "现在,您可以运行(集中式)MXNet 机器学习工作:" - -#: ../../source/example-mxnet-walk-through.rst:190 -msgid "" -"So far this should all look fairly familiar if you've used MXNet (or even " -"PyTorch) before. Let's take the next step and use what we've built to create " -"a simple federated learning system consisting of one server and two clients." -msgstr "" -"到目前为止,如果你以前使用过 MXNet(甚至 PyTorch),这一切看起来应该相当熟" -"悉。下一步,让我们利用已构建的内容创建一个简单联邦学习系统(由一个服务器和两" -"个客户端组成)。" - -#: ../../source/example-mxnet-walk-through.rst:194 -msgid "MXNet meets Flower" -msgstr "MXNet 结合 Flower" - -#: ../../source/example-mxnet-walk-through.rst:196 -msgid "" -"So far, it was not easily possible to use MXNet workloads for federated " -"learning because federated learning is not supported in MXNet. Since Flower " -"is fully agnostic towards the underlying machine learning framework, it can " -"be used to federated arbitrary machine learning workloads. This section will " -"show you how Flower can be used to federate our centralized MXNet workload." -msgstr "" -"由于 MXNet 目前不支持联邦学习,因此无法轻松地直接将 MXNet 用于联邦学习之中。" -"Flower 与底层机器学习框架完全无关,因此它可用于任意联邦式机器学习工作。本节将" -"向你展示如何使用 Flower 将我们的集中式 MXNet 改为联邦式训练。" - -#: ../../source/example-mxnet-walk-through.rst:198 -msgid "" -"The concept to federate an existing workload is always the same and easy to " -"understand. We have to start a *server* and then use the code in :code:" -"`mxnet_mnist.py` for the *clients* that are connected to the *server*. The " -"*server* sends model parameters to the clients. The *clients* run the " -"training and update the parameters. The updated parameters are sent back to " -"the *server* which averages all received parameter updates. This describes " -"one round of the federated learning process and we repeat this for multiple " -"rounds." -msgstr "" -"将现有模型框架联邦化的概念始终是相同的,也很容易理解。我们必须启动一个*服务器" -"*,然后对连接到*服务器*的*客户端*使用 :code:`mxnet_mnist.py`中的代码。*服务器" -"*向客户端发送模型参数,然后*客户端*运行训练并更新参数。更新后的参数被发回*服" -"务器*,然后会对所有收到的参数更新进行平均聚合。以上描述的是一轮联邦学习过程," -"我们将重复进行多轮学习。" - -#: ../../source/example-mxnet-walk-through.rst:220 -msgid "" -"Finally, we will define our *client* logic in :code:`client.py` and build " -"upon the previously defined MXNet training in :code:`mxnet_mnist.py`. 
Our " -"*client* needs to import :code:`flwr`, but also :code:`mxnet` to update the " -"parameters on our MXNet model:" -msgstr "" -"最后,我们将在 :code:`client.py` 中定义我们的 *client* 逻辑,并以之前在 :" -"code:`mxnet_mnist.py` 中定义的 MXNet 训练为基础。我们的 *client* 不仅需要导" -"入 :code:`flwr`,还需要导入 :code:`mxnet`,以更新 MXNet 模型的参数:" - -#: ../../source/example-mxnet-walk-through.rst:235 -msgid "" -"Implementing a Flower *client* basically means implementing a subclass of " -"either :code:`flwr.client.Client` or :code:`flwr.client.NumPyClient`. Our " -"implementation will be based on :code:`flwr.client.NumPyClient` and we'll " -"call it :code:`MNISTClient`. :code:`NumPyClient` is slightly easier to " -"implement than :code:`Client` if you use a framework with good NumPy " -"interoperability (like PyTorch or MXNet) because it avoids some of the " -"boilerplate that would otherwise be necessary. :code:`MNISTClient` needs to " -"implement four methods, two methods for getting/setting model parameters, " -"one method for training the model, and one method for testing the model:" -msgstr "" -"实现 Flower *client*基本上意味着要实现 :code:`flwr.client.Client` 或 :code:" -"`flwr.client.NumPyClient` 的子类。我们的代码实现将基于 :code:`flwr.client." -"NumPyClient`,并将其命名为 :code:`MNISTClient`。如果使用具有良好 NumPy 互操作" -"性的框架(如 PyTorch 或 MXNet),:code:`NumPyClient` 比 :code:`Client`更容易" -"实现,因为它避免了一些不必要的操作。:code:`MNISTClient` 需要实现四个方法,两" -"个用于获取/设置模型参数,一个用于训练模型,一个用于测试模型:" - -#: ../../source/example-mxnet-walk-through.rst:242 -msgid "transform MXNet :code:`NDArray`'s to NumPy :code:`ndarray`'s" -msgstr "将 MXNet :code:`NDArray` 转换为 NumPy :code:`ndarray`" - -#: ../../source/example-mxnet-walk-through.rst:249 -#: ../../source/example-pytorch-from-centralized-to-federated.rst:226 -msgid "get the updated local model weights and return them to the server" -msgstr "获取更新后的本地模型参数并发送回服务器" - -#: ../../source/example-mxnet-walk-through.rst:253 -#: ../../source/example-pytorch-from-centralized-to-federated.rst:230 -msgid "return the local loss and accuracy to the server" -msgstr "向服务器返回本地损失值和精确度" - -#: ../../source/example-mxnet-walk-through.rst:255 -msgid "" -"The challenging part is to transform the MXNet parameters from :code:" -"`NDArray` to :code:`NumPy Arrays` to make it readable for Flower." -msgstr "" -"具有挑战性的部分是将 MXNet 参数从 :code:`NDArray` 转换为 :code:`NumPy " -"Arrays` 以便 Flower 可以读取。" - -#: ../../source/example-mxnet-walk-through.rst:257 -msgid "" -"The two :code:`NumPyClient` methods :code:`fit` and :code:`evaluate` make " -"use of the functions :code:`train()` and :code:`test()` previously defined " -"in :code:`mxnet_mnist.py`. So what we really do here is we tell Flower " -"through our :code:`NumPyClient` subclass which of our already defined " -"functions to call for training and evaluation. We included type annotations " -"to give you a better understanding of the data types that get passed around." -msgstr "" -"这两个 :code:`NumPyClient` 方法 :code:`fit` 和 :code:`evaluate` 使用了之前" -"在 :code:`mxnet_mnist.py` 中定义的函数 :code:`train()` 和 :code:`test()`。因" -"此,我们要做的就是通过 :code:`NumPyClient` 子类告知 Flower 在训练和评估时要调" -"用哪些已定义的函数。我们加入了类型注解,以便让您更好地理解传递的数据类型。" - -#: ../../source/example-mxnet-walk-through.rst:319 -msgid "" -"Having defined data loading, model architecture, training, and evaluation we " -"can put everything together and train our :code:`Sequential` model on MNIST." 
-msgstr "" -"在定义了数据加载、模型架构、训练和评估之后,我们就可以将所有内容整合在一起," -"在 MNIST 上训练我们的 :code:`Sequential` 模型。" - -#: ../../source/example-mxnet-walk-through.rst:353 -msgid "" -"in each window (make sure that the server is still running before you do so) " -"and see your MXNet project run federated learning across two clients. " -"Congratulations!" -msgstr "" -"确保服务器仍在运行后,然后就能在每个窗口中看到 MXNet 项目在两个客户端上运行联" -"邦学习了。祝贺!" - -#: ../../source/example-mxnet-walk-through.rst:358 -msgid "" -"The full source code for this example: `MXNet: From Centralized To Federated " -"(Code) `_. Our example is of course somewhat over-" -"simplified because both clients load the exact same dataset, which isn't " -"realistic. You're now prepared to explore this topic further. How about " -"using a CNN or using a different dataset? How about adding more clients?" -msgstr "" -"此示例的完整源代码在:\"MXNet: From Centralized To Federated (Code) " -"`_。当然,我们的示例有些过于简单,因为两个客户端都加载了完全相同的" -"数据集,这并不真实。现在您已经准备好进一步探讨了。使用 CNN 或使用不同的数据集" -"会如何?添加更多客户端会如何?" +msgstr "现在,您已准备好进行更深一步探索了。例如使用更复杂的模型或使用不同的数据集会如何?增加更多客户端会如何?" #: ../../source/example-pytorch-from-centralized-to-federated.rst:2 msgid "Example: PyTorch - From Centralized To Federated" @@ -3379,44 +2856,41 @@ msgstr "实例: PyTorch - 从集中式到联邦式" #: ../../source/example-pytorch-from-centralized-to-federated.rst:4 msgid "" -"This tutorial will show you how to use Flower to build a federated version " -"of an existing machine learning workload. We are using PyTorch to train a " -"Convolutional Neural Network on the CIFAR-10 dataset. First, we introduce " -"this machine learning task with a centralized training approach based on the " -"`Deep Learning with PyTorch `_ tutorial. Then, we build upon the centralized " -"training code to run the training in a federated fashion." +"This tutorial will show you how to use Flower to build a federated " +"version of an existing machine learning workload. We are using PyTorch to" +" train a Convolutional Neural Network on the CIFAR-10 dataset. First, we " +"introduce this machine learning task with a centralized training approach" +" based on the `Deep Learning with PyTorch " +"`_ " +"tutorial. Then, we build upon the centralized training code to run the " +"training in a federated fashion." msgstr "" -"本教程将向您展示如何使用 Flower 构建现有机器学习工作的联邦版本。我们使用 " -"PyTorch 在 CIFAR-10 数据集上训练一个卷积神经网络。首先,我们基于 \"Deep " -"Learning with PyTorch `_\"教程,采用集中式训练方法介绍了这项机器学习任务。然" -"后,我们在集中式训练代码的基础上以联邦方式运行训练。" +"本教程将向您展示如何使用 Flower 构建现有机器学习工作的联邦版本。我们使用 PyTorch 在 CIFAR-10 " +"数据集上训练一个卷积神经网络。首先,我们基于 \"Deep Learning with PyTorch " +"`_\"教程,采用集中式训练方法介绍了这项机器学习任务。然后,我们在集中式训练代码的基础上以联邦方式运行训练。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:12 msgid "" -"We begin with a brief description of the centralized CNN training code. If " -"you want a more in-depth explanation of what's going on then have a look at " -"the official `PyTorch tutorial `_." +"We begin with a brief description of the centralized CNN training code. " +"If you want a more in-depth explanation of what's going on then have a " +"look at the official `PyTorch tutorial " +"`_." msgstr "" -"我们首先简要介绍一下集中式 CNN 训练代码。如果您想获得更深入的解释,请参阅 " -"PyTorch 官方教程`PyTorch tutorial `_。" +"我们首先简要介绍一下集中式 CNN 训练代码。如果您想获得更深入的解释,请参阅 PyTorch 官方教程`PyTorch tutorial " +"`_。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:15 msgid "" "Let's create a new file called :code:`cifar.py` with all the components " -"required for a traditional (centralized) training on CIFAR-10. 
First, all " -"required packages (such as :code:`torch` and :code:`torchvision`) need to be " -"imported. You can see that we do not import any package for federated " -"learning. You can keep all these imports as they are even when we add the " -"federated learning components at a later point." +"required for a traditional (centralized) training on CIFAR-10. First, all" +" required packages (such as :code:`torch` and :code:`torchvision`) need " +"to be imported. You can see that we do not import any package for " +"federated learning. You can keep all these imports as they are even when " +"we add the federated learning components at a later point." msgstr "" -"让我们创建一个名为 :code:`cifar.py` 的新文件,其中包含 CIFAR-10 传统(集中)" -"培训所需的所有组件。首先,需要导入所有必需的软件包(如 :code:`torch` 和 :" -"code:`torchvision`)。您可以看到,我们没有导入任何用于联邦学习的软件包。即使" -"在以后添加联邦学习组件时,也可以保留所有这些导入。" +"让我们创建一个名为 :code:`cifar.py` 的新文件,其中包含 CIFAR-10 " +"传统(集中)培训所需的所有组件。首先,需要导入所有必需的软件包(如 :code:`torch` 和 " +":code:`torchvision`)。您可以看到,我们没有导入任何用于联邦学习的软件包。即使在以后添加联邦学习组件时,也可以保留所有这些导入。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:32 msgid "" @@ -3424,178 +2898,166 @@ msgid "" "learning workload. The model architecture (a very simple Convolutional " "Neural Network) is defined in :code:`class Net()`." msgstr "" -"如前所述,我们将使用 CIFAR-10 数据集进行机器学习。模型架构(一个非常简单的卷" -"积神经网络)在 :code:`class Net()` 中定义。" +"如前所述,我们将使用 CIFAR-10 数据集进行机器学习。模型架构(一个非常简单的卷积神经网络)在 :code:`class Net()` " +"中定义。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:56 msgid "" -"The :code:`load_data()` function loads the CIFAR-10 training and test sets. " -"The :code:`transform` normalized the data after loading." +"The :code:`load_data()` function loads the CIFAR-10 training and test " +"sets. The :code:`transform` normalized the data after loading." msgstr "" -":code:`load_data()` 函数加载 CIFAR-10 训练集和测试集。加载数据后,:code:" -"`transform`函数对数据进行了归一化处理。" +":code:`load_data()` 函数加载 CIFAR-10 " +"训练集和测试集。加载数据后,:code:`transform`函数对数据进行了归一化处理。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:74 msgid "" -"We now need to define the training (function :code:`train()`) which loops " -"over the training set, measures the loss, backpropagates it, and then takes " -"one optimizer step for each batch of training examples." -msgstr "" -"现在,我们需要定义训练函数(:code:`train()`),该函数在训练集上循环训练,计算" -"损失值并反向传播,然后为每批训练数据在优化器上执行一个优化步骤。" +"We now need to define the training (function :code:`train()`) which loops" +" over the training set, measures the loss, backpropagates it, and then " +"takes one optimizer step for each batch of training examples." +msgstr "现在,我们需要定义训练函数(:code:`train()`),该函数在训练集上循环训练,计算损失值并反向传播,然后为每批训练数据在优化器上执行一个优化步骤。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:76 msgid "" -"The evaluation of the model is defined in the function :code:`test()`. The " -"function loops over all test samples and measures the loss of the model " -"based on the test dataset." -msgstr "" -"模型的评估在函数 :code:`test()` 中定义。该函数循环遍历所有测试样本,并计算测" -"试数据集的模型损失值。" +"The evaluation of the model is defined in the function :code:`test()`. " +"The function loops over all test samples and measures the loss of the " +"model based on the test dataset." +msgstr "模型的评估在函数 :code:`test()` 中定义。该函数循环遍历所有测试样本,并计算测试数据集的模型损失值。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:136 msgid "" "Having defined the data loading, model architecture, training, and " "evaluation we can put everything together and train our CNN on CIFAR-10." 
-msgstr "" -"在确定了数据加载、模型架构、训练和评估之后,我们就可以将所有整合在一起,在 " -"CIFAR-10 上训练我们的 CNN。" +msgstr "在确定了数据加载、模型架构、训练和评估之后,我们就可以将所有整合在一起,在 CIFAR-10 上训练我们的 CNN。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:163 msgid "" -"So far, this should all look fairly familiar if you've used PyTorch before. " -"Let's take the next step and use what we've built to create a simple " -"federated learning system consisting of one server and two clients." +"So far, this should all look fairly familiar if you've used PyTorch " +"before. Let's take the next step and use what we've built to create a " +"simple federated learning system consisting of one server and two " +"clients." msgstr "" -"到目前为止,如果你以前用过 PyTorch,这一切看起来应该相当熟悉。让我们进行下一" -"步,利用我们所构建的内容创建一个简单联邦学习系统(由一个服务器和两个客户端组" -"成)。" +"到目前为止,如果你以前用过 " +"PyTorch,这一切看起来应该相当熟悉。让我们进行下一步,利用我们所构建的内容创建一个简单联邦学习系统(由一个服务器和两个客户端组成)。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:169 msgid "" -"The simple machine learning project discussed in the previous section trains " -"the model on a single dataset (CIFAR-10), we call this centralized learning. " -"This concept of centralized learning, as shown in the previous section, is " -"probably known to most of you, and many of you have used it previously. " -"Normally, if you'd want to run machine learning workloads in a federated " -"fashion, then you'd have to change most of your code and set everything up " -"from scratch. This can be a considerable effort." -msgstr "" -"上一节讨论的简单机器学习项目在单一数据集(CIFAR-10)上训练模型,我们称之为集" -"中学习。如上一节所示,集中学习的概念可能为大多数人所熟知,而且很多人以前都使" -"用过。通常情况下,如果要以联邦方式运行机器学习工作,就必须更改大部分代码,并" -"从头开始设置一切。这可能是一个相当大的工作量。" +"The simple machine learning project discussed in the previous section " +"trains the model on a single dataset (CIFAR-10), we call this centralized" +" learning. This concept of centralized learning, as shown in the previous" +" section, is probably known to most of you, and many of you have used it " +"previously. Normally, if you'd want to run machine learning workloads in " +"a federated fashion, then you'd have to change most of your code and set " +"everything up from scratch. This can be a considerable effort." +msgstr "上一节讨论的简单机器学习项目在单一数据集(CIFAR-10)上训练模型,我们称之为集中学习。如上一节所示,集中学习的概念可能为大多数人所熟知,而且很多人以前都使用过。通常情况下,如果要以联邦方式运行机器学习工作,就必须更改大部分代码,并从头开始设置一切。这可能是一个相当大的工作量。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:173 msgid "" -"However, with Flower you can evolve your pre-existing code into a federated " -"learning setup without the need for a major rewrite." -msgstr "" -"不过,有了 Flower,您可以轻松地将已有的代码转变成联邦学习的模式,无需进行大量" -"重写。" +"However, with Flower you can evolve your pre-existing code into a " +"federated learning setup without the need for a major rewrite." +msgstr "不过,有了 Flower,您可以轻松地将已有的代码转变成联邦学习的模式,无需进行大量重写。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:175 msgid "" -"The concept is easy to understand. We have to start a *server* and then use " -"the code in :code:`cifar.py` for the *clients* that are connected to the " -"*server*. The *server* sends model parameters to the clients. The *clients* " -"run the training and update the parameters. The updated parameters are sent " -"back to the *server* which averages all received parameter updates. This " -"describes one round of the federated learning process and we repeat this for " -"multiple rounds." +"The concept is easy to understand. We have to start a *server* and then " +"use the code in :code:`cifar.py` for the *clients* that are connected to " +"the *server*. 
The *server* sends model parameters to the clients. The " +"*clients* run the training and update the parameters. The updated " +"parameters are sent back to the *server* which averages all received " +"parameter updates. This describes one round of the federated learning " +"process and we repeat this for multiple rounds." msgstr "" -"这个概念很容易理解。我们必须启动一个*服务器*,然后对连接到*服务器*的*客户端*" -"使用 :code:`cifar.py`中的代码。*服务器*向客户端发送模型参数,*客户端*运行训练" -"并更新参数。更新后的参数被发回*服务器*,然后会对所有收到的参数更新进行平均聚" -"合。以上描述的是一轮联邦学习过程,我们将重复进行多轮学习。" +"这个概念很容易理解。我们必须启动一个*服务器*,然后对连接到*服务器*的*客户端*使用 " +":code:`cifar.py`中的代码。*服务器*向客户端发送模型参数,*客户端*运行训练并更新参数。更新后的参数被发回*服务器*,然后会对所有收到的参数更新进行平均聚合。以上描述的是一轮联邦学习过程,我们将重复进行多轮学习。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:197 msgid "" -"Finally, we will define our *client* logic in :code:`client.py` and build " -"upon the previously defined centralized training in :code:`cifar.py`. Our " -"*client* needs to import :code:`flwr`, but also :code:`torch` to update the " -"parameters on our PyTorch model:" +"Finally, we will define our *client* logic in :code:`client.py` and build" +" upon the previously defined centralized training in :code:`cifar.py`. " +"Our *client* needs to import :code:`flwr`, but also :code:`torch` to " +"update the parameters on our PyTorch model:" msgstr "" -"最后,我们将在 :code:`client.py` 中定义我们的 *client* 逻辑,并以之前在 :" -"code:`cifar.py` 中定义的集中式训练为基础。我们的 *client* 不仅需要导入 :code:" -"`flwr`,还需要导入 :code:`torch`,以更新 PyTorch 模型的参数:" +"最后,我们将在 :code:`client.py` 中定义我们的 *client* 逻辑,并以之前在 :code:`cifar.py` " +"中定义的集中式训练为基础。我们的 *client* 不仅需要导入 :code:`flwr`,还需要导入 :code:`torch`,以更新 " +"PyTorch 模型的参数:" #: ../../source/example-pytorch-from-centralized-to-federated.rst:213 msgid "" -"Implementing a Flower *client* basically means implementing a subclass of " -"either :code:`flwr.client.Client` or :code:`flwr.client.NumPyClient`. Our " -"implementation will be based on :code:`flwr.client.NumPyClient` and we'll " -"call it :code:`CifarClient`. :code:`NumPyClient` is slightly easier to " -"implement than :code:`Client` if you use a framework with good NumPy " -"interoperability (like PyTorch or TensorFlow/Keras) because it avoids some " -"of the boilerplate that would otherwise be necessary. :code:`CifarClient` " -"needs to implement four methods, two methods for getting/setting model " -"parameters, one method for training the model, and one method for testing " -"the model:" -msgstr "" -"实现 Flower *client*基本上意味着实现 :code:`flwr.client.Client` 或 :code:" -"`flwr.client.NumPyClient` 的子类。我们的代码实现将基于 :code:`flwr.client." -"NumPyClient`,并将其命名为 :code:`CifarClient`。如果使用具有良好 NumPy 互操作" -"性的框架(如 PyTorch 或 TensorFlow/Keras),:code:`NumPyClient`的实现比 :" -"code:`Client`略微容易一些,因为它避免了一些不必要的操作。:code:`CifarClient` " -"需要实现四个方法,两个用于获取/设置模型参数,一个用于训练模型,一个用于测试模" -"型:" +"Implementing a Flower *client* basically means implementing a subclass of" +" either :code:`flwr.client.Client` or :code:`flwr.client.NumPyClient`. " +"Our implementation will be based on :code:`flwr.client.NumPyClient` and " +"we'll call it :code:`CifarClient`. :code:`NumPyClient` is slightly easier" +" to implement than :code:`Client` if you use a framework with good NumPy " +"interoperability (like PyTorch or TensorFlow/Keras) because it avoids " +"some of the boilerplate that would otherwise be necessary. 
" +":code:`CifarClient` needs to implement four methods, two methods for " +"getting/setting model parameters, one method for training the model, and " +"one method for testing the model:" +msgstr "" +"实现 Flower *client*基本上意味着实现 :code:`flwr.client.Client` 或 " +":code:`flwr.client.NumPyClient` 的子类。我们的代码实现将基于 " +":code:`flwr.client.NumPyClient`,并将其命名为 :code:`CifarClient`。如果使用具有良好 NumPy" +" 互操作性的框架(如 PyTorch 或 TensorFlow/Keras),:code:`NumPyClient`的实现比 " +":code:`Client`略微容易一些,因为它避免了一些不必要的操作。:code:`CifarClient` " +"需要实现四个方法,两个用于获取/设置模型参数,一个用于训练模型,一个用于测试模型:" #: ../../source/example-pytorch-from-centralized-to-federated.rst:219 msgid ":code:`set_parameters`" msgstr ":code:`set_parameters`" +#: ../../source/example-pytorch-from-centralized-to-federated.rst:226 +msgid "get the updated local model weights and return them to the server" +msgstr "获取更新后的本地模型参数并发送回服务器" + +#: ../../source/example-pytorch-from-centralized-to-federated.rst:230 +msgid "return the local loss and accuracy to the server" +msgstr "向服务器返回本地损失值和精确度" + #: ../../source/example-pytorch-from-centralized-to-federated.rst:232 msgid "" -"The two :code:`NumPyClient` methods :code:`fit` and :code:`evaluate` make " -"use of the functions :code:`train()` and :code:`test()` previously defined " -"in :code:`cifar.py`. So what we really do here is we tell Flower through " -"our :code:`NumPyClient` subclass which of our already defined functions to " -"call for training and evaluation. We included type annotations to give you a " -"better understanding of the data types that get passed around." +"The two :code:`NumPyClient` methods :code:`fit` and :code:`evaluate` make" +" use of the functions :code:`train()` and :code:`test()` previously " +"defined in :code:`cifar.py`. So what we really do here is we tell Flower " +"through our :code:`NumPyClient` subclass which of our already defined " +"functions to call for training and evaluation. We included type " +"annotations to give you a better understanding of the data types that get" +" passed around." msgstr "" -"这两个 :code:`NumPyClient` 中的方法 :code:`fit` 和 :code:`evaluate` 使用了之" -"前在 :code:`cifar.py` 中定义的函数 :code:`train()` 和 :code:`test()`。因此," -"我们在这里要做的就是通过 :code:`NumPyClient` 子类告知 Flower 在训练和评估时要" -"调用哪些已定义的函数。我们加入了类型注解,以便让你更好地理解传递的数据类型。" +"这两个 :code:`NumPyClient` 中的方法 :code:`fit` 和 :code:`evaluate` 使用了之前在 " +":code:`cifar.py` 中定义的函数 :code:`train()` 和 :code:`test()`。因此,我们在这里要做的就是通过 " +":code:`NumPyClient` 子类告知 Flower " +"在训练和评估时要调用哪些已定义的函数。我们加入了类型注解,以便让你更好地理解传递的数据类型。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:280 msgid "" "All that's left to do it to define a function that loads both model and " -"data, creates a :code:`CifarClient`, and starts this client. You load your " -"data and model by using :code:`cifar.py`. Start :code:`CifarClient` with the " -"function :code:`fl.client.start_client()` by pointing it at the same IP " -"address we used in :code:`server.py`:" -msgstr "" -"剩下的就是定义模型和数据加载函数了。创建一个:code:`CifarClient`类,并运行这个" -"客服端。您将通过:code:`cifar.py`加载数据和模型。另外,通过:code:`fl.client." -"start_client()`函数来运行客户端:code:`CifarClient`,需要保证IP地址和:code:" -"`server.py`中所使用的一致:" +"data, creates a :code:`CifarClient`, and starts this client. You load " +"your data and model by using :code:`cifar.py`. 
Start :code:`CifarClient` " +"with the function :code:`fl.client.start_client()` by pointing it at the " +"same IP address we used in :code:`server.py`:" +msgstr "剩下的就是定义模型和数据加载函数了。创建一个:code:`CifarClient`类,并运行这个客服端。您将通过:code:`cifar.py`加载数据和模型。另外,通过:code:`fl.client.start_client()`函数来运行客户端:code:`CifarClient`,需要保证IP地址和:code:`server.py`中所使用的一致:" #: ../../source/example-pytorch-from-centralized-to-federated.rst:307 msgid "" -"in each window (make sure that the server is running before you do so) and " -"see your (previously centralized) PyTorch project run federated learning " -"across two clients. Congratulations!" -msgstr "" -"确保服务器正在运行后,您就能看到您的 PyTorch 项目(之前是集中式的)在两个客户" -"端上运行联邦学习了。祝贺!" +"in each window (make sure that the server is running before you do so) " +"and see your (previously centralized) PyTorch project run federated " +"learning across two clients. Congratulations!" +msgstr "确保服务器正在运行后,您就能看到您的 PyTorch 项目(之前是集中式的)在两个客户端上运行联邦学习了。祝贺!" #: ../../source/example-pytorch-from-centralized-to-federated.rst:312 msgid "" "The full source code for this example: `PyTorch: From Centralized To " -"Federated (Code) `_. Our example is, of course, somewhat over-" -"simplified because both clients load the exact same dataset, which isn't " -"realistic. You're now prepared to explore this topic further. How about " -"using different subsets of CIFAR-10 on each client? How about adding more " -"clients?" -msgstr "" -"本示例的完整源代码为:`PyTorch: 从集中式到联合式 `_。当然,我" -"们的示例有些过于简单,因为两个客户端都加载了完全相同的数据集,这并不真实。现" -"在,您已经准备好进一步探讨这一主题了。比如在每个客户端使用不同的 CIFAR-10 子" -"集会如何?增加更多客户端会如何?" +"Federated (Code) `_. Our example is, of course, " +"somewhat over-simplified because both clients load the exact same " +"dataset, which isn't realistic. You're now prepared to explore this topic" +" further. How about using different subsets of CIFAR-10 on each client? " +"How about adding more clients?" +msgstr "" +"本示例的完整源代码为:`PyTorch: 从集中式到联合式 " +"`_。当然,我们的示例有些过于简单,因为两个客户端都加载了完全相同的数据集,这并不真实。现在,您已经准备好进一步探讨这一主题了。比如在每个客户端使用不同的" +" CIFAR-10 子集会如何?增加更多客户端会如何?" #: ../../source/explanation-differential-privacy.rst:2 #: ../../source/explanation-differential-privacy.rst:11 @@ -3606,24 +3068,21 @@ msgstr "差分隐私" #: ../../source/explanation-differential-privacy.rst:3 #, fuzzy msgid "" -"The information in datasets like healthcare, financial transactions, user " -"preferences, etc., is valuable and has the potential for scientific " -"breakthroughs and provides important business insights. However, such data " -"is also sensitive and there is a risk of compromising individual privacy." -msgstr "" -"医疗保健、金融交易、用户偏好等数据集中的信息非常宝贵,有可能带来科学突破并提" -"供重要的商业见解。然而,这些数据也是敏感数据,存在泄露个人隐私的风险。" +"The information in datasets like healthcare, financial transactions, user" +" preferences, etc., is valuable and has the potential for scientific " +"breakthroughs and provides important business insights. However, such " +"data is also sensitive and there is a risk of compromising individual " +"privacy." +msgstr "医疗保健、金融交易、用户偏好等数据集中的信息非常宝贵,有可能带来科学突破并提供重要的商业见解。然而,这些数据也是敏感数据,存在泄露个人隐私的风险。" #: ../../source/explanation-differential-privacy.rst:6 #, fuzzy msgid "" "Traditional methods like anonymization alone would not work because of " -"attacks like Re-identification and Data Linkage. That's where differential " -"privacy comes in. It provides the possibility of analyzing data while " -"ensuring the privacy of individuals." -msgstr "" -"单靠匿名等传统方法是行不通的,因为会受到重新识别和数据链接等攻击。这就是差异" -"化隐私的用武之地。它提供了在分析数据的同时确保个人隐私的可能性。" +"attacks like Re-identification and Data Linkage. 
That's where " +"differential privacy comes in. It provides the possibility of analyzing " +"data while ensuring the privacy of individuals." +msgstr "单靠匿名等传统方法是行不通的,因为会受到重新识别和数据链接等攻击。这就是差异化隐私的用武之地。它提供了在分析数据的同时确保个人隐私的可能性。" #: ../../source/explanation-differential-privacy.rst:12 #, fuzzy @@ -3632,13 +3091,12 @@ msgid "" "instance, Alice's data). Differential Privacy (DP) guarantees that any " "analysis (M), like calculating the average income, will produce nearly " "identical results for both datasets (O and O' would be similar). This " -"preserves group patterns while obscuring individual details, ensuring the " -"individual's information remains hidden in the crowd." +"preserves group patterns while obscuring individual details, ensuring the" +" individual's information remains hidden in the crowd." msgstr "" -"试想一下,两个数据集除了一条记录(例如 Alice 的数据)之外完全相同。差分隐私" -"(DP)可以保证任何分析(M),比如计算平均收入,对两个数据集都会产生几乎相同的" -"结果(O 和 O' 将是相似的)。这既保留了群体模式,又掩盖了个人细节,确保个人的" -"信息隐藏在人群中。" +"试想一下,两个数据集除了一条记录(例如 Alice " +"的数据)之外完全相同。差分隐私(DP)可以保证任何分析(M),比如计算平均收入,对两个数据集都会产生几乎相同的结果(O 和 O' " +"将是相似的)。这既保留了群体模式,又掩盖了个人细节,确保个人的信息隐藏在人群中。" #: ../../source/explanation-differential-privacy.rst:-1 #, fuzzy @@ -3650,10 +3108,9 @@ msgstr "DP 介绍" msgid "" "One of the most commonly used mechanisms to achieve DP is adding enough " "noise to the output of the analysis to mask the contribution of each " -"individual in the data while preserving the overall accuracy of the analysis." -msgstr "" -"实现 DP 的最常用机制之一是在分析输出中加入足够的噪音,以掩盖数据中每个个体的" -"贡献,同时保持分析的整体准确性。" +"individual in the data while preserving the overall accuracy of the " +"analysis." +msgstr "实现 DP 的最常用机制之一是在分析输出中加入足够的噪音,以掩盖数据中每个个体的贡献,同时保持分析的整体准确性。" #: ../../source/explanation-differential-privacy.rst:25 #, fuzzy @@ -3665,18 +3122,17 @@ msgstr "编译 ProtoBuf 定义" msgid "" "Differential Privacy (DP) provides statistical guarantees against the " "information an adversary can infer through the output of a randomized " -"algorithm. It provides an unconditional upper bound on the influence of a " -"single individual on the output of the algorithm by adding noise [1]. A " -"randomized mechanism M provides (:math:`\\epsilon`, :math:`\\delta`)-" -"differential privacy if for any two neighboring databases, D :sub:`1` and D :" -"sub:`2`, that differ in only a single record, and for all possible outputs S " -"⊆ Range(A):" -msgstr "" -"差分隐私(Differential Privacy,DP)针对对手通过随机算法的输出所能推断出的信" -"息提供统计保证。它为单个个体通过添加噪声对算法输出的影响提供了一个无条件的上" -"限[1]。如果任意两个相邻的数据库D :sub:`1`和D :sub:`2`只有一条记录不同,并且对" -"于所有可能的输出S ⊆ Range(A),随机化机制M提供(:math:`epsilon`,:math:" -"`\\delta`)差异隐私:" +"algorithm. It provides an unconditional upper bound on the influence of a" +" single individual on the output of the algorithm by adding noise [1]. A " +"randomized mechanism M provides (:math:`\\epsilon`, " +":math:`\\delta`)-differential privacy if for any two neighboring " +"databases, D :sub:`1` and D :sub:`2`, that differ in only a single " +"record, and for all possible outputs S ⊆ Range(A):" +msgstr "" +"差分隐私(Differential " +"Privacy,DP)针对对手通过随机算法的输出所能推断出的信息提供统计保证。它为单个个体通过添加噪声对算法输出的影响提供了一个无条件的上限[1]。如果任意两个相邻的数据库D" +" :sub:`1`和D :sub:`2`只有一条记录不同,并且对于所有可能的输出S ⊆ " +"Range(A),随机化机制M提供(:math:`epsilon`,:math:`\\delta`)差异隐私:" #: ../../source/explanation-differential-privacy.rst:32 #, fuzzy @@ -3693,16 +3149,14 @@ msgid "" "The :math:`\\epsilon` parameter, also known as the privacy budget, is a " "metric of privacy loss. 
It also controls the privacy-utility trade-off; " "lower :math:`\\epsilon` values indicate higher levels of privacy but are " -"likely to reduce utility as well. The :math:`\\delta` parameter accounts for " -"a small probability on which the upper bound :math:`\\epsilon` does not " -"hold. The amount of noise needed to achieve differential privacy is " -"proportional to the sensitivity of the output, which measures the maximum " -"change in the output due to the inclusion or removal of a single record." +"likely to reduce utility as well. The :math:`\\delta` parameter accounts " +"for a small probability on which the upper bound :math:`\\epsilon` does " +"not hold. The amount of noise needed to achieve differential privacy is " +"proportional to the sensitivity of the output, which measures the maximum" +" change in the output due to the inclusion or removal of a single record." msgstr "" -":math:`\\epsilon`参数也称为隐私预算,是衡量隐私损失的指标。较低的 :math:" -"`\\epsilon` 值表示较高的隐私级别,但也可能降低效用。:math:`\\delta`参数考虑" -"了:math:`\\epsilon`上限不成立的小概率。实现差异化隐私所需的噪声量与输出的灵敏" -"度成正比,而输出的灵敏度是指由于包含或删除一条记录而导致的输出的最大变化。" +":math:`\\epsilon`参数也称为隐私预算,是衡量隐私损失的指标。较低的 :math:`\\epsilon` " +"值表示较高的隐私级别,但也可能降低效用。:math:`\\delta`参数考虑了:math:`\\epsilon`上限不成立的小概率。实现差异化隐私所需的噪声量与输出的灵敏度成正比,而输出的灵敏度是指由于包含或删除一条记录而导致的输出的最大变化。" #: ../../source/explanation-differential-privacy.rst:45 #, fuzzy @@ -3714,20 +3168,20 @@ msgstr "差分隐私" msgid "" "DP can be utilized in machine learning to preserve the privacy of the " "training data. Differentially private machine learning algorithms are " -"designed in a way to prevent the algorithm to learn any specific information " -"about any individual data points and subsequently prevent the model from " -"revealing sensitive information. Depending on the stage at which noise is " -"introduced, various methods exist for applying DP to machine learning " -"algorithms. One approach involves adding noise to the training data (either " -"to the features or labels), while another method entails injecting noise " -"into the gradients of the loss function during model training. Additionally, " -"such noise can be incorporated into the model's output." -msgstr "" -"机器学习中可以利用 DP 来保护训练数据的隐私。差分保密机器学习算法的设计方式是" -"防止算法学习到任何单个数据点的任何特定信息,从而防止模型泄露敏感信息。根据引" -"入噪声的阶段,有多种方法可将 DP 应用于机器学习算法。一种方法是在训练数据(特" -"征或标签)中添加噪声,另一种方法是在模型训练过程中向损失函数的梯度注入噪声。" -"此外,这种噪声还可以被纳入模型的输出中。" +"designed in a way to prevent the algorithm to learn any specific " +"information about any individual data points and subsequently prevent the" +" model from revealing sensitive information. Depending on the stage at " +"which noise is introduced, various methods exist for applying DP to " +"machine learning algorithms. One approach involves adding noise to the " +"training data (either to the features or labels), while another method " +"entails injecting noise into the gradients of the loss function during " +"model training. Additionally, such noise can be incorporated into the " +"model's output." +msgstr "" +"机器学习中可以利用 DP " +"来保护训练数据的隐私。差分保密机器学习算法的设计方式是防止算法学习到任何单个数据点的任何特定信息,从而防止模型泄露敏感信息。根据引入噪声的阶段,有多种方法可将" +" DP " +"应用于机器学习算法。一种方法是在训练数据(特征或标签)中添加噪声,另一种方法是在模型训练过程中向损失函数的梯度注入噪声。此外,这种噪声还可以被纳入模型的输出中。" #: ../../source/explanation-differential-privacy.rst:53 #, fuzzy @@ -3740,56 +3194,47 @@ msgid "" "Federated learning is a data minimization approach that allows multiple " "parties to collaboratively train a model without sharing their raw data. " "However, federated learning also introduces new privacy challenges. 
The " -"model updates between parties and the central server can leak information " -"about the local data. These leaks can be exploited by attacks such as " +"model updates between parties and the central server can leak information" +" about the local data. These leaks can be exploited by attacks such as " "membership inference and property inference attacks, or model inversion " "attacks." -msgstr "" -"联合学习是一种数据最小化方法,允许多方在不共享原始数据的情况下合作训练一个模" -"型。然而,联合学习也带来了新的隐私挑战。各方与中央服务器之间的模型更新可能会" -"泄露本地数据信息。这些泄漏信息可能会被攻击利用,如成员推断攻击、属性推断攻击" -"或模型反转攻击。" +msgstr "联合学习是一种数据最小化方法,允许多方在不共享原始数据的情况下合作训练一个模型。然而,联合学习也带来了新的隐私挑战。各方与中央服务器之间的模型更新可能会泄露本地数据信息。这些泄漏信息可能会被攻击利用,如成员推断攻击、属性推断攻击或模型反转攻击。" #: ../../source/explanation-differential-privacy.rst:58 #, fuzzy msgid "" -"DP can play a crucial role in federated learning to provide privacy for the " -"clients' data." +"DP can play a crucial role in federated learning to provide privacy for " +"the clients' data." msgstr "DP 可以在联合学习中发挥重要作用,为客户数据提供隐私保护。" #: ../../source/explanation-differential-privacy.rst:60 #, fuzzy msgid "" -"Depending on the granularity of privacy provision or the location of noise " -"addition, different forms of DP exist in federated learning. In this " -"explainer, we focus on two approaches of DP utilization in federated " -"learning based on where the noise is added: at the server (also known as the " -"center) or at the client (also known as the local)." +"Depending on the granularity of privacy provision or the location of " +"noise addition, different forms of DP exist in federated learning. In " +"this explainer, we focus on two approaches of DP utilization in federated" +" learning based on where the noise is added: at the server (also known as" +" the center) or at the client (also known as the local)." msgstr "" -"根据提供隐私的粒度或添加噪声的位置,联合学习中存在不同形式的 DP。在本说明中," -"我们将根据添加噪声的位置,重点介绍联合学习中利用 DP 的两种方法:在服务器(也" -"称为中心)或客户端(也称为本地)。" +"根据提供隐私的粒度或添加噪声的位置,联合学习中存在不同形式的 DP。在本说明中,我们将根据添加噪声的位置,重点介绍联合学习中利用 DP " +"的两种方法:在服务器(也称为中心)或客户端(也称为本地)。" #: ../../source/explanation-differential-privacy.rst:63 #, fuzzy msgid "" -"**Central Differential Privacy**: DP is applied by the server and the goal " -"is to prevent the aggregated model from leaking information about each " -"client's data." -msgstr "" -"**中央差分隐私**: DP 由服务器应用,目标是防止聚合模型泄露每个客户的数据信" -"息。" +"**Central Differential Privacy**: DP is applied by the server and the " +"goal is to prevent the aggregated model from leaking information about " +"each client's data." +msgstr "**中央差分隐私**: DP 由服务器应用,目标是防止聚合模型泄露每个客户的数据信息。" #: ../../source/explanation-differential-privacy.rst:65 #, fuzzy msgid "" "**Local Differential Privacy**: DP is applied on the client side before " -"sending any information to the server and the goal is to prevent the updates " -"that are sent to the server from leaking any information about the client's " -"data." -msgstr "" -"**本地差分隐私**: 在向服务器发送任何信息之前,在客户端应用 DP,目的是防止向" -"服务器发送的更新泄露任何有关客户端数据的信息。" +"sending any information to the server and the goal is to prevent the " +"updates that are sent to the server from leaking any information about " +"the client's data." 
+msgstr "**本地差分隐私**: 在向服务器发送任何信息之前,在客户端应用 DP,目的是防止向服务器发送的更新泄露任何有关客户端数据的信息。" #: ../../source/explanation-differential-privacy.rst:-1 #: ../../source/explanation-differential-privacy.rst:68 @@ -3801,34 +3246,28 @@ msgstr "差分隐私" #: ../../source/explanation-differential-privacy.rst:69 #, fuzzy msgid "" -"In this approach, which is also known as user-level DP, the central server " -"is responsible for adding noise to the globally aggregated parameters. It " -"should be noted that trust in the server is required." -msgstr "" -"在这种方法(也称为用户级 DP)中,中央服务器负责在全局汇总参数中添加噪声。需要" -"注意的是,这需要对服务器的信任。" +"In this approach, which is also known as user-level DP, the central " +"server is responsible for adding noise to the globally aggregated " +"parameters. It should be noted that trust in the server is required." +msgstr "在这种方法(也称为用户级 DP)中,中央服务器负责在全局汇总参数中添加噪声。需要注意的是,这需要对服务器的信任。" #: ../../source/explanation-differential-privacy.rst:76 #, fuzzy msgid "" -"While there are various ways to implement central DP in federated learning, " -"we concentrate on the algorithms proposed by [2] and [3]. The overall " -"approach is to clip the model updates sent by the clients and add some " -"amount of noise to the aggregated model. In each iteration, a random set of " -"clients is chosen with a specific probability for training. Each client " -"performs local training on its own data. The update of each client is then " -"clipped by some value `S` (sensitivity `S`). This would limit the impact of " -"any individual client which is crucial for privacy and often beneficial for " -"robustness. A common approach to achieve this is by restricting the `L2` " -"norm of the clients' model updates, ensuring that larger updates are scaled " -"down to fit within the norm `S`." -msgstr "" -"虽然在联合学习中实现中央数据处理的方法有很多种,但我们将重点放在[2]和[3]提出" -"的算法上。总体方法是剪辑客户端发送的模型更新,并在聚合模型中添加一定量的噪" -"声。在每次迭代中,以特定概率随机选择一组客户端进行训练。每个客户端对自己的数" -"据进行局部训练。然后,每个客户端的更新会被某个值`S`(灵敏度`S`)剪切。这将限" -"制任何单个客户端的影响,这对隐私至关重要,通常也有利于稳健性。实现这一点的常" -"用方法是限制客户机模型更新的 `L2` 准则,确保较大的更新被缩减以适应 `S` 准则。" +"While there are various ways to implement central DP in federated " +"learning, we concentrate on the algorithms proposed by [2] and [3]. The " +"overall approach is to clip the model updates sent by the clients and add" +" some amount of noise to the aggregated model. In each iteration, a " +"random set of clients is chosen with a specific probability for training." +" Each client performs local training on its own data. The update of each " +"client is then clipped by some value `S` (sensitivity `S`). This would " +"limit the impact of any individual client which is crucial for privacy " +"and often beneficial for robustness. A common approach to achieve this is" +" by restricting the `L2` norm of the clients' model updates, ensuring " +"that larger updates are scaled down to fit within the norm `S`." +msgstr "" +"虽然在联合学习中实现中央数据处理的方法有很多种,但我们将重点放在[2]和[3]提出的算法上。总体方法是剪辑客户端发送的模型更新,并在聚合模型中添加一定量的噪声。在每次迭代中,以特定概率随机选择一组客户端进行训练。每个客户端对自己的数据进行局部训练。然后,每个客户端的更新会被某个值`S`(灵敏度`S`)剪切。这将限制任何单个客户端的影响,这对隐私至关重要,通常也有利于稳健性。实现这一点的常用方法是限制客户机模型更新的" +" `L2` 准则,确保较大的更新被缩减以适应 `S` 准则。" #: ../../source/explanation-differential-privacy.rst:-1 #, fuzzy @@ -3838,15 +3277,14 @@ msgstr "剪贴" #: ../../source/explanation-differential-privacy.rst:89 #, fuzzy msgid "" -"Afterwards, the Gaussian mechanism is used to add noise in order to distort " -"the sum of all clients' updates. The amount of noise is scaled to the " -"sensitivity value to obtain a privacy guarantee. 
The Gaussian mechanism is " -"used with a noise sampled from `N (0, σ²)` where `σ = ( noise_scale * S ) / " -"(number of sampled clients)`." +"Afterwards, the Gaussian mechanism is used to add noise in order to " +"distort the sum of all clients' updates. The amount of noise is scaled to" +" the sensitivity value to obtain a privacy guarantee. The Gaussian " +"mechanism is used with a noise sampled from `N (0, σ²)` where `σ = ( " +"noise_scale * S ) / (number of sampled clients)`." msgstr "" -"然后,使用高斯机制添加噪声,以扭曲所有客户端的更新总和。噪声量与灵敏度值成正" -"比,以获得隐私保证。高斯机制的噪声采样范围为 `N (0, σ²)` ,其中 σ = ( 噪声规" -"模 * S ) / (采样客户数)`。" +"然后,使用高斯机制添加噪声,以扭曲所有客户端的更新总和。噪声量与灵敏度值成正比,以获得隐私保证。高斯机制的噪声采样范围为 `N (0, σ²)` " +",其中 σ = ( 噪声规模 * S ) / (采样客户数)`。" #: ../../source/explanation-differential-privacy.rst:94 #, fuzzy @@ -3856,40 +3294,34 @@ msgstr "剪贴" #: ../../source/explanation-differential-privacy.rst:96 #, fuzzy msgid "" -"There are two forms of clipping commonly used in Central DP: Fixed Clipping " -"and Adaptive Clipping." +"There are two forms of clipping commonly used in Central DP: Fixed " +"Clipping and Adaptive Clipping." msgstr "中央处理器常用的剪切有两种形式:固定剪切和自适应剪切。" #: ../../source/explanation-differential-privacy.rst:98 #, fuzzy msgid "" -"**Fixed Clipping** : A predefined fix threshold is set for the magnitude of " -"clients' updates. Any update exceeding this threshold is clipped back to the " -"threshold value." -msgstr "" -"** 固定削波** : 为客户端更新的大小设置了一个预定义的固定阈值。任何超过该阈值" -"的更新都会被剪切回阈值。" +"**Fixed Clipping** : A predefined fix threshold is set for the magnitude " +"of clients' updates. Any update exceeding this threshold is clipped back " +"to the threshold value." +msgstr "** 固定削波** : 为客户端更新的大小设置了一个预定义的固定阈值。任何超过该阈值的更新都会被剪切回阈值。" #: ../../source/explanation-differential-privacy.rst:100 #, fuzzy msgid "" -"**Adaptive Clipping** : The clipping threshold dynamically adjusts based on " -"the observed update distribution [4]. It means that the clipping value is " -"tuned during the rounds with respect to the quantile of the update norm " -"distribution." -msgstr "" -"** 自适应削波** : 削波阈值根据观察到的更新分布动态调整[4]。这意味着,在各轮" -"中,会根据更新规范分布的量化值调整削波值。" +"**Adaptive Clipping** : The clipping threshold dynamically adjusts based " +"on the observed update distribution [4]. It means that the clipping value" +" is tuned during the rounds with respect to the quantile of the update " +"norm distribution." +msgstr "** 自适应削波** : 削波阈值根据观察到的更新分布动态调整[4]。这意味着,在各轮中,会根据更新规范分布的量化值调整削波值。" #: ../../source/explanation-differential-privacy.rst:102 #, fuzzy msgid "" -"The choice between fixed and adaptive clipping depends on various factors " -"such as privacy requirements, data distribution, model complexity, and " +"The choice between fixed and adaptive clipping depends on various factors" +" such as privacy requirements, data distribution, model complexity, and " "others." -msgstr "" -"在固定剪切和自适应剪切之间做出选择取决于各种因素,如隐私要求、数据分布、模型" -"复杂性等。" +msgstr "在固定剪切和自适应剪切之间做出选择取决于各种因素,如隐私要求、数据分布、模型复杂性等。" #: ../../source/explanation-differential-privacy.rst:-1 #: ../../source/explanation-differential-privacy.rst:105 @@ -3902,13 +3334,12 @@ msgstr "差分隐私" #, fuzzy msgid "" "In this approach, each client is responsible for performing DP. Local DP " -"avoids the need for a fully trusted aggregator, but it should be noted that " -"local DP leads to a decrease in accuracy but better privacy in comparison to " -"central DP." 
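To make the central-DP recipe above concrete (clip each client update to an ``L2`` norm of at most ``S``, then add Gaussian noise with ``σ = (noise_scale * S) / (number of sampled clients)``), here is a minimal NumPy sketch. It illustrates the mechanism only and is not Flower's built-in implementation; the noise is added to the averaged update, which is one common reading of the formula above, and all names are placeholders.

.. code-block:: python

    import numpy as np


    def clip_update(update, clipping_norm):
        """Scale an update down so its L2 norm is at most `clipping_norm` (fixed clipping)."""
        norm = np.linalg.norm(update)
        return update * min(1.0, clipping_norm / (norm + 1e-12))


    def aggregate_with_central_dp(client_updates, clipping_norm, noise_scale, rng):
        """Average the clipped updates and add Gaussian noise (central DP, fixed clipping)."""
        clipped = [clip_update(u, clipping_norm) for u in client_updates]
        num_sampled_clients = len(clipped)
        # sigma = (noise_scale * S) / (number of sampled clients), as quoted above
        sigma = (noise_scale * clipping_norm) / num_sampled_clients
        aggregate = np.mean(clipped, axis=0)
        return aggregate + rng.normal(loc=0.0, scale=sigma, size=aggregate.shape)


    rng = np.random.default_rng(42)
    toy_updates = [rng.normal(size=10) for _ in range(3)]  # three fake client updates
    noisy_average = aggregate_with_central_dp(
        toy_updates, clipping_norm=1.0, noise_scale=1.2, rng=rng
    )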
+"avoids the need for a fully trusted aggregator, but it should be noted " +"that local DP leads to a decrease in accuracy but better privacy in " +"comparison to central DP." msgstr "" -"在这种方法中,每个客户端都负责执行 DP。本地 DP 避免了对完全可信的聚合器的需" -"求,但需要注意的是,与中央 DP 相比,本地 DP 会降低准确性,但却能更好地保护隐" -"私。" +"在这种方法中,每个客户端都负责执行 DP。本地 DP 避免了对完全可信的聚合器的需求,但需要注意的是,与中央 DP 相比,本地 DP " +"会降低准确性,但却能更好地保护隐私。" #: ../../source/explanation-differential-privacy.rst:116 #, fuzzy @@ -3919,25 +3350,24 @@ msgstr "在本说明中,我们将重点介绍实现本地 DP 的两种形式 #, fuzzy msgid "" "Each client adds noise to the local updates before sending them to the " -"server. To achieve (:math:`\\epsilon`, :math:`\\delta`)-DP, considering the " -"sensitivity of the local model to be ∆, Gaussian noise is applied with a " -"noise scale of σ where:" +"server. To achieve (:math:`\\epsilon`, :math:`\\delta`)-DP, considering " +"the sensitivity of the local model to be ∆, Gaussian noise is applied " +"with a noise scale of σ where:" msgstr "" -"每个客户端在向服务器发送本地更新之前,都会在本地更新中加入噪声。为了实现(:" -"math:`\\epsilon`, :math:`\\delta`)-DP,考虑到本地模型的灵敏度为 ∆,应用了高" -"斯噪声,噪声尺度为 σ,其中:" +"每个客户端在向服务器发送本地更新之前,都会在本地更新中加入噪声。为了实现(:math:`\\epsilon`, " +":math:`\\delta`)-DP,考虑到本地模型的灵敏度为 ∆,应用了高斯噪声,噪声尺度为 σ,其中:" #: ../../source/explanation-differential-privacy.rst:120 #, fuzzy msgid "" "\\small\n" -"\\frac{∆ \\times \\sqrt{2 \\times \\log\\left(\\frac{1.25}{\\delta}\\right)}}" -"{\\epsilon}\n" +"\\frac{∆ \\times \\sqrt{2 \\times " +"\\log\\left(\\frac{1.25}{\\delta}\\right)}}{\\epsilon}\n" "\n" msgstr "" "\\small\n" -"\\frac{∆ \\times \\sqrt{2 \\times \\log\\left(\\frac{1.25}{\\delta}\\right)}}" -"{\\epsilon}\n" +"\\frac{∆ \\times \\sqrt{2 \\times " +"\\log\\left(\\frac{1.25}{\\delta}\\right)}}{\\epsilon}\n" "\n" #: ../../source/explanation-differential-privacy.rst:125 @@ -3946,9 +3376,7 @@ msgid "" "Each client adds noise to the gradients of the model during the local " "training (DP-SGD). More specifically, in this approach, gradients are " "clipped and an amount of calibrated noise is injected into the gradients." -msgstr "" -"在局部训练过程中,每个客户端都会向模型的梯度添加噪声(DP-SGD)。更具体地说," -"在这种方法中,梯度会被剪切,并在梯度中注入一定量的校准噪声。" +msgstr "在局部训练过程中,每个客户端都会向模型的梯度添加噪声(DP-SGD)。更具体地说,在这种方法中,梯度会被剪切,并在梯度中注入一定量的校准噪声。" #: ../../source/explanation-differential-privacy.rst:128 #, fuzzy @@ -3970,25 +3398,26 @@ msgstr "[1] Dwork 等:《差分隐私的算法基础》。" #: ../../source/explanation-differential-privacy.rst:135 #, fuzzy msgid "" -"[2] McMahan et al. Learning Differentially Private Recurrent Language Models." +"[2] McMahan et al. Learning Differentially Private Recurrent Language " +"Models." msgstr "" -"McMahan, H. Brendan等. \"Learning differentially private recurrent language " -"models.\" arXiv preprint arXiv:1710.06963 (2017)." +"McMahan, H. Brendan等. \"Learning differentially private recurrent " +"language models.\" arXiv preprint arXiv:1710.06963 (2017)." #: ../../source/explanation-differential-privacy.rst:137 #, fuzzy msgid "" -"[3] Geyer et al. Differentially Private Federated Learning: A Client Level " -"Perspective." +"[3] Geyer et al. Differentially Private Federated Learning: A Client " +"Level Perspective." msgstr "[3] Geyer 等人。差异化化私人联合学习:客户层面的视角。" #: ../../source/explanation-differential-privacy.rst:139 #, fuzzy -msgid "" -"[4] Galen et al. Differentially Private Learning with Adaptive Clipping." +msgid "[4] Galen et al. Differentially Private Learning with Adaptive Clipping." msgstr "" -"Andrew, Galen等. \"Differentially private learning with adaptive clipping.\" " -"Advances in Neural Information Processing Systems 34 (2021): 17455-17466." 
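As a companion to the local-DP noise scale given above, the short sketch below computes ``σ = ∆ · sqrt(2 · ln(1.25 / δ)) / ε`` and perturbs a local update before it would leave the client. It assumes the update's norm is already bounded by the sensitivity ``∆`` and is purely illustrative.

.. code-block:: python

    import math

    import numpy as np


    def local_dp_sigma(sensitivity, epsilon, delta):
        """Gaussian noise scale sigma = ∆ * sqrt(2 * ln(1.25 / delta)) / epsilon (natural log)."""
        return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon


    def privatize_local_update(update, sensitivity, epsilon, delta, rng):
        """Add calibrated Gaussian noise; assumes ||update|| is already bounded by `sensitivity`."""
        sigma = local_dp_sigma(sensitivity, epsilon, delta)
        return update + rng.normal(loc=0.0, scale=sigma, size=update.shape)


    rng = np.random.default_rng(0)
    update = rng.normal(size=5)
    private_update = privatize_local_update(
        update, sensitivity=1.0, epsilon=2.0, delta=1e-5, rng=rng
    )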
+"Andrew, Galen等. \"Differentially private learning with adaptive " +"clipping.\" Advances in Neural Information Processing Systems 34 (2021): " +"17455-17466." #: ../../source/explanation-federated-evaluation.rst:2 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:292 @@ -3998,11 +3427,9 @@ msgstr "联邦学习评估" #: ../../source/explanation-federated-evaluation.rst:4 msgid "" "There are two main approaches to evaluating models in federated learning " -"systems: centralized (or server-side) evaluation and federated (or client-" -"side) evaluation." -msgstr "" -"评估联合学习系统中的模型主要有两种方法:集中(或服务器端)评估和联邦(或客户" -"端)评估。" +"systems: centralized (or server-side) evaluation and federated (or " +"client-side) evaluation." +msgstr "评估联合学习系统中的模型主要有两种方法:集中(或服务器端)评估和联邦(或客户端)评估。" #: ../../source/explanation-federated-evaluation.rst:8 msgid "Centralized Evaluation" @@ -4018,9 +3445,7 @@ msgid "" "evaluation function during initialization. An evaluation function is any " "function that can take the current global model parameters as input and " "return evaluation results:" -msgstr "" -"所有内置策略都通过在初始化过程中提供一个评估函数来支持集中评估。评估函数是任" -"何可以将当前全局模型参数作为输入并返回评估结果的函数:" +msgstr "所有内置策略都通过在初始化过程中提供一个评估函数来支持集中评估。评估函数是任何可以将当前全局模型参数作为输入并返回评估结果的函数:" #: ../../source/explanation-federated-evaluation.rst:58 msgid "Custom Strategies" @@ -4028,14 +3453,14 @@ msgstr "定制策略" #: ../../source/explanation-federated-evaluation.rst:60 msgid "" -"The :code:`Strategy` abstraction provides a method called :code:`evaluate` " -"that can directly be used to evaluate the current global model parameters. " -"The current server implementation calls :code:`evaluate` after parameter " -"aggregation and before federated evaluation (see next paragraph)." +"The :code:`Strategy` abstraction provides a method called " +":code:`evaluate` that can directly be used to evaluate the current global" +" model parameters. The current server implementation calls " +":code:`evaluate` after parameter aggregation and before federated " +"evaluation (see next paragraph)." msgstr "" -":code:`Strategy` 抽象提供了一个名为 :code:`evaluate` 的方法,可直接用于评估当" -"前的全局模型参数。服务器会在参数聚合后和联邦评估前调用 :code:`evaluate`(见下" -"段)。" +":code:`Strategy` 抽象提供了一个名为 :code:`evaluate` " +"的方法,可直接用于评估当前的全局模型参数。服务器会在参数聚合后和联邦评估前调用 :code:`evaluate`(见下段)。" #: ../../source/explanation-federated-evaluation.rst:65 msgid "Federated Evaluation" @@ -4047,10 +3472,9 @@ msgstr "实现联邦评估" #: ../../source/explanation-federated-evaluation.rst:70 msgid "" -"Client-side evaluation happens in the :code:`Client.evaluate` method and can " -"be configured from the server side." -msgstr "" -"客户端评估在 :code:`Client.evaluate` 方法中进行,并可从服务器端进行配置。" +"Client-side evaluation happens in the :code:`Client.evaluate` method and " +"can be configured from the server side." +msgstr "客户端评估在 :code:`Client.evaluate` 方法中进行,并可从服务器端进行配置。" #: ../../source/explanation-federated-evaluation.rst:101 msgid "Configuring Federated Evaluation" @@ -4064,55 +3488,51 @@ msgstr "联邦评估可从服务器端进行配置。内置策略支持以下参 #: ../../source/explanation-federated-evaluation.rst:105 msgid "" -":code:`fraction_evaluate`: a :code:`float` defining the fraction of clients " -"that will be selected for evaluation. If :code:`fraction_evaluate` is set " -"to :code:`0.1` and :code:`100` clients are connected to the server, then :" -"code:`10` will be randomly selected for evaluation. If :code:" -"`fraction_evaluate` is set to :code:`0.0`, federated evaluation will be " -"disabled." 
+":code:`fraction_evaluate`: a :code:`float` defining the fraction of " +"clients that will be selected for evaluation. If " +":code:`fraction_evaluate` is set to :code:`0.1` and :code:`100` clients " +"are connected to the server, then :code:`10` will be randomly selected " +"for evaluation. If :code:`fraction_evaluate` is set to :code:`0.0`, " +"federated evaluation will be disabled." msgstr "" -":code:`fraction_evaluate`: :code:`float`,定义了被选中进行评估的客户端的比" -"例。如果 :code:`fraction_evaluate` 设置为 :code:`0.1`,并且 :code:`100` 个客" -"户端连接到服务器,那么 :code:`10` 个客户端将被随机选中进行评估。如果 :code:" -"`fraction_evaluate` 设置为 :code:`0.0`,联邦评估将被禁用。" +":code:`fraction_evaluate`: :code:`float`,定义了被选中进行评估的客户端的比例。如果 " +":code:`fraction_evaluate` 设置为 :code:`0.1`,并且 :code:`100` 个客户端连接到服务器,那么 " +":code:`10` 个客户端将被随机选中进行评估。如果 :code:`fraction_evaluate` 设置为 " +":code:`0.0`,联邦评估将被禁用。" #: ../../source/explanation-federated-evaluation.rst:106 msgid "" -":code:`min_evaluate_clients`: an :code:`int`: the minimum number of clients " -"to be selected for evaluation. If :code:`fraction_evaluate` is set to :code:" -"`0.1`, :code:`min_evaluate_clients` is set to 20, and :code:`100` clients " -"are connected to the server, then :code:`20` clients will be selected for " -"evaluation." +":code:`min_evaluate_clients`: an :code:`int`: the minimum number of " +"clients to be selected for evaluation. If :code:`fraction_evaluate` is " +"set to :code:`0.1`, :code:`min_evaluate_clients` is set to 20, and " +":code:`100` clients are connected to the server, then :code:`20` clients " +"will be selected for evaluation." msgstr "" -":code:`min_evaluate_clients`:一个 :code:`int`,需要评估的客户的最小数量。如" -"果 :code:`fraction_evaluate` 设置为 :code:`0.1`,:code:" -"`min_evaluate_clients` 设置为 20,并且有 :code:`100` 个客户端已连接到服务器," -"那么 :code:`20` 个客户端将被选中进行评估。" +":code:`min_evaluate_clients`:一个 :code:`int`,需要评估的客户的最小数量。如果 " +":code:`fraction_evaluate` 设置为 :code:`0.1`,:code:`min_evaluate_clients` " +"设置为 20,并且有 :code:`100` 个客户端已连接到服务器,那么 :code:`20` 个客户端将被选中进行评估。" #: ../../source/explanation-federated-evaluation.rst:107 msgid "" ":code:`min_available_clients`: an :code:`int` that defines the minimum " -"number of clients which need to be connected to the server before a round of " -"federated evaluation can start. If fewer than :code:`min_available_clients` " -"are connected to the server, the server will wait until more clients are " -"connected before it continues to sample clients for evaluation." +"number of clients which need to be connected to the server before a round" +" of federated evaluation can start. If fewer than " +":code:`min_available_clients` are connected to the server, the server " +"will wait until more clients are connected before it continues to sample " +"clients for evaluation." msgstr "" -":code:`min_available_clients`: :code:`int`,定义了在一轮联邦评估开始之前,需" -"要连接到服务器的最小客户端数量。如果连接到服务器的客户端数量少于 :code:" -"`min_available_clients`,服务器将等待更多客户端连接后,才继续采样客户端进行评" -"估。" +":code:`min_available_clients`: " +":code:`int`,定义了在一轮联邦评估开始之前,需要连接到服务器的最小客户端数量。如果连接到服务器的客户端数量少于 " +":code:`min_available_clients`,服务器将等待更多客户端连接后,才继续采样客户端进行评估。" #: ../../source/explanation-federated-evaluation.rst:108 msgid "" ":code:`on_evaluate_config_fn`: a function that returns a configuration " -"dictionary which will be sent to the selected clients. The function will be " -"called during each round and provides a convenient way to customize client-" -"side evaluation from the server side, for example, to configure the number " -"of validation steps performed." 
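Putting the four parameters described above together, a server-side sketch could look like the following. It assumes the built-in :code:`FedAvg` strategy and the :code:`fl.server.start_server` entry point referenced elsewhere in this guide; the :code:`evaluate_config` schedule is hypothetical.

.. code-block:: python

    import flwr as fl


    def evaluate_config(server_round: int):
        """Send a per-round evaluation config to the selected clients."""
        # Hypothetical schedule: run more validation steps in later rounds.
        return {"val_steps": 5 if server_round < 4 else 10}


    strategy = fl.server.strategy.FedAvg(
        fraction_evaluate=0.1,      # sample 10% of connected clients for evaluation
        min_evaluate_clients=20,    # but never fewer than 20 clients
        min_available_clients=100,  # wait until at least 100 clients are connected
        on_evaluate_config_fn=evaluate_config,
    )

    fl.server.start_server(
        server_address="0.0.0.0:8080",
        config=fl.server.ServerConfig(num_rounds=3),
        strategy=strategy,
    )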
-msgstr "" -"code:`on_evaluate_config_fn`:返回配置字典的函数,该字典将发送给选定的客户" -"端。该函数将在每一轮中被调用,并提供了一种方便的方法来从服务器端自定义客户端" -"评估,例如,配置执行的验证步骤数。" +"dictionary which will be sent to the selected clients. The function will " +"be called during each round and provides a convenient way to customize " +"client-side evaluation from the server side, for example, to configure " +"the number of validation steps performed." +msgstr "code:`on_evaluate_config_fn`:返回配置字典的函数,该字典将发送给选定的客户端。该函数将在每一轮中被调用,并提供了一种方便的方法来从服务器端自定义客户端评估,例如,配置执行的验证步骤数。" #: ../../source/explanation-federated-evaluation.rst:135 msgid "Evaluating Local Model Updates During Training" @@ -4120,11 +3540,10 @@ msgstr "评估训练期间的本地模型更新" #: ../../source/explanation-federated-evaluation.rst:137 msgid "" -"Model parameters can also be evaluated during training. :code:`Client.fit` " -"can return arbitrary evaluation results as a dictionary:" -msgstr "" -"模型参数也可在训练过程中进行评估。 :code:`Client.fit`可以字典形式返回任意评估" -"结果:" +"Model parameters can also be evaluated during training. " +":code:`Client.fit` can return arbitrary evaluation results as a " +"dictionary:" +msgstr "模型参数也可在训练过程中进行评估。 :code:`Client.fit`可以字典形式返回任意评估结果:" #: ../../source/explanation-federated-evaluation.rst:177 msgid "Full Code Example" @@ -4132,14 +3551,14 @@ msgstr "完整代码示例" #: ../../source/explanation-federated-evaluation.rst:179 msgid "" -"For a full code example that uses both centralized and federated evaluation, " -"see the *Advanced TensorFlow Example* (the same approach can be applied to " -"workloads implemented in any other framework): https://github.com/adap/" -"flower/tree/main/examples/advanced-tensorflow" +"For a full code example that uses both centralized and federated " +"evaluation, see the *Advanced TensorFlow Example* (the same approach can " +"be applied to workloads implemented in any other framework): " +"https://github.com/adap/flower/tree/main/examples/advanced-tensorflow" msgstr "" "有关同时使用集中评估和联邦评估的完整代码示例,请参阅 *Advanced TensorFlow " -"Example*(同样的方法也可应用于任何其他框架中): https://github.com/adap/" -"flower/tree/main/examples/advanced-tensorflow" +"Example*(同样的方法也可应用于任何其他框架中): " +"https://github.com/adap/flower/tree/main/examples/advanced-tensorflow" #: ../../source/fed/0000-20200102-fed-template.md:10 msgid "FED Template" @@ -4312,12 +3731,10 @@ msgstr "保留 GitHub 问题,用于跟踪进行中的工作" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:37 msgid "" -"ensure community participants can successfully drive changes to completion " -"across one or more releases while stakeholders are adequately represented " -"throughout the process" -msgstr "" -"确保社区参与者能够成功推动改动,完成一个或多个版本,同时利益相关者在整个过程" -"中得到充分展现" +"ensure community participants can successfully drive changes to " +"completion across one or more releases while stakeholders are adequately " +"represented throughout the process" +msgstr "确保社区参与者能够成功推动改动,完成一个或多个版本,同时利益相关者在整个过程中得到充分展现" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:39 msgid "Hence, an Enhancement Doc combines aspects of" @@ -4344,69 +3761,61 @@ msgstr "该文件是与社区合作逐步创建的。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:49 msgid "" "For far-fetching changes or features proposed to Flower, an abstraction " -"beyond a single GitHub issue or pull request is required to understand and " -"communicate upcoming changes to the project." +"beyond a single GitHub issue or pull request is required to understand " +"and communicate upcoming changes to the project." 
msgstr "" -"对于向 Flower 提出的远期变更或功能,需要一个超越单个 GitHub 问题或拉取请求" -"(pull request)的抽象概念,以了解和沟通项目即将发生的变更。" +"对于向 Flower 提出的远期变更或功能,需要一个超越单个 GitHub 问题或拉取请求(pull " +"request)的抽象概念,以了解和沟通项目即将发生的变更。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:51 msgid "" -"The purpose of this process is to reduce the amount of \"tribal knowledge\" " -"in our community. By moving decisions from Slack threads, video calls, and " -"hallway conversations into a well-tracked artifact, this process aims to " -"enhance communication and discoverability." +"The purpose of this process is to reduce the amount of \"tribal " +"knowledge\" in our community. By moving decisions from Slack threads, " +"video calls, and hallway conversations into a well-tracked artifact, this" +" process aims to enhance communication and discoverability." msgstr "" -"这一流程的目的是减少我们社区中 \"部落知识 \"的数量。通过将决策从 Slack 线程、" -"视频通话和走廊对话转移到一个跟踪良好的工作环境中,该流程旨在加强沟通和可发现" -"性。" +"这一流程的目的是减少我们社区中 \"部落知识 \"的数量。通过将决策从 Slack " +"线程、视频通话和走廊对话转移到一个跟踪良好的工作环境中,该流程旨在加强沟通和可发现性。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:55 msgid "" -"Roughly any larger, user-facing enhancement should follow the Enhancement " -"process. If an enhancement would be described in either written or verbal " -"communication to anyone besides the author or developer, then consider " -"creating an Enhancement Doc." -msgstr "" -"任何较大的、面向用户的增强都应遵循增强流程。如果要以书面或口头形式向作者或开" -"发人员以外的任何人描述增强功能,则应考虑创建改善文档。" +"Roughly any larger, user-facing enhancement should follow the Enhancement" +" process. If an enhancement would be described in either written or " +"verbal communication to anyone besides the author or developer, then " +"consider creating an Enhancement Doc." +msgstr "任何较大的、面向用户的增强都应遵循增强流程。如果要以书面或口头形式向作者或开发人员以外的任何人描述增强功能,则应考虑创建改善文档。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:57 msgid "" -"Similarly, any technical effort (refactoring, major architectural change) " -"that will impact a large section of the development community should also be " -"communicated widely. The Enhancement process is suited for this even if it " -"will have zero impact on the typical user or operator." -msgstr "" -"同样,任何会对开发社区的大部分人产生影响的技术工作(重构、重大架构变更)也应" -"广泛传播。即使对典型用户或操作员的影响为零,改进流程也适用于这种情况。" +"Similarly, any technical effort (refactoring, major architectural change)" +" that will impact a large section of the development community should " +"also be communicated widely. The Enhancement process is suited for this " +"even if it will have zero impact on the typical user or operator." +msgstr "同样,任何会对开发社区的大部分人产生影响的技术工作(重构、重大架构变更)也应广泛传播。即使对典型用户或操作员的影响为零,改进流程也适用于这种情况。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:61 msgid "" -"For small changes and additions, going through the Enhancement process would " -"be time-consuming and unnecessary. This includes, for example, adding new " -"Federated Learning algorithms, as these only add features without changing " -"how Flower works or is used." +"For small changes and additions, going through the Enhancement process " +"would be time-consuming and unnecessary. This includes, for example, " +"adding new Federated Learning algorithms, as these only add features " +"without changing how Flower works or is used." 
msgstr "" -"对于小的改动和添加,通过 \"改善\"程序既耗时又没有必要。例如,这包括添加新的联" -"邦学习算法,因为这只会增加功能,而不会改变 \"Flower \"的工作或使用方式。" +"对于小的改动和添加,通过 \"改善\"程序既耗时又没有必要。例如,这包括添加新的联邦学习算法,因为这只会增加功能,而不会改变 \"Flower " +"\"的工作或使用方式。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:63 msgid "" "Enhancements are different from feature requests, as they are already " -"providing a laid-out path for implementation and are championed by members " -"of the community." -msgstr "" -"增强功能与功能请求不同,因为它们已经提供了实施路径,并得到了社区成员的支持。" +"providing a laid-out path for implementation and are championed by " +"members of the community." +msgstr "增强功能与功能请求不同,因为它们已经提供了实施路径,并得到了社区成员的支持。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:67 msgid "" "An Enhancement is captured in a Markdown file that follows a defined " -"template and a workflow to review and store enhancement docs for reference " -"— the Enhancement Doc." -msgstr "" -"增强功能被记录在一个 Markdown 文件中,该文件遵循已定义的模板和工作流程,用于" -"审查和存储增强功能文档(即增强功能文档)以供参考。" +"template and a workflow to review and store enhancement docs for " +"reference — the Enhancement Doc." +msgstr "增强功能被记录在一个 Markdown 文件中,该文件遵循已定义的模板和工作流程,用于审查和存储增强功能文档(即增强功能文档)以供参考。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:69 msgid "Enhancement Doc Template" @@ -4457,11 +3866,9 @@ msgstr "描述数据" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:92 msgid "" -"**fed-number** (Required) The `fed-number` of the last Flower Enhancement " -"Doc + 1. With this number, it becomes easy to reference other proposals." -msgstr "" -"**fed-number**(必填)上一个Flower增强文件的 \"fed-number \"+1。有了这个编" -"号,就很容易参考其他提案。" +"**fed-number** (Required) The `fed-number` of the last Flower Enhancement" +" Doc + 1. With this number, it becomes easy to reference other proposals." +msgstr "**fed-number**(必填)上一个Flower增强文件的 \"fed-number \"+1。有了这个编号,就很容易参考其他提案。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:94 msgid "**title** (Required) The title of the proposal in plain language." @@ -4469,22 +3876,20 @@ msgstr "**标题** (必填)用简明语言写出提案的标题。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:96 msgid "" -"**status** (Required) The current status of the proposal. See [workflow]" -"(#workflow) for the possible states." -msgstr "" -"**status** (必填)提案的当前状态。有关可能的状态,请参阅 [工作流程]" -"(#workflow)。" +"**status** (Required) The current status of the proposal. See " +"[workflow](#workflow) for the possible states." +msgstr "**status** (必填)提案的当前状态。有关可能的状态,请参阅 [工作流程](#workflow)。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:98 msgid "" -"**authors** (Required) A list of authors of the proposal. This is simply the " -"GitHub ID." +"**authors** (Required) A list of authors of the proposal. This is simply " +"the GitHub ID." msgstr "**作者**(必填) 提案的作者列表。这只是 GitHub ID。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:100 msgid "" -"**creation-date** (Required) The date that the proposal was first submitted " -"in a PR." +"**creation-date** (Required) The date that the proposal was first " +"submitted in a PR." msgstr "**创建日期**(必填) 建议书在 PR 中首次提交的日期。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:102 @@ -4495,8 +3900,8 @@ msgstr "**最后更新** (可选)提案最后一次重大修改的日期。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:104 msgid "" -"**see-also** (Optional) A list of other proposals that are relevant to this " -"one." +"**see-also** (Optional) A list of other proposals that are relevant to " +"this one." 
msgstr "**另见** (可选)与本提案相关的其他提案清单。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:106 @@ -4504,8 +3909,7 @@ msgid "**replaces** (Optional) A list of proposals that this one replaces." msgstr "**取代**(可选) 这份提案所取代的提案列表。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:108 -msgid "" -"**superseded-by** (Optional) A list of proposals that this one supersedes." +msgid "**superseded-by** (Optional) A list of proposals that this one supersedes." msgstr "**被取代者** (可选) 此提案取代的提案列表。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:111 @@ -4515,52 +3919,46 @@ msgstr "工作流程" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:113 msgid "" "The idea forming the enhancement should already have been discussed or " -"pitched in the community. As such, it needs a champion, usually the author, " -"who shepherds the enhancement. This person also has to find committers to " -"Flower willing to review the proposal." -msgstr "" -"形成增强功能的想法应该已经在社区中讨论过或提出过。因此,它需要一个支持者(通" -"常是作者)来引导增强。这个人还必须找到愿意审核提案的提交者。" +"pitched in the community. As such, it needs a champion, usually the " +"author, who shepherds the enhancement. This person also has to find " +"committers to Flower willing to review the proposal." +msgstr "形成增强功能的想法应该已经在社区中讨论过或提出过。因此,它需要一个支持者(通常是作者)来引导增强。这个人还必须找到愿意审核提案的提交者。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:115 msgid "" "New enhancements are checked in with a file name in the form of `NNNN-" -"YYYYMMDD-enhancement-title.md`, with `NNNN` being the Flower Enhancement Doc " -"number, to `enhancements`. All enhancements start in `provisional` state as " -"part of a pull request. Discussions are done as part of the pull request " -"review." +"YYYYMMDD-enhancement-title.md`, with `NNNN` being the Flower Enhancement " +"Doc number, to `enhancements`. All enhancements start in `provisional` " +"state as part of a pull request. Discussions are done as part of the pull" +" request review." msgstr "" -"新的增强功能以 `NNNN-YYYYMMDD-enhancement-title.md` 的文件名签入,其中 " -"`NNNN` 是花朵增强文档的编号,并将其转入 `enhancements`。作为拉取请求(pull " -"request)的一部分,所有增强功能都从 `provisional` 状态开始。讨论是作为拉取请" -"求审查的一部分进行的。" +"新的增强功能以 `NNNN-YYYYMMDD-enhancement-title.md` 的文件名签入,其中 `NNNN` " +"是花朵增强文档的编号,并将其转入 `enhancements`。作为拉取请求(pull request)的一部分,所有增强功能都从 " +"`provisional` 状态开始。讨论是作为拉取请求审查的一部分进行的。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:117 msgid "" -"Once an enhancement has been reviewed and approved, its status is changed to " -"`implementable`. The actual implementation is then done in separate pull " -"requests. These pull requests should mention the respective enhancement as " -"part of their description. After the implementation is done, the proposal " -"status is changed to `implemented`." +"Once an enhancement has been reviewed and approved, its status is changed" +" to `implementable`. The actual implementation is then done in separate " +"pull requests. These pull requests should mention the respective " +"enhancement as part of their description. After the implementation is " +"done, the proposal status is changed to `implemented`." msgstr "" -"一旦增强功能通过审核和批准,其状态就会变为 `可实施`。实际的实施工作将在单独的" -"拉取请求中完成。这些拉取请求应在其描述中提及相应的增强功能。实施完成后,提案" -"状态将更改为 `已实施`。" +"一旦增强功能通过审核和批准,其状态就会变为 " +"`可实施`。实际的实施工作将在单独的拉取请求中完成。这些拉取请求应在其描述中提及相应的增强功能。实施完成后,提案状态将更改为 `已实施`。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:119 msgid "" -"Under certain conditions, other states are possible. An Enhancement has the " -"following states:" +"Under certain conditions, other states are possible. 
An Enhancement has " +"the following states:" msgstr "在某些条件下,还可能出现其他状态。增强提案具有以下状态:" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:121 msgid "" "`provisional`: The enhancement has been proposed and is actively being " -"defined. This is the starting state while the proposal is being fleshed out " -"and actively defined and discussed." -msgstr "" -"`暂定`: 已提出改进建议并正在积极定义。这是在提案得到充实、积极定义和讨论时的" -"起始状态。" +"defined. This is the starting state while the proposal is being fleshed " +"out and actively defined and discussed." +msgstr "`暂定`: 已提出改进建议并正在积极定义。这是在提案得到充实、积极定义和讨论时的起始状态。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:122 msgid "`implementable`: The enhancement has been reviewed and approved." @@ -4573,14 +3971,13 @@ msgid "" msgstr "`已实施`: 增强功能已实施,不再主动更改。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:124 -msgid "" -"`deferred`: The enhancement is proposed but not actively being worked on." +msgid "`deferred`: The enhancement is proposed but not actively being worked on." msgstr "`推迟`: 已提出改进建议,但尚未积极开展工作。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:125 msgid "" -"`rejected`: The authors and reviewers have decided that this enhancement is " -"not moving forward." +"`rejected`: The authors and reviewers have decided that this enhancement " +"is not moving forward." msgstr "`拒绝`: 作者和审稿人已决定不再推进该增强功能。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:126 @@ -4593,21 +3990,17 @@ msgstr "`已替换`: 增强功能已被新的增强功能取代。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:131 msgid "" -"Adding an additional process to the ones already provided by GitHub (Issues " -"and Pull Requests) adds more complexity and can be a barrier for potential " -"first-time contributors." -msgstr "" -"在 GitHub 已提供的流程(问题和拉取请求)之外再增加一个流程,会增加复杂性,并" -"可能成为潜在首次贡献者的障碍。" +"Adding an additional process to the ones already provided by GitHub " +"(Issues and Pull Requests) adds more complexity and can be a barrier for " +"potential first-time contributors." +msgstr "在 GitHub 已提供的流程(问题和拉取请求)之外再增加一个流程,会增加复杂性,并可能成为潜在首次贡献者的障碍。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:133 msgid "" "Expanding the proposal template beyond the single-sentence description " -"currently required in the features issue template may be a heavy burden for " -"non-native English speakers." -msgstr "" -"对于英语非母语者来说,将提案模板扩展到目前要求的单句描述之外可能是一个沉重的" -"负担。" +"currently required in the features issue template may be a heavy burden " +"for non-native English speakers." +msgstr "对于英语非母语者来说,将提案模板扩展到目前要求的单句描述之外可能是一个沉重的负担。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:137 msgid "GitHub Issues" @@ -4616,17 +4009,16 @@ msgstr "GitHub 问题" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:139 msgid "" "Using GitHub Issues for these kinds of enhancements is doable. One could " -"use, for example, tags, to differentiate and filter them from other issues. " -"The main issue is in discussing and reviewing an enhancement: GitHub issues " -"only have a single thread for comments. Enhancements usually have multiple " -"threads of discussion at the same time for various parts of the doc. " -"Managing these multiple discussions can be confusing when using GitHub " -"Issues." -msgstr "" -"使用 GitHub Issues 进行此类改进是可行的。例如,我们可以使用标签来区分和过滤这" -"些问题。主要的问题在于讨论和审查增强功能: GitHub 问题只有一个评论线程。而增" -"强功能通常会同时有多个讨论线程,针对文档的不同部分。在使用 GitHub 问题时,管" -"理这些多重讨论会很混乱。" +"use, for example, tags, to differentiate and filter them from other " +"issues. 
The main issue is in discussing and reviewing an enhancement: " +"GitHub issues only have a single thread for comments. Enhancements " +"usually have multiple threads of discussion at the same time for various " +"parts of the doc. Managing these multiple discussions can be confusing " +"when using GitHub Issues." +msgstr "" +"使用 GitHub Issues 进行此类改进是可行的。例如,我们可以使用标签来区分和过滤这些问题。主要的问题在于讨论和审查增强功能: " +"GitHub 问题只有一个评论线程。而增强功能通常会同时有多个讨论线程,针对文档的不同部分。在使用 GitHub " +"问题时,管理这些多重讨论会很混乱。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:141 msgid "Google Docs" @@ -4634,15 +4026,15 @@ msgstr "谷歌文档" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:143 msgid "" -"Google Docs allow for multiple threads of discussions. But as Google Docs " -"are hosted outside the project, their discoverability by the community needs " -"to be taken care of. A list of links to all proposals has to be managed and " -"made available for the community. Compared to shipping proposals as part of " -"Flower's repository, the potential for missing links is much higher." +"Google Docs allow for multiple threads of discussions. But as Google Docs" +" are hosted outside the project, their discoverability by the community " +"needs to be taken care of. A list of links to all proposals has to be " +"managed and made available for the community. Compared to shipping " +"proposals as part of Flower's repository, the potential for missing links" +" is much higher." msgstr "" -"谷歌文档允许多线程讨论。但是,由于谷歌文档是在项目之外托管的,因此需要注意它" -"们是否能被社区发现。我们必须管理所有提案的链接列表,并提供给社区使用。与作为 " -"Flower 资源库一部分的提案相比,丢失链接的可能性要大得多。" +"谷歌文档允许多线程讨论。但是,由于谷歌文档是在项目之外托管的,因此需要注意它们是否能被社区发现。我们必须管理所有提案的链接列表,并提供给社区使用。与作为" +" Flower 资源库一部分的提案相比,丢失链接的可能性要大得多。" #: ../../source/fed/index.md:1 msgid "FED - Flower Enhancement Doc" @@ -4654,10 +4046,9 @@ msgstr "整合评估结果" #: ../../source/how-to-aggregate-evaluation-results.rst:4 msgid "" -"The Flower server does not prescribe a way to aggregate evaluation results, " -"but it enables the user to fully customize result aggregation." -msgstr "" -"Flower 服务器没有规定整合评估结果的方法,但用户可以完全自定义如何整合。" +"The Flower server does not prescribe a way to aggregate evaluation " +"results, but it enables the user to fully customize result aggregation." +msgstr "Flower 服务器没有规定整合评估结果的方法,但用户可以完全自定义如何整合。" #: ../../source/how-to-aggregate-evaluation-results.rst:8 msgid "Aggregate Custom Evaluation Results" @@ -4665,12 +4056,10 @@ msgstr "自定义整合评估结果" #: ../../source/how-to-aggregate-evaluation-results.rst:10 msgid "" -"The same :code:`Strategy`-customization approach can be used to aggregate " -"custom evaluation results coming from individual clients. Clients can return " -"custom metrics to the server by returning a dictionary:" -msgstr "" -"同样的 :code:`Strategy` 定制方法也可用于汇总来自单个客户端的自定义评估结果。" -"客户端可以通过返回字典的方式向服务器返回自定义指标:" +"The same :code:`Strategy`-customization approach can be used to aggregate" +" custom evaluation results coming from individual clients. 
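The next entries show clients returning extra metrics and the server aggregating them through a customized strategy. As one hedged way to wire this up, recent Flower releases also accept an ``evaluate_metrics_aggregation_fn`` callback on :code:`FedAvg` (verify the hook name against your Flower version); it receives ``(num_examples, metrics)`` pairs and returns the aggregated metrics:

.. code-block:: python

    from typing import List, Tuple

    import flwr as fl
    from flwr.common import Metrics


    def weighted_average(metrics: List[Tuple[int, Metrics]]) -> Metrics:
        """Aggregate a client-reported "accuracy" metric, weighted by number of examples."""
        accuracies = [num_examples * m["accuracy"] for num_examples, m in metrics]
        examples = [num_examples for num_examples, _ in metrics]
        return {"accuracy": sum(accuracies) / sum(examples)}


    strategy = fl.server.strategy.FedAvg(evaluate_metrics_aggregation_fn=weighted_average)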
Clients can " +"return custom metrics to the server by returning a dictionary:" +msgstr "同样的 :code:`Strategy` 定制方法也可用于汇总来自单个客户端的自定义评估结果。客户端可以通过返回字典的方式向服务器返回自定义指标:" #: ../../source/how-to-aggregate-evaluation-results.rst:36 msgid "" @@ -4678,6 +4067,165 @@ msgid "" "provided in these dictionaries:" msgstr "然后,服务器可以使用定制的策略来汇总这些字典中提供的指标:" +#: ../../source/how-to-authenticate-supernodes.rst:2 +msgid "Authenticate SuperNodes" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:4 +msgid "" +"Flower has built-in support for authenticated SuperNodes that you can use" +" to verify the identities of each SuperNode connecting to a SuperLink. " +"Flower node authentication works similar to how GitHub SSH authentication" +" works:" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:7 +msgid "SuperLink (server) stores a list of known (client) node public keys" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:8 +msgid "" +"Using ECDH, both SuperNode and SuperLink independently derive a shared " +"secret" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:9 +msgid "" +"Shared secret is used to compute the HMAC value of the message sent from " +"SuperNode to SuperLink as a token" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:10 +msgid "SuperLink verifies the token" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:12 +#, fuzzy +msgid "" +"We recommend you to check out the complete `code example " +"`_ demonstrating federated learning with Flower in an " +"authenticated setting." +msgstr "" +"请参阅`完整代码示例 " +"`_了解更多信息。" + +#: ../../source/how-to-authenticate-supernodes.rst:15 +msgid "" +"This guide covers a preview feature that might change in future versions " +"of Flower." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:18 +msgid "" +"For increased security, node authentication can only be used when " +"encrypted connections (SSL/TLS) are enabled." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:21 +msgid "Enable node authentication in :code:`SuperLink`" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:23 +msgid "" +"To enable node authentication, first you need to configure SSL/TLS " +"connections to secure the SuperLink<>SuperNode communication. You can " +"find the complete guide `here `_. After configuring secure connections, you" +" can enable client authentication in a long-running Flower " +":code:`SuperLink`. Use the following terminal command to start a Flower " +":code:`SuperNode` that has both secure connections and node " +"authentication enabled:" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:36 +msgid "Let's break down the authentication flags:" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:38 +msgid "" +"The first flag :code:`--auth-list-public-keys` expects a path to a CSV " +"file storing all known node public keys. You need to store all known node" +" public keys that are allowed to participate in a federation in one CSV " +"file (:code:`.csv`)." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:40 +msgid "" +"A valid CSV file storing known node public keys should list the keys in " +"OpenSSH format, separated by commas and without any comments. For an " +"example, refer to our code sample, which contains a CSV file with two " +"known node public keys." 
+msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:42 +msgid "" +"The second and third flags :code:`--auth-superlink-private-key` and :code" +":`--auth-superlink-public-key` expect paths to the server's private and " +"public keys. For development purposes, you can generate a private and " +"public key pair using :code:`ssh-keygen -t ecdsa -b 384`." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:45 +msgid "" +"In Flower 1.9, there is no support for dynamically removing, editing, or " +"adding known node public keys to the SuperLink. To change the set of " +"known nodes, you need to shut the server down, edit the CSV file, and " +"start the server again. Support for dynamically changing the set of known" +" nodes is on the roadmap to be released in Flower 1.10 (ETA: June)." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:51 +msgid "Enable node authentication in :code:`SuperNode`" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:53 +msgid "" +"Similar to the long-running Flower server (:code:`SuperLink`), you can " +"easily enable node authentication in the long-running Flower client " +"(:code:`SuperNode`). Use the following terminal command to start an " +"authenticated :code:`SuperNode`:" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:64 +msgid "" +"The :code:`--auth-supernode-private-key` flag expects a path to the " +"node's private key file and the :code:`--auth-supernode-public-key` flag " +"expects a path to the node's public key file. For development purposes, " +"you can generate a private and public key pair using :code:`ssh-keygen -t" +" ecdsa -b 384`." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:68 +msgid "Security notice" +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:70 +msgid "" +"The system's security relies on the credentials of the SuperLink and each" +" SuperNode. Therefore, it is imperative to safeguard and safely store the" +" credentials to avoid security risks such as Public Key Infrastructure " +"(PKI) impersonation attacks. The node authentication mechanism also " +"involves human interaction, so please ensure that all of the " +"communication is done in a secure manner, using trusted communication " +"methods." +msgstr "" + +#: ../../source/how-to-authenticate-supernodes.rst:75 +#: ../../source/how-to-enable-ssl-connections.rst:65 +#: ../../source/how-to-use-built-in-mods.rst:85 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:287 +msgid "Conclusion" +msgstr "总结" + +#: ../../source/how-to-authenticate-supernodes.rst:77 +msgid "" +"You should now have learned how to start a long-running Flower server " +"(:code:`SuperLink`) and client (:code:`SuperNode`) with node " +"authentication enabled. You should also know the significance of the " +"private key and store it safely to minimize security risks." +msgstr "" + #: ../../source/how-to-configure-clients.rst:2 msgid "Configure clients" msgstr "配置客户端" @@ -4685,12 +4233,10 @@ msgstr "配置客户端" #: ../../source/how-to-configure-clients.rst:4 msgid "" "Along with model parameters, Flower can send configuration values to " -"clients. Configuration values can be used for various purposes. They are, " -"for example, a popular way to control client-side hyperparameters from the " -"server." -msgstr "" -"除了模型参数,Flower 还可以向客户端发送配置值。配置值有多种用途。它们是一种从" -"服务器控制客户端超参数的常用方法。" +"clients. Configuration values can be used for various purposes. 
They are," +" for example, a popular way to control client-side hyperparameters from " +"the server." +msgstr "除了模型参数,Flower 还可以向客户端发送配置值。配置值有多种用途。它们是一种从服务器控制客户端超参数的常用方法。" #: ../../source/how-to-configure-clients.rst:7 msgid "Configuration values" @@ -4698,45 +4244,41 @@ msgstr "配置值" #: ../../source/how-to-configure-clients.rst:9 msgid "" -"Configuration values are represented as a dictionary with ``str`` keys and " -"values of type ``bool``, ``bytes``, ``double`` (64-bit precision float), " -"``int``, or ``str`` (or equivalent types in different languages). Here is an " -"example of a configuration dictionary in Python:" +"Configuration values are represented as a dictionary with ``str`` keys " +"and values of type ``bool``, ``bytes``, ``double`` (64-bit precision " +"float), ``int``, or ``str`` (or equivalent types in different languages)." +" Here is an example of a configuration dictionary in Python:" msgstr "" -"配置值以字典的形式表示,字典的键为 ``str``,值的类型为 ``bool``、``bytes``、" -"``double``(64 位精度浮点型)、``int``或 ``str`(或不同语言中的等效类型)。下" -"面是一个 Python 配置字典的示例:" +"配置值以字典的形式表示,字典的键为 ``str``,值的类型为 ``bool``、``bytes``、``double``(64 " +"位精度浮点型)、``int``或 ``str`(或不同语言中的等效类型)。下面是一个 Python 配置字典的示例:" #: ../../source/how-to-configure-clients.rst:20 msgid "" "Flower serializes these configuration dictionaries (or *config dict* for " -"short) to their ProtoBuf representation, transports them to the client using " -"gRPC, and then deserializes them back to Python dictionaries." +"short) to their ProtoBuf representation, transports them to the client " +"using gRPC, and then deserializes them back to Python dictionaries." msgstr "" -"Flower 将这些配置字典(简称 *config dict*)序列化为 ProtoBuf 表示形式,使用 " -"gRPC 将其传输到客户端,然后再反序列化为 Python 字典。" +"Flower 将这些配置字典(简称 *config dict*)序列化为 ProtoBuf 表示形式,使用 gRPC " +"将其传输到客户端,然后再反序列化为 Python 字典。" #: ../../source/how-to-configure-clients.rst:24 msgid "" -"Currently, there is no support for directly sending collection types (e.g., " -"``Set``, ``List``, ``Map``) as values in configuration dictionaries. There " -"are several workarounds to send collections as values by converting them to " -"one of the supported value types (and converting them back on the client-" -"side)." +"Currently, there is no support for directly sending collection types " +"(e.g., ``Set``, ``List``, ``Map``) as values in configuration " +"dictionaries. There are several workarounds to send collections as values" +" by converting them to one of the supported value types (and converting " +"them back on the client-side)." msgstr "" "目前,还不支持在配置字典中直接发送作为值的集合类型(例如,`Set``, `List`, " -"`Map``)。有几种变通方法可将集合转换为支持的值类型之一(并在客户端将其转换" -"回),从而将集合作为值发送。" +"`Map``)。有几种变通方法可将集合转换为支持的值类型之一(并在客户端将其转换回),从而将集合作为值发送。" #: ../../source/how-to-configure-clients.rst:26 msgid "" "One can, for example, convert a list of floating-point numbers to a JSON " -"string, then send the JSON string using the configuration dictionary, and " -"then convert the JSON string back to a list of floating-point numbers on the " -"client." -msgstr "" -"例如,可以将浮点数列表转换为 JSON 字符串,然后使用配置字典发送 JSON 字符串," -"再在客户端将 JSON 字符串转换回浮点数列表。" +"string, then send the JSON string using the configuration dictionary, and" +" then convert the JSON string back to a list of floating-point numbers on" +" the client." 
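A minimal sketch of that JSON round trip, with illustrative key names:

.. code-block:: python

    import json

    # Server side: pack a list of floats into the config dict as a JSON string.
    def fit_config(server_round: int):
        return {
            "current_round": server_round,
            "learning_rates": json.dumps([0.1, 0.01, 0.001]),
        }

    # Client side (e.g. inside NumPyClient.fit): unpack the list again.
    def read_learning_rates(config):
        return json.loads(config["learning_rates"])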
+msgstr "例如,可以将浮点数列表转换为 JSON 字符串,然后使用配置字典发送 JSON 字符串,再在客户端将 JSON 字符串转换回浮点数列表。" #: ../../source/how-to-configure-clients.rst:30 msgid "Configuration through built-in strategies" @@ -4744,65 +4286,57 @@ msgstr "通过内置策略进行配置" #: ../../source/how-to-configure-clients.rst:32 msgid "" -"The easiest way to send configuration values to clients is to use a built-in " -"strategy like :code:`FedAvg`. Built-in strategies support so-called " -"configuration functions. A configuration function is a function that the " -"built-in strategy calls to get the configuration dictionary for the current " -"round. It then forwards the configuration dictionary to all the clients " -"selected during that round." +"The easiest way to send configuration values to clients is to use a " +"built-in strategy like :code:`FedAvg`. Built-in strategies support so-" +"called configuration functions. A configuration function is a function " +"that the built-in strategy calls to get the configuration dictionary for " +"the current round. It then forwards the configuration dictionary to all " +"the clients selected during that round." msgstr "" -"向客户端发送配置值的最简单方法是使用内置策略,如 :code:`FedAvg`。内置策略支持" -"所谓的配置函数。配置函数是内置策略调用的函数,用于获取当前轮的配置字典。然" -"后,它会将配置字典转发给该轮中选择的所有客户端。" +"向客户端发送配置值的最简单方法是使用内置策略,如 " +":code:`FedAvg`。内置策略支持所谓的配置函数。配置函数是内置策略调用的函数,用于获取当前轮的配置字典。然后,它会将配置字典转发给该轮中选择的所有客户端。" #: ../../source/how-to-configure-clients.rst:34 msgid "" "Let's start with a simple example. Imagine we want to send (a) the batch " -"size that the client should use, (b) the current global round of federated " -"learning, and (c) the number of epochs to train on the client-side. Our " -"configuration function could look like this:" -msgstr "" -"让我们从一个简单的例子开始。想象一下,我们想要发送给客户端(a)应该使用的批次" -"大小,(b)当前联邦学习的全局轮次,以及(c)客户端训练的遍历数。我们的配置函" -"数可以是这样的:" +"size that the client should use, (b) the current global round of " +"federated learning, and (c) the number of epochs to train on the client-" +"side. Our configuration function could look like this:" +msgstr "让我们从一个简单的例子开始。想象一下,我们想要发送给客户端(a)应该使用的批次大小,(b)当前联邦学习的全局轮次,以及(c)客户端训练的遍历数。我们的配置函数可以是这样的:" #: ../../source/how-to-configure-clients.rst:47 msgid "" "To make the built-in strategies use this function, we can pass it to " -"``FedAvg`` during initialization using the parameter :code:" -"`on_fit_config_fn`:" -msgstr "" -"为了让内置策略使用这个函数,我们可以在初始化时使用参数 :code:" -"`on_fit_config_fn` 将它传递给 ``FedAvg`` :" +"``FedAvg`` during initialization using the parameter " +":code:`on_fit_config_fn`:" +msgstr "为了让内置策略使用这个函数,我们可以在初始化时使用参数 :code:`on_fit_config_fn` 将它传递给 ``FedAvg`` :" #: ../../source/how-to-configure-clients.rst:56 -msgid "" -"One the client side, we receive the configuration dictionary in ``fit``:" +msgid "One the client side, we receive the configuration dictionary in ``fit``:" msgstr "在客户端,我们在 ``fit`` 中接收配置字典:" #: ../../source/how-to-configure-clients.rst:67 msgid "" "There is also an `on_evaluate_config_fn` to configure evaluation, which " -"works the same way. They are separate functions because one might want to " -"send different configuration values to `evaluate` (for example, to use a " -"different batch size)." +"works the same way. They are separate functions because one might want to" +" send different configuration values to `evaluate` (for example, to use a" +" different batch size)." 
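Because the two hooks are separate, fit and evaluate can receive different values. The sketch below uses hypothetical values: the fit configuration raises the number of local epochs in later rounds (the schedule pattern discussed in the following entry), while the evaluate configuration uses a different batch size.

.. code-block:: python

    import flwr as fl


    def fit_config(server_round: int):
        """Training configuration: more local epochs from round 3 onwards."""
        return {
            "batch_size": 32,
            "current_round": server_round,
            "local_epochs": 1 if server_round < 3 else 2,
        }


    def evaluate_config(server_round: int):
        """Evaluation configuration: a different (larger) batch size."""
        return {"batch_size": 64, "current_round": server_round}


    strategy = fl.server.strategy.FedAvg(
        on_fit_config_fn=fit_config,
        on_evaluate_config_fn=evaluate_config,
    )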
msgstr "" -"还有一个 `on_evaluate_config_fn` 用于配置评估,其工作方式相同。它们是不同的函" -"数,因为可能需要向 `evaluate` 发送不同的配置值(例如,使用不同的批量大小)。" +"还有一个 `on_evaluate_config_fn` 用于配置评估,其工作方式相同。它们是不同的函数,因为可能需要向 `evaluate` " +"发送不同的配置值(例如,使用不同的批量大小)。" #: ../../source/how-to-configure-clients.rst:69 msgid "" -"The built-in strategies call this function every round (that is, every time " -"`Strategy.configure_fit` or `Strategy.configure_evaluate` runs). Calling " -"`on_evaluate_config_fn` every round allows us to vary/change the config dict " -"over consecutive rounds. If we wanted to implement a hyperparameter " -"schedule, for example, to increase the number of local epochs during later " -"rounds, we could do the following:" +"The built-in strategies call this function every round (that is, every " +"time `Strategy.configure_fit` or `Strategy.configure_evaluate` runs). " +"Calling `on_evaluate_config_fn` every round allows us to vary/change the " +"config dict over consecutive rounds. If we wanted to implement a " +"hyperparameter schedule, for example, to increase the number of local " +"epochs during later rounds, we could do the following:" msgstr "" -"内置策略每轮都会调用此函数(即每次运行 `Strategy.configure_fit` 或 `Strategy." -"configure_evaluate` 时)。每轮调用 `on_evaluate_config_fn` 允许我们在连续几轮" -"中改变配置指令。例如,如果我们想实现一个超参数时间表,以增加后几轮的本地遍历" -"次数,我们可以这样做:" +"内置策略每轮都会调用此函数(即每次运行 `Strategy.configure_fit` 或 " +"`Strategy.configure_evaluate` 时)。每轮调用 `on_evaluate_config_fn` " +"允许我们在连续几轮中改变配置指令。例如,如果我们想实现一个超参数时间表,以增加后几轮的本地遍历次数,我们可以这样做:" #: ../../source/how-to-configure-clients.rst:82 msgid "The :code:`FedAvg` strategy will call this function *every round*." @@ -4821,18 +4355,19 @@ msgstr "在某些情况下,有必要向不同的客户端发送不同的配置 #: ../../source/how-to-configure-clients.rst:89 #, fuzzy msgid "" -"This can be achieved by customizing an existing strategy or by :doc:" -"`implementing a custom strategy from scratch `. " -"Here's a nonsensical example that customizes :code:`FedAvg` by adding a " -"custom ``\"hello\": \"world\"`` configuration key/value pair to the config " -"dict of a *single client* (only the first client in the list, the other " -"clients in this round to not receive this \"special\" config value):" +"This can be achieved by customizing an existing strategy or by " +":doc:`implementing a custom strategy from scratch `. Here's a nonsensical example that customizes :code:`FedAvg`" +" by adding a custom ``\"hello\": \"world\"`` configuration key/value pair" +" to the config dict of a *single client* (only the first client in the " +"list, the other clients in this round to not receive this \"special\" " +"config value):" msgstr "" -"这可以通过定制现有策略或 `从头开始实施一个定制策略 `_来实现。下面是一个无厘头的例子," -"`FedAvg`通过在*单个客户端*的配置指令(config dict)中添加自定义的" -"``\"hello\": \"world\"``配置键/值对添加到此的配置 dict 中(仅列表中的第一个" -"客户端,本轮中的其他客户端不会收到此 \"特殊 \"配置值):" +"这可以通过定制现有策略或 `从头开始实施一个定制策略 `_来实现。下面是一个无厘头的例子,`FedAvg`通过在*单个客户端*的配置指令(config " +"dict)中添加自定义的``\"hello\": \"world\"``配置键/值对添加到此的配置 dict " +"中(仅列表中的第一个客户端,本轮中的其他客户端不会收到此 \"特殊 \"配置值):" #: ../../source/how-to-configure-logging.rst:2 msgid "Configure logging" @@ -4841,21 +4376,19 @@ msgstr "配置日志记录" #: ../../source/how-to-configure-logging.rst:4 msgid "" "The Flower logger keeps track of all core events that take place in " -"federated learning workloads. It presents information by default following a " -"standard message format:" -msgstr "" -"Flower 日志记录器会跟踪联邦学习工作负载中发生的所有核心事件。它默认按照标准信" -"息格式提供信息:" +"federated learning workloads. 
It presents information by default " +"following a standard message format:" +msgstr "Flower 日志记录器会跟踪联邦学习工作负载中发生的所有核心事件。它默认按照标准信息格式提供信息:" #: ../../source/how-to-configure-logging.rst:13 msgid "" -"containing relevant information including: log message level (e.g. :code:" -"`INFO`, :code:`DEBUG`), a timestamp, the line where the logging took place " -"from, as well as the log message itself. In this way, the logger would " -"typically display information on your terminal as follows:" +"containing relevant information including: log message level (e.g. " +":code:`INFO`, :code:`DEBUG`), a timestamp, the line where the logging " +"took place from, as well as the log message itself. In this way, the " +"logger would typically display information on your terminal as follows:" msgstr "" -"相关信息包括:日志信息级别(例如 :code:`INFO`、:code:`DEBUG`)、时间戳、日志" -"记录的行以及日志信息本身。这样,日志记录器通常会在终端上显示如下信息:" +"相关信息包括:日志信息级别(例如 " +":code:`INFO`、:code:`DEBUG`)、时间戳、日志记录的行以及日志信息本身。这样,日志记录器通常会在终端上显示如下信息:" #: ../../source/how-to-configure-logging.rst:34 msgid "Saving log to file" @@ -4865,30 +4398,31 @@ msgstr "将日志保存到文件" msgid "" "By default, the Flower log is outputted to the terminal where you launch " "your Federated Learning workload from. This applies for both gRPC-based " -"federation (i.e. when you do :code:`fl.server.start_server`) and when using " -"the :code:`VirtualClientEngine` (i.e. when you do :code:`fl.simulation." -"start_simulation`). In some situations you might want to save this log to " -"disk. You can do so by calling the `fl.common.logger.configure() `_ function. " -"For example:" -msgstr "" -"默认情况下,Flower 日志会输出到启动联邦学习工作负载的终端。这既适用于基于 " -"gRPC 的联邦学习(即执行 :code:`fl.server.start_server` 时),也适用于使用 :" -"code:`VirtualClientEngine` 时(即执行 :code:`fl.simulation.start_simulation` " -"时)。在某些情况下,您可能希望将此日志保存到磁盘。为此,您可以调用 `fl." -"common.logger.configure() `_ 函数。例如:" +"federation (i.e. when you do :code:`fl.server.start_server`) and when " +"using the :code:`VirtualClientEngine` (i.e. when you do " +":code:`fl.simulation.start_simulation`). In some situations you might " +"want to save this log to disk. You can do so by calling the " +"`fl.common.logger.configure() " +"`_" +" function. For example:" +msgstr "" +"默认情况下,Flower 日志会输出到启动联邦学习工作负载的终端。这既适用于基于 gRPC 的联邦学习(即执行 " +":code:`fl.server.start_server` 时),也适用于使用 :code:`VirtualClientEngine` " +"时(即执行 :code:`fl.simulation.start_simulation` " +"时)。在某些情况下,您可能希望将此日志保存到磁盘。为此,您可以调用 `fl.common.logger.configure() " +"`_" +" 函数。例如:" #: ../../source/how-to-configure-logging.rst:53 msgid "" -"With the above, Flower will record the log you see on your terminal to :code:" -"`log.txt`. This file will be created in the same directory as were you are " -"running the code from. If we inspect we see the log above is also recorded " -"but prefixing with :code:`identifier` each line:" +"With the above, Flower will record the log you see on your terminal to " +":code:`log.txt`. This file will be created in the same directory as were " +"you are running the code from. 
If we inspect we see the log above is also" +" recorded but prefixing with :code:`identifier` each line:" msgstr "" -"通过上述操作,Flower 会将您在终端上看到的日志记录到 :code:`log.txt`。该文件将" -"创建在运行代码的同一目录下。如果我们检查一下,就会发现上面的日志也被记录了下" -"来,但每一行都以 :code:`identifier` 作为前缀:" +"通过上述操作,Flower 会将您在终端上看到的日志记录到 " +":code:`log.txt`。该文件将创建在运行代码的同一目录下。如果我们检查一下,就会发现上面的日志也被记录了下来,但每一行都以 " +":code:`identifier` 作为前缀:" #: ../../source/how-to-configure-logging.rst:74 msgid "Log your own messages" @@ -4896,19 +4430,16 @@ msgstr "记录自己的信息" #: ../../source/how-to-configure-logging.rst:76 msgid "" -"You might expand the information shown by default with the Flower logger by " -"adding more messages relevant to your application. You can achieve this " -"easily as follows." -msgstr "" -"您可以通过添加更多与应用程序相关的信息来扩展 Flower 日志记录器默认显示的信" -"息。您可以通过以下方法轻松实现这一目标。" +"You might expand the information shown by default with the Flower logger " +"by adding more messages relevant to your application. You can achieve " +"this easily as follows." +msgstr "您可以通过添加更多与应用程序相关的信息来扩展 Flower 日志记录器默认显示的信息。您可以通过以下方法轻松实现这一目标。" #: ../../source/how-to-configure-logging.rst:102 msgid "" -"In this way your logger will show, in addition to the default messages, the " -"ones introduced by the clients as specified above." -msgstr "" -"这样,除默认信息外,您的日志记录器还将显示由客户引入的信息,如上文所述。" +"In this way your logger will show, in addition to the default messages, " +"the ones introduced by the clients as specified above." +msgstr "这样,除默认信息外,您的日志记录器还将显示由客户引入的信息,如上文所述。" #: ../../source/how-to-configure-logging.rst:128 msgid "Log to a remote service" @@ -4916,167 +4447,149 @@ msgstr "登录远程服务" #: ../../source/how-to-configure-logging.rst:130 msgid "" -"The :code:`fl.common.logger.configure` function, also allows specifying a " -"host to which logs can be pushed (via :code:`POST`) through a native Python :" -"code:`logging.handler.HTTPHandler`. This is a particularly useful feature " -"in :code:`gRPC`-based Federated Learning workloads where otherwise gathering " -"logs from all entities (i.e. the server and the clients) might be " -"cumbersome. Note that in Flower simulation, the server automatically " -"displays all logs. You can still specify a :code:`HTTPHandler` should you " -"wish to backup or analyze the logs somewhere else." -msgstr "" -"此外,:code:`fl.common.logger.configure`函数还允许指定主机,通过本地 Python :" -"code:`logging.handler.HTTPHandler`,向该主机推送日志(通过 :code:`POST`)。在" -"基于 :code:`gRPC` 的联邦学习工作负载中,这是一个特别有用的功能,否则从所有实" -"体(即服务器和客户端)收集日志可能会很麻烦。请注意,在 Flower 模拟器中,服务" -"器会自动显示所有日志。如果希望在其他地方备份或分析日志,仍可指定 :code:" -"`HTTPHandler`。" +"The :code:`fl.common.logger.configure` function, also allows specifying a" +" host to which logs can be pushed (via :code:`POST`) through a native " +"Python :code:`logging.handler.HTTPHandler`. This is a particularly useful" +" feature in :code:`gRPC`-based Federated Learning workloads where " +"otherwise gathering logs from all entities (i.e. the server and the " +"clients) might be cumbersome. Note that in Flower simulation, the server " +"automatically displays all logs. You can still specify a " +":code:`HTTPHandler` should you wish to backup or analyze the logs " +"somewhere else." 
+msgstr "" +"此外,:code:`fl.common.logger.configure`函数还允许指定主机,通过本地 Python " +":code:`logging.handler.HTTPHandler`,向该主机推送日志(通过 :code:`POST`)。在基于 " +":code:`gRPC` 的联邦学习工作负载中,这是一个特别有用的功能,否则从所有实体(即服务器和客户端)收集日志可能会很麻烦。请注意,在 " +"Flower 模拟器中,服务器会自动显示所有日志。如果希望在其他地方备份或分析日志,仍可指定 :code:`HTTPHandler`。" #: ../../source/how-to-enable-ssl-connections.rst:2 msgid "Enable SSL connections" msgstr "启用 SSL 连接" #: ../../source/how-to-enable-ssl-connections.rst:4 +#, fuzzy msgid "" -"This guide describes how to a SSL-enabled secure Flower server can be " -"started and how a Flower client can establish a secure connections to it." -msgstr "" -"本指南介绍如何启动启用 SSL 的安全 Flower 服务器,以及 Flower 客户端如何与其建" -"立安全连接。" +"This guide describes how to a SSL-enabled secure Flower server " +"(:code:`SuperLink`) can be started and how a Flower client " +"(:code:`SuperNode`) can establish a secure connections to it." +msgstr "本指南介绍如何启动启用 SSL 的安全 Flower 服务器,以及 Flower 客户端如何与其建立安全连接。" #: ../../source/how-to-enable-ssl-connections.rst:7 msgid "" -"A complete code example demonstrating a secure connection can be found `here " -"`_." +"A complete code example demonstrating a secure connection can be found " +"`here `_." msgstr "" -"有关安全连接的完整代码示例,请参见 `_ 。" +"有关安全连接的完整代码示例,请参见 `_ 。" #: ../../source/how-to-enable-ssl-connections.rst:10 +#, fuzzy msgid "" -"The code example comes with a README.md file which will explain how to start " -"it. Although it is already SSL-enabled, it might be less descriptive on how. " -"Stick to this guide for a deeper introduction to the topic." -msgstr "" -"代码示例附带的 README.md 文件将解释如何启动它。虽然它已经启用了 SSL,但对如何" -"启用可能描述较少。请参考本指南,了解更深入的相关介绍。" +"The code example comes with a :code:`README.md` file which explains how " +"to start it. Although it is already SSL-enabled, it might be less " +"descriptive on how it does so. Stick to this guide for a deeper " +"introduction to the topic." +msgstr "代码示例附带的 README.md 文件将解释如何启动它。虽然它已经启用了 SSL,但对如何启用可能描述较少。请参考本指南,了解更深入的相关介绍。" #: ../../source/how-to-enable-ssl-connections.rst:16 msgid "Certificates" msgstr "证书" #: ../../source/how-to-enable-ssl-connections.rst:18 +#, fuzzy msgid "" "Using SSL-enabled connections requires certificates to be passed to the " -"server and client. For the purpose of this guide we are going to generate " -"self-signed certificates. As this can become quite complex we are going to " -"ask you to run the script in :code:`examples/advanced-tensorflow/" -"certificates/generate.sh`" +"server and client. For the purpose of this guide we are going to generate" +" self-signed certificates. As this can become quite complex we are going " +"to ask you to run the script in :code:`examples/advanced-" +"tensorflow/certificates/generate.sh` with the following command sequence:" msgstr "" -"使用支持 SSL 的连接需要向服务器和客户端传递证书。在本指南中,我们将生成自签名" -"证书。由于这可能会变得相当复杂,我们将要求你运行 :code:`examples/advanced-" -"tensorflow/certificates/generate.sh` 中的脚本" - -#: ../../source/how-to-enable-ssl-connections.rst:23 -msgid "with the following command sequence:" -msgstr "使用以下命令序列:" +"使用支持 SSL 的连接需要向服务器和客户端传递证书。在本指南中,我们将生成自签名证书。由于这可能会变得相当复杂,我们将要求你运行 " +":code:`examples/advanced-tensorflow/certificates/generate.sh` 中的脚本" -#: ../../source/how-to-enable-ssl-connections.rst:30 +#: ../../source/how-to-enable-ssl-connections.rst:29 msgid "" -"This will generate the certificates in :code:`examples/advanced-tensorflow/." -"cache/certificates`." 
-msgstr "" -"这将在 :code:`examples/advanced-tensorflow/.cache/certificates` 中生成证书。" +"This will generate the certificates in :code:`examples/advanced-" +"tensorflow/.cache/certificates`." +msgstr "这将在 :code:`examples/advanced-tensorflow/.cache/certificates` 中生成证书。" -#: ../../source/how-to-enable-ssl-connections.rst:32 +#: ../../source/how-to-enable-ssl-connections.rst:31 +#, fuzzy msgid "" -"The approach how the SSL certificates are generated in this example can " -"serve as an inspiration and starting point but should not be taken as " -"complete for production environments. Please refer to other sources " -"regarding the issue of correctly generating certificates for production " -"environments." -msgstr "" -"本示例中生成 SSL 证书的方法可作为启发和起点,但不应被视为生产环境的完整方法。" -"有关在生产环境中正确生成证书的问题,请参考其他资料。" +"The approach for generating SSL certificates in the context of this " +"example can serve as an inspiration and starting point, but it should not" +" be used as a reference for production environments. Please refer to " +"other sources regarding the issue of correctly generating certificates " +"for production environments. For non-critical prototyping or research " +"projects, it might be sufficient to use the self-signed certificates " +"generated using the scripts mentioned in this guide." +msgstr "本示例中生成 SSL 证书的方法可作为启发和起点,但不应被视为生产环境的完整方法。有关在生产环境中正确生成证书的问题,请参考其他资料。" -#: ../../source/how-to-enable-ssl-connections.rst:36 -msgid "" -"In case you are a researcher you might be just fine using the self-signed " -"certificates generated using the scripts which are part of this guide." -msgstr "如果你是一名研究人员,使用本指南中的脚本生成的自签名证书就可以了。" +#: ../../source/how-to-enable-ssl-connections.rst:39 +#, fuzzy +msgid "Server (SuperLink)" +msgstr "flower-superlink" #: ../../source/how-to-enable-ssl-connections.rst:41 -#: ../../source/ref-api/flwr.server.Server.rst:2 -msgid "Server" -msgstr "服务器" - -#: ../../source/how-to-enable-ssl-connections.rst:43 +#, fuzzy msgid "" -"We are now going to show how to write a sever which uses the previously " -"generated scripts." -msgstr "现在,我们将展示如何编写一个使用先前生成的脚本的服务器。" +"Use the following terminal command to start a sever (SuperLink) that uses" +" the previously generated certificates:" +msgstr "现在我们将演示如何编写一个客户端,使用之前生成的脚本:" -#: ../../source/how-to-enable-ssl-connections.rst:61 +#: ../../source/how-to-enable-ssl-connections.rst:47 +#, fuzzy msgid "" "When providing certificates, the server expects a tuple of three " -"certificates. :code:`Path` can be used to easily read the contents of those " -"files into byte strings, which is the data type :code:`start_server` expects." -msgstr "" -"在提供证书时,服务器希望得到由三个证书组成的元组。 :code:`Path` 可用于轻松地" -"将这些文件的内容读取为字节字符串,这就是 :code:`start_server` 期望的数据类" -"型。" +"certificates paths: CA certificate, server certificate and server private" +" key." 
+msgstr "要启用 SSL,需要 CA 证书、服务器证书和服务器私钥。" -#: ../../source/how-to-enable-ssl-connections.rst:65 -#: ../../source/how-to-upgrade-to-flower-1.0.rst:37 -#: ../../source/ref-api/flwr.client.Client.rst:2 -msgid "Client" -msgstr "客户端" +#: ../../source/how-to-enable-ssl-connections.rst:51 +#, fuzzy +msgid "Client (SuperNode)" +msgstr "客户端状态代码。" -#: ../../source/how-to-enable-ssl-connections.rst:67 +#: ../../source/how-to-enable-ssl-connections.rst:53 +#, fuzzy msgid "" -"We are now going to show how to write a client which uses the previously " -"generated scripts:" +"Use the following terminal command to start a client (SuperNode) that " +"uses the previously generated certificates:" msgstr "现在我们将演示如何编写一个客户端,使用之前生成的脚本:" -#: ../../source/how-to-enable-ssl-connections.rst:84 +#: ../../source/how-to-enable-ssl-connections.rst:61 +#, fuzzy msgid "" -"When setting :code:`root_certificates`, the client expects the PEM-encoded " -"root certificates as a byte string. We are again using :code:`Path` to " -"simplify reading those as byte strings." +"When setting :code:`root_certificates`, the client expects a file path to" +" PEM-encoded root certificates." msgstr "" -"当设置 :code:`root_certificates` 时,客户端希望 PEM 编码的根证书是字节字符" -"串。我们再次使用 :code:`Path` 来简化以字节字符串形式读取证书的过程。" - -#: ../../source/how-to-enable-ssl-connections.rst:89 -#: ../../source/how-to-use-built-in-mods.rst:85 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:287 -msgid "Conclusion" -msgstr "总结" +"当设置 :code:`root_certificates` 时,客户端希望 PEM 编码的根证书是字节字符串。我们再次使用 " +":code:`Path` 来简化以字节字符串形式读取证书的过程。" -#: ../../source/how-to-enable-ssl-connections.rst:91 +#: ../../source/how-to-enable-ssl-connections.rst:67 +#, fuzzy msgid "" -"You should now have learned how to generate self-signed certificates using " -"the given script, start a SSL-enabled server, and have a client establish a " -"secure connection to it." -msgstr "" -"现在,你应该已经学会了如何使用给定的脚本生成自签名证书、启动启用 SSL 的服务器" -"并让客户端与其建立安全连接。" +"You should now have learned how to generate self-signed certificates " +"using the given script, start an SSL-enabled server and have a client " +"establish a secure connection to it." +msgstr "现在,你应该已经学会了如何使用给定的脚本生成自签名证书、启动启用 SSL 的服务器并让客户端与其建立安全连接。" -#: ../../source/how-to-enable-ssl-connections.rst:96 +#: ../../source/how-to-enable-ssl-connections.rst:72 msgid "Additional resources" msgstr "补充资源" -#: ../../source/how-to-enable-ssl-connections.rst:98 +#: ../../source/how-to-enable-ssl-connections.rst:74 msgid "" -"These additional sources might be relevant if you would like to dive deeper " -"into the topic of certificates:" +"These additional sources might be relevant if you would like to dive " +"deeper into the topic of certificates:" msgstr "如果您想更深入地了解证书主题,这些额外的资料来源可能有帮助:" -#: ../../source/how-to-enable-ssl-connections.rst:100 +#: ../../source/how-to-enable-ssl-connections.rst:76 msgid "`Let's Encrypt `_" msgstr "`让我们加密 `_" -#: ../../source/how-to-enable-ssl-connections.rst:101 +#: ../../source/how-to-enable-ssl-connections.rst:77 msgid "`certbot `_" msgstr "`certbot `_" @@ -5086,16 +4599,15 @@ msgstr "实施策略" #: ../../source/how-to-implement-strategies.rst:4 msgid "" -"The strategy abstraction enables implementation of fully custom strategies. " -"A strategy is basically the federated learning algorithm that runs on the " -"server. Strategies decide how to sample clients, how to configure clients " -"for training, how to aggregate updates, and how to evaluate models. 
Flower " -"provides a few built-in strategies which are based on the same API described " -"below." +"The strategy abstraction enables implementation of fully custom " +"strategies. A strategy is basically the federated learning algorithm that" +" runs on the server. Strategies decide how to sample clients, how to " +"configure clients for training, how to aggregate updates, and how to " +"evaluate models. Flower provides a few built-in strategies which are " +"based on the same API described below." msgstr "" -"策略抽象类可以实现完全定制的策略。策略基本上就是在服务器上运行的联邦学习算" -"法。策略决定如何对客户端进行采样、如何配置客户端进行训练、如何聚合参数更新以" -"及如何评估模型。Flower 提供了一些内置策略,这些策略基于下文所述的相同 API。" +"策略抽象类可以实现完全定制的策略。策略基本上就是在服务器上运行的联邦学习算法。策略决定如何对客户端进行采样、如何配置客户端进行训练、如何聚合参数更新以及如何评估模型。Flower" +" 提供了一些内置策略,这些策略基于下文所述的相同 API。" #: ../../source/how-to-implement-strategies.rst:11 msgid "The :code:`Strategy` abstraction" @@ -5103,13 +4615,14 @@ msgstr ":code:`策略 ` 抽象类" #: ../../source/how-to-implement-strategies.rst:13 msgid "" -"All strategy implementation are derived from the abstract base class :code:" -"`flwr.server.strategy.Strategy`, both built-in implementations and third " -"party implementations. This means that custom strategy implementations have " -"the exact same capabilities at their disposal as built-in ones." +"All strategy implementation are derived from the abstract base class " +":code:`flwr.server.strategy.Strategy`, both built-in implementations and " +"third party implementations. This means that custom strategy " +"implementations have the exact same capabilities at their disposal as " +"built-in ones." msgstr "" -"所有策略实现均源自抽象基类 :code:`flwr.server.strategy.Strategy`,包括内置实" -"现和第三方实现。这意味着自定义策略实现与内置实现具有完全相同的功能。" +"所有策略实现均源自抽象基类 " +":code:`flwr.server.strategy.Strategy`,包括内置实现和第三方实现。这意味着自定义策略实现与内置实现具有完全相同的功能。" #: ../../source/how-to-implement-strategies.rst:18 msgid "" @@ -5119,12 +4632,10 @@ msgstr "策略抽象定义了一些需要实现的抽象方法:" #: ../../source/how-to-implement-strategies.rst:75 msgid "" -"Creating a new strategy means implementing a new :code:`class` (derived from " -"the abstract base class :code:`Strategy`) that implements for the previously " -"shown abstract methods:" -msgstr "" -"创建一个新策略意味着要实现一个新的 :code:`class`(从抽象基类 :code:" -"`Strategy` 派生),该类要实现前面显示的抽象方法:" +"Creating a new strategy means implementing a new :code:`class` (derived " +"from the abstract base class :code:`Strategy`) that implements for the " +"previously shown abstract methods:" +msgstr "创建一个新策略意味着要实现一个新的 :code:`class`(从抽象基类 :code:`Strategy` 派生),该类要实现前面显示的抽象方法:" #: ../../source/how-to-implement-strategies.rst:100 msgid "The Flower server calls these methods in the following order:" @@ -5140,48 +4651,44 @@ msgstr ":code:`初始化参数` 方法" #: ../../source/how-to-implement-strategies.rst:182 msgid "" -":code:`initialize_parameters` is called only once, at the very beginning of " -"an execution. It is responsible for providing the initial global model " -"parameters in a serialized form (i.e., as a :code:`Parameters` object)." +":code:`initialize_parameters` is called only once, at the very beginning " +"of an execution. It is responsible for providing the initial global model" +" parameters in a serialized form (i.e., as a :code:`Parameters` object)." msgstr "" -":code:`initialize_parameters` 只调用一次,即在执行开始时。它负责以序列化形式" -"(即 :code:`Parameters` 对象)提供初始全局模型参数。" +":code:`initialize_parameters` 只调用一次,即在执行开始时。它负责以序列化形式(即 " +":code:`Parameters` 对象)提供初始全局模型参数。" #: ../../source/how-to-implement-strategies.rst:184 msgid "" -"Built-in strategies return user-provided initial parameters. 
The following " -"example shows how initial parameters can be passed to :code:`FedAvg`:" -msgstr "" -"内置策略会返回用户提供的初始参数。下面的示例展示了如何将初始参数传递给 :code:" -"`FedAvg`:" +"Built-in strategies return user-provided initial parameters. The " +"following example shows how initial parameters can be passed to " +":code:`FedAvg`:" +msgstr "内置策略会返回用户提供的初始参数。下面的示例展示了如何将初始参数传递给 :code:`FedAvg`:" #: ../../source/how-to-implement-strategies.rst:209 msgid "" "The Flower server will call :code:`initialize_parameters`, which either " -"returns the parameters that were passed to :code:`initial_parameters`, or :" -"code:`None`. If no parameters are returned from :code:" -"`initialize_parameters` (i.e., :code:`None`), the server will randomly " -"select one client and ask it to provide its parameters. This is a " -"convenience feature and not recommended in practice, but it can be useful " -"for prototyping. In practice, it is recommended to always use server-side " -"parameter initialization." -msgstr "" -"Flower 服务器将调用 :code:`initialize_parameters`,返回传给 :code:" -"`initial_parameters` 的参数或 :code:`None`。如果 :code:" -"`initialize_parameters` 没有返回任何参数(即 :code:`None`),服务器将随机选择" -"一个客户端并要求其提供参数。这只是一个便捷的功能,在实际应用中并不推荐使用," -"但在原型开发中可能很有用。在实践中,建议始终使用服务器端参数初始化。" +"returns the parameters that were passed to :code:`initial_parameters`, or" +" :code:`None`. If no parameters are returned from " +":code:`initialize_parameters` (i.e., :code:`None`), the server will " +"randomly select one client and ask it to provide its parameters. This is " +"a convenience feature and not recommended in practice, but it can be " +"useful for prototyping. In practice, it is recommended to always use " +"server-side parameter initialization." +msgstr "" +"Flower 服务器将调用 :code:`initialize_parameters`,返回传给 " +":code:`initial_parameters` 的参数或 :code:`None`。如果 " +":code:`initialize_parameters` 没有返回任何参数(即 " +":code:`None`),服务器将随机选择一个客户端并要求其提供参数。这只是一个便捷的功能,在实际应用中并不推荐使用,但在原型开发中可能很有用。在实践中,建议始终使用服务器端参数初始化。" #: ../../source/how-to-implement-strategies.rst:213 msgid "" "Server-side parameter initialization is a powerful mechanism. It can be " -"used, for example, to resume training from a previously saved checkpoint. It " -"is also the fundamental capability needed to implement hybrid approaches, " -"for example, to fine-tune a pre-trained model using federated learning." -msgstr "" -"服务器端参数初始化是一种强大的机制。例如,它可以用来从先前保存的检查点恢复训" -"练。它也是实现混合方法所需的基本能力,例如,使用联邦学习对预先训练好的模型进" -"行微调。" +"used, for example, to resume training from a previously saved checkpoint." +" It is also the fundamental capability needed to implement hybrid " +"approaches, for example, to fine-tune a pre-trained model using federated" +" learning." +msgstr "服务器端参数初始化是一种强大的机制。例如,它可以用来从先前保存的检查点恢复训练。它也是实现混合方法所需的基本能力,例如,使用联邦学习对预先训练好的模型进行微调。" #: ../../source/how-to-implement-strategies.rst:216 msgid "The :code:`configure_fit` method" @@ -5189,23 +4696,21 @@ msgstr ":code:`configure_fit`方法" #: ../../source/how-to-implement-strategies.rst:218 msgid "" -":code:`configure_fit` is responsible for configuring the upcoming round of " -"training. What does *configure* mean in this context? Configuring a round " -"means selecting clients and deciding what instructions to send to these " -"clients. The signature of :code:`configure_fit` makes this clear:" +":code:`configure_fit` is responsible for configuring the upcoming round " +"of training. What does *configure* mean in this context? Configuring a " +"round means selecting clients and deciding what instructions to send to " +"these clients. 
The signature of :code:`configure_fit` makes this clear:" msgstr "" -":code:`configure_fit` 负责配置即将开始的一轮训练。*配置*在这里是什么意思?配" -"置一轮训练意味着选择客户并决定向这些客户发送什么指令。:code:`configure_fit` " -"说明了这一点:" +":code:`configure_fit` " +"负责配置即将开始的一轮训练。*配置*在这里是什么意思?配置一轮训练意味着选择客户并决定向这些客户发送什么指令。:code:`configure_fit`" +" 说明了这一点:" #: ../../source/how-to-implement-strategies.rst:231 msgid "" "The return value is a list of tuples, each representing the instructions " -"that will be sent to a particular client. Strategy implementations usually " -"perform the following steps in :code:`configure_fit`:" -msgstr "" -"返回值是一个元组列表,每个元组代表将发送到特定客户端的指令。策略实现通常在 :" -"code:`configure_fit` 中执行以下步骤:" +"that will be sent to a particular client. Strategy implementations " +"usually perform the following steps in :code:`configure_fit`:" +msgstr "返回值是一个元组列表,每个元组代表将发送到特定客户端的指令。策略实现通常在 :code:`configure_fit` 中执行以下步骤:" #: ../../source/how-to-implement-strategies.rst:233 #: ../../source/how-to-implement-strategies.rst:280 @@ -5213,40 +4718,39 @@ msgid "" "Use the :code:`client_manager` to randomly sample all (or a subset of) " "available clients (each represented as a :code:`ClientProxy` object)" msgstr "" -"使用 :code:`client_manager` 随机抽样所有(或部分)可用客户端(每个客户端都表" -"示为 :code:`ClientProxy` 对象)" +"使用 :code:`client_manager` 随机抽样所有(或部分)可用客户端(每个客户端都表示为 :code:`ClientProxy` " +"对象)" #: ../../source/how-to-implement-strategies.rst:234 msgid "" "Pair each :code:`ClientProxy` with the same :code:`FitIns` holding the " "current global model :code:`parameters` and :code:`config` dict" msgstr "" -"将每个 :code:`ClientProxy` 与持有当前全局模型 :code:`parameters` 和 :code:" -"`config` dict 的 :code:`FitIns` 配对" +"将每个 :code:`ClientProxy` 与持有当前全局模型 :code:`parameters` 和 :code:`config` " +"dict 的 :code:`FitIns` 配对" #: ../../source/how-to-implement-strategies.rst:236 #, fuzzy msgid "" "More sophisticated implementations can use :code:`configure_fit` to " -"implement custom client selection logic. A client will only participate in a " -"round if the corresponding :code:`ClientProxy` is included in the list " -"returned from :code:`configure_fit`." +"implement custom client selection logic. A client will only participate " +"in a round if the corresponding :code:`ClientProxy` is included in the " +"list returned from :code:`configure_fit`." msgstr "" -"更复杂的实现可以使用 :code:`configure_fit` 来实现自定义的客户端选择逻辑。只有" -"当相应的 :code:`ClientProxy` 包含在 :code:`configure_fit` 返回的列表中时,客" -"户端才会参与进来。" +"更复杂的实现可以使用 :code:`configure_fit` 来实现自定义的客户端选择逻辑。只有当相应的 " +":code:`ClientProxy` 包含在 :code:`configure_fit` 返回的列表中时,客户端才会参与进来。" #: ../../source/how-to-implement-strategies.rst:240 msgid "" "The structure of this return value provides a lot of flexibility to the " "user. Since instructions are defined on a per-client basis, different " -"instructions can be sent to each client. This enables custom strategies to " -"train, for example, different models on different clients, or use different " -"hyperparameters on different clients (via the :code:`config` dict)." +"instructions can be sent to each client. This enables custom strategies " +"to train, for example, different models on different clients, or use " +"different hyperparameters on different clients (via the :code:`config` " +"dict)." 
msgstr "" -"该返回值的结构为用户提供了很大的灵活性。由于指令是按客户端定义的,因此可以向" -"每个客户端发送不同的指令。这使得自定义策略成为可能,例如在不同的客户端上训练" -"不同的模型,或在不同的客户端上使用不同的超参数(通过 :code:`config` dict)。" +"该返回值的结构为用户提供了很大的灵活性。由于指令是按客户端定义的,因此可以向每个客户端发送不同的指令。这使得自定义策略成为可能,例如在不同的客户端上训练不同的模型,或在不同的客户端上使用不同的超参数(通过" +" :code:`config` dict)。" #: ../../source/how-to-implement-strategies.rst:243 msgid "The :code:`aggregate_fit` method" @@ -5254,33 +4758,31 @@ msgstr ":code:`aggregate_fit` 方法" #: ../../source/how-to-implement-strategies.rst:245 msgid "" -":code:`aggregate_fit` is responsible for aggregating the results returned by " -"the clients that were selected and asked to train in :code:`configure_fit`." -msgstr "" -":code:`aggregate_fit` 负责汇总在 :code:`configure_fit` 中选择并要求训练的客户" -"端所返回的结果。" +":code:`aggregate_fit` is responsible for aggregating the results returned" +" by the clients that were selected and asked to train in " +":code:`configure_fit`." +msgstr ":code:`aggregate_fit` 负责汇总在 :code:`configure_fit` 中选择并要求训练的客户端所返回的结果。" #: ../../source/how-to-implement-strategies.rst:258 msgid "" "Of course, failures can happen, so there is no guarantee that the server " -"will get results from all the clients it sent instructions to (via :code:" -"`configure_fit`). :code:`aggregate_fit` therefore receives a list of :code:" -"`results`, but also a list of :code:`failures`." +"will get results from all the clients it sent instructions to (via " +":code:`configure_fit`). :code:`aggregate_fit` therefore receives a list " +"of :code:`results`, but also a list of :code:`failures`." msgstr "" -"当然,失败是有可能发生的,因此无法保证服务器会从它发送指令(通过 :code:" -"`configure_fit`)的所有客户端获得结果。因此 :code:`aggregate_fit` 会收到 :" -"code:`results` 的列表,但也会收到 :code:`failures` 的列表。" +"当然,失败是有可能发生的,因此无法保证服务器会从它发送指令(通过 :code:`configure_fit`)的所有客户端获得结果。因此 " +":code:`aggregate_fit` 会收到 :code:`results` 的列表,但也会收到 :code:`failures` 的列表。" #: ../../source/how-to-implement-strategies.rst:260 msgid "" -":code:`aggregate_fit` returns an optional :code:`Parameters` object and a " -"dictionary of aggregated metrics. The :code:`Parameters` return value is " -"optional because :code:`aggregate_fit` might decide that the results " +":code:`aggregate_fit` returns an optional :code:`Parameters` object and a" +" dictionary of aggregated metrics. The :code:`Parameters` return value is" +" optional because :code:`aggregate_fit` might decide that the results " "provided are not sufficient for aggregation (e.g., too many failures)." msgstr "" -":code:`aggregate_fit` 返回一个可选的 :code:`Parameters` 对象和一个聚合度量的" -"字典。:code:`Parameters` 返回值是可选的,因为 :code:`aggregate_fit` 可能会认" -"为所提供的结果不足以进行聚合(例如,失败次数过多)。" +":code:`aggregate_fit` 返回一个可选的 :code:`Parameters` " +"对象和一个聚合度量的字典。:code:`Parameters` 返回值是可选的,因为 :code:`aggregate_fit` " +"可能会认为所提供的结果不足以进行聚合(例如,失败次数过多)。" #: ../../source/how-to-implement-strategies.rst:263 msgid "The :code:`configure_evaluate` method" @@ -5288,55 +4790,53 @@ msgstr ":code:`configure_evaluate`方法" #: ../../source/how-to-implement-strategies.rst:265 msgid "" -":code:`configure_evaluate` is responsible for configuring the upcoming round " -"of evaluation. What does *configure* mean in this context? Configuring a " -"round means selecting clients and deciding what instructions to send to " -"these clients. The signature of :code:`configure_evaluate` makes this clear:" +":code:`configure_evaluate` is responsible for configuring the upcoming " +"round of evaluation. What does *configure* mean in this context? " +"Configuring a round means selecting clients and deciding what " +"instructions to send to these clients. 
The signature of " +":code:`configure_evaluate` makes this clear:" msgstr "" -":code:`configure_evaluate` 负责配置下一轮评估。*配置*在这里是什么意思?配置一" -"轮评估意味着选择客户端并决定向这些客户端发送什么指令。:code:" -"`configure_evaluate` 说明了这一点:" +":code:`configure_evaluate` " +"负责配置下一轮评估。*配置*在这里是什么意思?配置一轮评估意味着选择客户端并决定向这些客户端发送什么指令。:code:`configure_evaluate`" +" 说明了这一点:" #: ../../source/how-to-implement-strategies.rst:278 msgid "" "The return value is a list of tuples, each representing the instructions " -"that will be sent to a particular client. Strategy implementations usually " -"perform the following steps in :code:`configure_evaluate`:" -msgstr "" -"返回值是一个元组列表,每个元组代表将发送到特定客户端的指令。策略实现通常在 :" -"code:`configure_evaluate` 中执行以下步骤:" +"that will be sent to a particular client. Strategy implementations " +"usually perform the following steps in :code:`configure_evaluate`:" +msgstr "返回值是一个元组列表,每个元组代表将发送到特定客户端的指令。策略实现通常在 :code:`configure_evaluate` 中执行以下步骤:" #: ../../source/how-to-implement-strategies.rst:281 msgid "" -"Pair each :code:`ClientProxy` with the same :code:`EvaluateIns` holding the " -"current global model :code:`parameters` and :code:`config` dict" +"Pair each :code:`ClientProxy` with the same :code:`EvaluateIns` holding " +"the current global model :code:`parameters` and :code:`config` dict" msgstr "" -"将每个 :code:`ClientProxy` 与持有当前全局模型 :code:`parameters` 和 :code:" -"`config` dict 的 :code:`EvaluateIns` 配对" +"将每个 :code:`ClientProxy` 与持有当前全局模型 :code:`parameters` 和 :code:`config` " +"dict 的 :code:`EvaluateIns` 配对" #: ../../source/how-to-implement-strategies.rst:283 #, fuzzy msgid "" "More sophisticated implementations can use :code:`configure_evaluate` to " -"implement custom client selection logic. A client will only participate in a " -"round if the corresponding :code:`ClientProxy` is included in the list " -"returned from :code:`configure_evaluate`." +"implement custom client selection logic. A client will only participate " +"in a round if the corresponding :code:`ClientProxy` is included in the " +"list returned from :code:`configure_evaluate`." msgstr "" -"更复杂的实现可以使用 :code:`configure_evaluate` 来实现自定义的客户端选择逻" -"辑。只有当相应的 :code:`ClientProxy` 包含在 :code:`configure_evaluate` 返回的" -"列表中时,客户端才会参与进来。" +"更复杂的实现可以使用 :code:`configure_evaluate` 来实现自定义的客户端选择逻辑。只有当相应的 " +":code:`ClientProxy` 包含在 :code:`configure_evaluate` 返回的列表中时,客户端才会参与进来。" #: ../../source/how-to-implement-strategies.rst:287 msgid "" "The structure of this return value provides a lot of flexibility to the " "user. Since instructions are defined on a per-client basis, different " -"instructions can be sent to each client. This enables custom strategies to " -"evaluate, for example, different models on different clients, or use " -"different hyperparameters on different clients (via the :code:`config` dict)." +"instructions can be sent to each client. This enables custom strategies " +"to evaluate, for example, different models on different clients, or use " +"different hyperparameters on different clients (via the :code:`config` " +"dict)." 
msgstr "" -"该返回值的结构为用户提供了很大的灵活性。由于指令是按客户端定义的,因此可以向" -"每个客户端发送不同的指令。这使得自定义策略可以在不同客户端上评估不同的模型," -"或在不同客户端上使用不同的超参数(通过 :code:`config` dict)。" +"该返回值的结构为用户提供了很大的灵活性。由于指令是按客户端定义的,因此可以向每个客户端发送不同的指令。这使得自定义策略可以在不同客户端上评估不同的模型,或在不同客户端上使用不同的超参数(通过" +" :code:`config` dict)。" #: ../../source/how-to-implement-strategies.rst:291 msgid "The :code:`aggregate_evaluate` method" @@ -5345,34 +4845,33 @@ msgstr ":code:`aggregate_evaluate` 方法" #: ../../source/how-to-implement-strategies.rst:293 msgid "" ":code:`aggregate_evaluate` is responsible for aggregating the results " -"returned by the clients that were selected and asked to evaluate in :code:" -"`configure_evaluate`." +"returned by the clients that were selected and asked to evaluate in " +":code:`configure_evaluate`." msgstr "" -":code:`aggregate_evaluate` 负责汇总在 :code:`configure_evaluate` 中选择并要求" -"评估的客户端返回的结果。" +":code:`aggregate_evaluate` 负责汇总在 :code:`configure_evaluate` " +"中选择并要求评估的客户端返回的结果。" #: ../../source/how-to-implement-strategies.rst:306 msgid "" "Of course, failures can happen, so there is no guarantee that the server " -"will get results from all the clients it sent instructions to (via :code:" -"`configure_evaluate`). :code:`aggregate_evaluate` therefore receives a list " -"of :code:`results`, but also a list of :code:`failures`." +"will get results from all the clients it sent instructions to (via " +":code:`configure_evaluate`). :code:`aggregate_evaluate` therefore " +"receives a list of :code:`results`, but also a list of :code:`failures`." msgstr "" -"当然,失败是有可能发生的,因此无法保证服务器会从它发送指令(通过 :code:" -"`configure_evaluate`)的所有客户端获得结果。因此, :code:" -"`aggregate_evaluate` 会接收 :code:`results` 的列表,但也会接收 :code:" -"`failures` 的列表。" +"当然,失败是有可能发生的,因此无法保证服务器会从它发送指令(通过 " +":code:`configure_evaluate`)的所有客户端获得结果。因此, :code:`aggregate_evaluate` 会接收 " +":code:`results` 的列表,但也会接收 :code:`failures` 的列表。" #: ../../source/how-to-implement-strategies.rst:308 msgid "" -":code:`aggregate_evaluate` returns an optional :code:`float` (loss) and a " -"dictionary of aggregated metrics. The :code:`float` return value is optional " -"because :code:`aggregate_evaluate` might decide that the results provided " -"are not sufficient for aggregation (e.g., too many failures)." +":code:`aggregate_evaluate` returns an optional :code:`float` (loss) and a" +" dictionary of aggregated metrics. The :code:`float` return value is " +"optional because :code:`aggregate_evaluate` might decide that the results" +" provided are not sufficient for aggregation (e.g., too many failures)." msgstr "" -":code:`aggregate_evaluate` 返回一个可选的 :code:`float`(损失值)和一个聚合指" -"标字典。:code:`float` 返回值是可选的,因为 :code:`aggregate_evaluate` 可能会" -"认为所提供的结果不足以进行聚合(例如,失败次数过多)。" +":code:`aggregate_evaluate` 返回一个可选的 " +":code:`float`(损失值)和一个聚合指标字典。:code:`float` 返回值是可选的,因为 " +":code:`aggregate_evaluate` 可能会认为所提供的结果不足以进行聚合(例如,失败次数过多)。" #: ../../source/how-to-implement-strategies.rst:311 msgid "The :code:`evaluate` method" @@ -5381,24 +4880,23 @@ msgstr ":code:`evaluate`方法" #: ../../source/how-to-implement-strategies.rst:313 msgid "" ":code:`evaluate` is responsible for evaluating model parameters on the " -"server-side. Having :code:`evaluate` in addition to :code:" -"`configure_evaluate`/:code:`aggregate_evaluate` enables strategies to " -"perform both servers-side and client-side (federated) evaluation." +"server-side. Having :code:`evaluate` in addition to " +":code:`configure_evaluate`/:code:`aggregate_evaluate` enables strategies " +"to perform both servers-side and client-side (federated) evaluation." 
msgstr "" -":code:`evaluate` 负责在服务器端评估模型参数。除了 :code:" -"`configure_evaluate`/:code:`aggregate_evaluate` 之外,:code:`evaluate` 可以使" -"策略同时执行服务器端和客户端(联邦)评估。" +":code:`evaluate` 负责在服务器端评估模型参数。除了 " +":code:`configure_evaluate`/:code:`aggregate_evaluate` 之外,:code:`evaluate`" +" 可以使策略同时执行服务器端和客户端(联邦)评估。" #: ../../source/how-to-implement-strategies.rst:323 msgid "" -"The return value is again optional because the strategy might not need to " -"implement server-side evaluation or because the user-defined :code:" -"`evaluate` method might not complete successfully (e.g., it might fail to " -"load the server-side evaluation data)." +"The return value is again optional because the strategy might not need to" +" implement server-side evaluation or because the user-defined " +":code:`evaluate` method might not complete successfully (e.g., it might " +"fail to load the server-side evaluation data)." msgstr "" -"返回值也是可选的,因为策略可能不需要执行服务器端评估,或者因为用户定义的 :" -"code:`evaluate` 方法可能无法成功完成(例如,它可能无法加载服务器端评估数" -"据)。" +"返回值也是可选的,因为策略可能不需要执行服务器端评估,或者因为用户定义的 :code:`evaluate` " +"方法可能无法成功完成(例如,它可能无法加载服务器端评估数据)。" #: ../../source/how-to-install-flower.rst:2 msgid "Install Flower" @@ -5413,13 +4911,15 @@ msgid "Install stable release" msgstr "安装稳定版" #: ../../source/how-to-install-flower.rst:15 +#: ../../source/how-to-upgrade-to-flower-next.rst:46 #, fuzzy msgid "Using pip" msgstr "使用 pip" #: ../../source/how-to-install-flower.rst:17 msgid "" -"Stable releases are available on `PyPI `_::" +"Stable releases are available on `PyPI " +"`_::" msgstr "稳定版本可在 `PyPI `_::" #: ../../source/how-to-install-flower.rst:21 @@ -5441,15 +4941,15 @@ msgstr "Flower 也可以从 ``conda-forge`` 频道安装。" #: ../../source/how-to-install-flower.rst:31 #, fuzzy msgid "" -"If you have not added ``conda-forge`` to your channels, you will first need " -"to run the following::" +"If you have not added ``conda-forge`` to your channels, you will first " +"need to run the following::" msgstr "如果您尚未在频道中添加 ``conda-forge``,则首先需要运行以下程序::" #: ../../source/how-to-install-flower.rst:36 #, fuzzy msgid "" -"Once the ``conda-forge`` channel has been enabled, ``flwr`` can be installed " -"with ``conda``::" +"Once the ``conda-forge`` channel has been enabled, ``flwr`` can be " +"installed with ``conda``::" msgstr "一旦启用了 ``conda-forge`` 频道,就可以使用 ``conda``: 安装 ``flwr``:" #: ../../source/how-to-install-flower.rst:40 @@ -5465,11 +4965,9 @@ msgstr "验证安装" #, fuzzy msgid "" "The following command can be used to verify if Flower was successfully " -"installed. If everything worked, it should print the version of Flower to " -"the command line::" -msgstr "" -"可以使用以下命令来验证 Flower 是否安装成功。如果一切正常,它将在命令行中打印 " -"Flower 的版本::" +"installed. 
If everything worked, it should print the version of Flower to" +" the command line::" +msgstr "可以使用以下命令来验证 Flower 是否安装成功。如果一切正常,它将在命令行中打印 Flower 的版本::" #: ../../source/how-to-install-flower.rst:55 msgid "Advanced installation options" @@ -5493,19 +4991,16 @@ msgstr "安装预发布版本" #: ../../source/how-to-install-flower.rst:65 msgid "" -"New (possibly unstable) versions of Flower are sometimes available as pre-" -"release versions (alpha, beta, release candidate) before the stable release " -"happens::" -msgstr "" -"在稳定版发布之前,Flower 的新版本(可能是不稳定版)有时会作为预发布版本" -"(alpha、beta、候选发布版本)提供::" +"New (possibly unstable) versions of Flower are sometimes available as " +"pre-release versions (alpha, beta, release candidate) before the stable " +"release happens::" +msgstr "在稳定版发布之前,Flower 的新版本(可能是不稳定版)有时会作为预发布版本(alpha、beta、候选发布版本)提供::" #: ../../source/how-to-install-flower.rst:69 msgid "" -"For simulations that use the Virtual Client Engine, ``flwr`` pre-releases " -"should be installed with the ``simulation`` extra::" -msgstr "" -"对于使用虚拟客户端引擎的模拟,`flwr``预发行版应与`simulation``一起安装:" +"For simulations that use the Virtual Client Engine, ``flwr`` pre-releases" +" should be installed with the ``simulation`` extra::" +msgstr "对于使用虚拟客户端引擎的模拟,`flwr``预发行版应与`simulation``一起安装:" #: ../../source/how-to-install-flower.rst:74 msgid "Install nightly release" @@ -5513,14 +5008,14 @@ msgstr "安装隔夜版本" #: ../../source/how-to-install-flower.rst:76 msgid "" -"The latest (potentially unstable) changes in Flower are available as nightly " -"releases::" +"The latest (potentially unstable) changes in Flower are available as " +"nightly releases::" msgstr "Flower 中最新(可能不稳定)的更改以隔夜发布的形式提供::" #: ../../source/how-to-install-flower.rst:80 msgid "" -"For simulations that use the Virtual Client Engine, ``flwr-nightly`` should " -"be installed with the ``simulation`` extra::" +"For simulations that use the Virtual Client Engine, ``flwr-nightly`` " +"should be installed with the ``simulation`` extra::" msgstr "对于使用虚拟客户端引擎的模拟,`flwr-nightly`应与`simulation`一起安装:" #: ../../source/how-to-monitor-simulation.rst:2 @@ -5529,23 +5024,20 @@ msgstr "监控模拟" #: ../../source/how-to-monitor-simulation.rst:4 msgid "" -"Flower allows you to monitor system resources while running your simulation. " -"Moreover, the Flower simulation engine is powerful and enables you to decide " -"how to allocate resources per client manner and constrain the total usage. " -"Insights from resource consumption can help you make smarter decisions and " -"speed up the execution time." +"Flower allows you to monitor system resources while running your " +"simulation. Moreover, the Flower simulation engine is powerful and " +"enables you to decide how to allocate resources per client manner and " +"constrain the total usage. Insights from resource consumption can help " +"you make smarter decisions and speed up the execution time." msgstr "" -"Flower 允许您在运行模拟时监控系统资源。此外,Flower 仿真引擎功能强大,能让您" -"决定如何按客户端方式分配资源并限制总使用量。从资源消耗中获得的观察可以帮助您" -"做出更明智的决策,并加快执行时间。" +"Flower 允许您在运行模拟时监控系统资源。此外,Flower " +"仿真引擎功能强大,能让您决定如何按客户端方式分配资源并限制总使用量。从资源消耗中获得的观察可以帮助您做出更明智的决策,并加快执行时间。" #: ../../source/how-to-monitor-simulation.rst:6 msgid "" -"The specific instructions assume you are using macOS and have the `Homebrew " -"`_ package manager installed." -msgstr "" -"具体说明假定你使用的是 macOS,并且安装了 `Homebrew `_ 软件" -"包管理器。" +"The specific instructions assume you are using macOS and have the " +"`Homebrew `_ package manager installed." 
+msgstr "具体说明假定你使用的是 macOS,并且安装了 `Homebrew `_ 软件包管理器。" #: ../../source/how-to-monitor-simulation.rst:10 msgid "Downloads" @@ -5553,14 +5045,14 @@ msgstr "下载" #: ../../source/how-to-monitor-simulation.rst:16 msgid "" -"`Prometheus `_ is used for data collection, while " -"`Grafana `_ will enable you to visualize the collected " -"data. They are both well integrated with `Ray `_ which " -"Flower uses under the hood." +"`Prometheus `_ is used for data collection, while" +" `Grafana `_ will enable you to visualize the " +"collected data. They are both well integrated with `Ray " +"`_ which Flower uses under the hood." msgstr "" -"`Prometheus `_ 用于收集数据,而 `Grafana `_ 则能让你将收集到的数据可视化。它们都与 Flower 在引擎下使用的 " -"`Ray `_ 紧密集成。" +"`Prometheus `_ 用于收集数据,而 `Grafana " +"`_ 则能让你将收集到的数据可视化。它们都与 Flower 在引擎下使用的 `Ray " +"`_ 紧密集成。" #: ../../source/how-to-monitor-simulation.rst:18 msgid "" @@ -5578,26 +5070,23 @@ msgstr "在上一代英特尔 Mac 设备上,应该是这样:" #: ../../source/how-to-monitor-simulation.rst:34 msgid "" -"Open the respective configuration files and change them. Depending on your " -"device, use one of the two following commands:" +"Open the respective configuration files and change them. Depending on " +"your device, use one of the two following commands:" msgstr "打开相应的配置文件并修改它们。根据设备情况,使用以下两个命令之一:" #: ../../source/how-to-monitor-simulation.rst:44 msgid "" -"and then delete all the text in the file and paste a new Prometheus config " -"you see below. You may adjust the time intervals to your requirements:" -msgstr "" -"然后删除文件中的所有文本,粘贴一个新的 Prometheus 配置文件,如下所示。您可以" -"根据需要调整时间间隔:" +"and then delete all the text in the file and paste a new Prometheus " +"config you see below. You may adjust the time intervals to your " +"requirements:" +msgstr "然后删除文件中的所有文本,粘贴一个新的 Prometheus 配置文件,如下所示。您可以根据需要调整时间间隔:" #: ../../source/how-to-monitor-simulation.rst:59 msgid "" -"Now after you have edited the Prometheus configuration, do the same with the " -"Grafana configuration files. Open those using one of the following commands " -"as before:" -msgstr "" -"编辑完 Prometheus 配置后,请对 Grafana 配置文件执行同样的操作。与之前一样,使" -"用以下命令之一打开这些文件:" +"Now after you have edited the Prometheus configuration, do the same with " +"the Grafana configuration files. Open those using one of the following " +"commands as before:" +msgstr "编辑完 Prometheus 配置后,请对 Grafana 配置文件执行同样的操作。与之前一样,使用以下命令之一打开这些文件:" #: ../../source/how-to-monitor-simulation.rst:69 msgid "" @@ -5607,8 +5096,8 @@ msgstr "您的终端编辑器应该会打开,并允许您像之前一样应用 #: ../../source/how-to-monitor-simulation.rst:84 msgid "" -"Congratulations, you just downloaded all the necessary software needed for " -"metrics tracking. Now, let’s start it." +"Congratulations, you just downloaded all the necessary software needed " +"for metrics tracking. Now, let’s start it." msgstr "恭喜您,您刚刚下载了指标跟踪所需的所有软件。现在,让我们开始吧。" #: ../../source/how-to-monitor-simulation.rst:88 @@ -5623,8 +5112,8 @@ msgstr "在运行 Flower 模拟之前,您必须启动刚刚安装和配置的 #: ../../source/how-to-monitor-simulation.rst:97 msgid "" -"Please include the following argument in your Python code when starting a " -"simulation." +"Please include the following argument in your Python code when starting a" +" simulation." 
msgstr "开始模拟时,请在 Python 代码中加入以下参数。" #: ../../source/how-to-monitor-simulation.rst:108 @@ -5633,8 +5122,8 @@ msgstr "现在,您可以开始工作了。" #: ../../source/how-to-monitor-simulation.rst:110 msgid "" -"Shortly after the simulation starts, you should see the following logs in " -"your terminal:" +"Shortly after the simulation starts, you should see the following logs in" +" your terminal:" msgstr "模拟启动后不久,您就会在终端中看到以下日志:" #: ../../source/how-to-monitor-simulation.rst:117 @@ -5643,31 +5132,28 @@ msgstr "您可以在 ``_ 查看所有内容。" #: ../../source/how-to-monitor-simulation.rst:119 msgid "" -"It's a Ray Dashboard. You can navigate to Metrics (on the left panel, the " -"lowest option)." -msgstr "" -"这是一个 Ray Dashboard。您可以导航到 \"度量标准\"(左侧面板,最低选项)。" +"It's a Ray Dashboard. You can navigate to Metrics (on the left panel, the" +" lowest option)." +msgstr "这是一个 Ray Dashboard。您可以导航到 \"度量标准\"(左侧面板,最低选项)。" #: ../../source/how-to-monitor-simulation.rst:121 msgid "" -"Or alternatively, you can just see them in Grafana by clicking on the right-" -"up corner, “View in Grafana”. Please note that the Ray dashboard is only " -"accessible during the simulation. After the simulation ends, you can only " -"use Grafana to explore the metrics. You can start Grafana by going to " -"``http://localhost:3000/``." +"Or alternatively, you can just see them in Grafana by clicking on the " +"right-up corner, “View in Grafana”. Please note that the Ray dashboard is" +" only accessible during the simulation. After the simulation ends, you " +"can only use Grafana to explore the metrics. You can start Grafana by " +"going to ``http://localhost:3000/``." msgstr "" -"或者,您也可以点击右上角的 \"在 Grafana 中查看\",在 Grafana 中查看它们。请注" -"意,Ray 仪表盘只能在模拟期间访问。模拟结束后,您只能使用 Grafana 浏览指标。您" -"可以访问 ``http://localhost:3000/``启动 Grafana。" +"或者,您也可以点击右上角的 \"在 Grafana 中查看\",在 Grafana 中查看它们。请注意,Ray " +"仪表盘只能在模拟期间访问。模拟结束后,您只能使用 Grafana 浏览指标。您可以访问 ``http://localhost:3000/``启动 " +"Grafana。" #: ../../source/how-to-monitor-simulation.rst:123 msgid "" "After you finish the visualization, stop Prometheus and Grafana. This is " "important as they will otherwise block, for example port :code:`3000` on " "your machine as long as they are running." -msgstr "" -"完成可视化后,请停止 Prometheus 和 Grafana。这一点很重要,否则只要它们在运" -"行,就会阻塞机器上的端口 :code:`3000`。" +msgstr "完成可视化后,请停止 Prometheus 和 Grafana。这一点很重要,否则只要它们在运行,就会阻塞机器上的端口 :code:`3000`。" #: ../../source/how-to-monitor-simulation.rst:132 msgid "Resource allocation" @@ -5675,24 +5161,21 @@ msgstr "资源分配" #: ../../source/how-to-monitor-simulation.rst:134 msgid "" -"You must understand how the Ray library works to efficiently allocate system " -"resources to simulation clients on your own." -msgstr "" -"您必须了解 Ray 库是如何工作的,才能有效地为自己的仿真客户端分配系统资源。" +"You must understand how the Ray library works to efficiently allocate " +"system resources to simulation clients on your own." +msgstr "您必须了解 Ray 库是如何工作的,才能有效地为自己的仿真客户端分配系统资源。" #: ../../source/how-to-monitor-simulation.rst:136 msgid "" "Initially, the simulation (which Ray handles under the hood) starts by " "default with all the available resources on the system, which it shares " -"among the clients. It doesn't mean it divides it equally among all of them, " -"nor that the model training happens at all of them simultaneously. You will " -"learn more about that in the later part of this blog. You can check the " -"system resources by running the following:" +"among the clients. It doesn't mean it divides it equally among all of " +"them, nor that the model training happens at all of them simultaneously. 
" +"You will learn more about that in the later part of this blog. You can " +"check the system resources by running the following:" msgstr "" -"最初,模拟(由 Ray 在引擎下处理)默认使用系统上的所有可用资源启动,并在客户端" -"之间共享。但这并不意味着它会将资源平均分配给所有客户端,也不意味着模型训练会" -"在所有客户端同时进行。您将在本博客的后半部分了解到更多相关信息。您可以运行以" -"下命令检查系统资源:" +"最初,模拟(由 Ray " +"在引擎下处理)默认使用系统上的所有可用资源启动,并在客户端之间共享。但这并不意味着它会将资源平均分配给所有客户端,也不意味着模型训练会在所有客户端同时进行。您将在本博客的后半部分了解到更多相关信息。您可以运行以下命令检查系统资源:" #: ../../source/how-to-monitor-simulation.rst:143 msgid "In Google Colab, the result you see might be similar to this:" @@ -5700,8 +5183,8 @@ msgstr "在 Google Colab 中,您看到的结果可能与此类似:" #: ../../source/how-to-monitor-simulation.rst:155 msgid "" -"However, you can overwrite the defaults. When starting a simulation, do the " -"following (you don't need to overwrite all of them):" +"However, you can overwrite the defaults. When starting a simulation, do " +"the following (you don't need to overwrite all of them):" msgstr "不过,您可以覆盖默认值。开始模拟时,请执行以下操作(不必全部覆盖):" #: ../../source/how-to-monitor-simulation.rst:175 @@ -5710,26 +5193,23 @@ msgstr "我们还可以为单个客户指定资源。" #: ../../source/how-to-monitor-simulation.rst:205 msgid "" -"Now comes the crucial part. Ray will start a new client only when it has all " -"the required resources (such that they run in parallel) when the resources " -"allow." -msgstr "" -"现在到了关键部分。只有在资源允许的情况下,Ray 才会在拥有所有所需资源(如并行" -"运行)时启动新客户端。" +"Now comes the crucial part. Ray will start a new client only when it has " +"all the required resources (such that they run in parallel) when the " +"resources allow." +msgstr "现在到了关键部分。只有在资源允许的情况下,Ray 才会在拥有所有所需资源(如并行运行)时启动新客户端。" #: ../../source/how-to-monitor-simulation.rst:207 msgid "" -"In the example above, only one client will be run, so your clients won't run " -"concurrently. Setting :code:`client_num_gpus = 0.5` would allow running two " -"clients and therefore enable them to run concurrently. Be careful not to " -"require more resources than available. If you specified :code:" -"`client_num_gpus = 2`, the simulation wouldn't start (even if you had 2 GPUs " -"but decided to set 1 in :code:`ray_init_args`)." +"In the example above, only one client will be run, so your clients won't " +"run concurrently. Setting :code:`client_num_gpus = 0.5` would allow " +"running two clients and therefore enable them to run concurrently. Be " +"careful not to require more resources than available. If you specified " +":code:`client_num_gpus = 2`, the simulation wouldn't start (even if you " +"had 2 GPUs but decided to set 1 in :code:`ray_init_args`)." msgstr "" -"在上面的示例中,将只运行一个客户端,因此您的客户端不会并发运行。设置 :code:" -"`client_num_gpus = 0.5` 将允许运行两个客户端,从而使它们能够并发运行。请注" -"意,所需的资源不要超过可用资源。如果您指定 :code:`client_num_gpus = 2`,模拟" -"将无法启动(即使您有 2 个 GPU,但决定在 :code:`ray_init_args` 中设置为 1)。" +"在上面的示例中,将只运行一个客户端,因此您的客户端不会并发运行。设置 :code:`client_num_gpus = 0.5` " +"将允许运行两个客户端,从而使它们能够并发运行。请注意,所需的资源不要超过可用资源。如果您指定 :code:`client_num_gpus = " +"2`,模拟将无法启动(即使您有 2 个 GPU,但决定在 :code:`ray_init_args` 中设置为 1)。" #: ../../source/how-to-monitor-simulation.rst:212 ../../source/ref-faq.rst:2 msgid "FAQ" @@ -5741,25 +5221,22 @@ msgstr "问:我没有看到任何指标记录。" #: ../../source/how-to-monitor-simulation.rst:216 msgid "" -"A: The timeframe might not be properly set. The setting is in the top right " -"corner (\"Last 30 minutes\" by default). Please change the timeframe to " -"reflect the period when the simulation was running." -msgstr "" -"答:时间范围可能没有正确设置。设置在右上角(默认为 \"最后 30 分钟\")。请更改" -"时间框架,以反映模拟运行的时间段。" +"A: The timeframe might not be properly set. 
The setting is in the top " +"right corner (\"Last 30 minutes\" by default). Please change the " +"timeframe to reflect the period when the simulation was running." +msgstr "答:时间范围可能没有正确设置。设置在右上角(默认为 \"最后 30 分钟\")。请更改时间框架,以反映模拟运行的时间段。" #: ../../source/how-to-monitor-simulation.rst:218 msgid "" -"Q: I see “Grafana server not detected. Please make sure the Grafana server " -"is running and refresh this page” after going to the Metrics tab in Ray " -"Dashboard." -msgstr "" -"问:我看到 \"未检测到 Grafana 服务器。请确保 Grafana 服务器正在运行并刷新此页" -"面\"。" +"Q: I see “Grafana server not detected. Please make sure the Grafana " +"server is running and refresh this page” after going to the Metrics tab " +"in Ray Dashboard." +msgstr "问:我看到 \"未检测到 Grafana 服务器。请确保 Grafana 服务器正在运行并刷新此页面\"。" #: ../../source/how-to-monitor-simulation.rst:220 msgid "" -"A: You probably don't have Grafana running. Please check the running services" +"A: You probably don't have Grafana running. Please check the running " +"services" msgstr "答:您可能没有运行 Grafana。请检查正在运行的服务" #: ../../source/how-to-monitor-simulation.rst:226 @@ -5770,8 +5247,8 @@ msgstr "问:在访问 ``_时,我看到 \"无法访问 #: ../../source/how-to-monitor-simulation.rst:228 msgid "" -"A: Either the simulation has already finished, or you still need to start " -"Prometheus." +"A: Either the simulation has already finished, or you still need to start" +" Prometheus." msgstr "答:要么模拟已经完成,要么您还需要启动Prometheus。" #: ../../source/how-to-monitor-simulation.rst:232 @@ -5783,15 +5260,14 @@ msgstr "资源" msgid "" "Ray Dashboard: ``_" -msgstr "" -"Ray 仪表盘: ``_" +msgstr "Ray 仪表盘: ``_" #: ../../source/how-to-monitor-simulation.rst:236 #, fuzzy msgid "Ray Metrics: ``_" msgstr "" -"Ray 指标: ``_" +"Ray 指标: ``_" #: ../../source/how-to-run-flower-using-docker.rst:2 #, fuzzy @@ -5801,9 +5277,9 @@ msgstr "使用 Docker 运行 Flower" #: ../../source/how-to-run-flower-using-docker.rst:4 #, fuzzy msgid "" -"The simplest way to get started with Flower is by using the pre-made Docker " -"images, which you can find on `Docker Hub `_." +"The simplest way to get started with Flower is by using the pre-made " +"Docker images, which you can find on `Docker Hub " +"`__." msgstr "" "开始使用 Flower 的最简单方法是使用预制的 Docker 镜像,您可以在 `Docker Hub " "`_ 上找到这些镜像。" @@ -5816,221 +5292,540 @@ msgstr "开始之前,请确保 Docker 守护进程正在运行:" #: ../../source/how-to-run-flower-using-docker.rst:14 #, fuzzy msgid "" -"If you do not see the version of Docker but instead get an error saying that " -"the command was not found, you will need to install Docker first. You can " -"find installation instruction `here `_." +"If you do not see the version of Docker but instead get an error saying " +"that the command was not found, you will need to install Docker first. " +"You can find installation instruction `here `_." msgstr "" -"如果没有看到 Docker 的版本,而是出现找不到命令的错误,则需要先安装 Docker。你" -"可以在 `_ 找到安装说明。" +"如果没有看到 Docker 的版本,而是出现找不到命令的错误,则需要先安装 Docker。你可以在 " +"`_ 找到安装说明。" #: ../../source/how-to-run-flower-using-docker.rst:20 #, fuzzy msgid "" -"On Linux, Docker commands require ``sudo`` privilege. If you want to avoid " -"using ``sudo``, you can follow the `Post-installation steps `_ on the official Docker " -"website." +"On Linux, Docker commands require ``sudo`` privilege. If you want to " +"avoid using ``sudo``, you can follow the `Post-installation steps " +"`_ on the " +"official Docker website." 
+msgstr "" +"在 Linux 上,Docker 命令需要 ``sudo`` 权限。如果你想避免使用 ``sudo``,可以按照 Docker 官方网站上的 " +"`安装后步骤 `_进行操作。" + +#: ../../source/how-to-run-flower-using-docker.rst:26 +msgid "" +"To ensure optimal performance and compatibility, the SuperLink, SuperNode" +" and ServerApp image must have the same version when running together. " +"This guarantees seamless integration and avoids potential conflicts or " +"issues that may arise from using different versions." msgstr "" -"在 Linux 上,Docker 命令需要 ``sudo`` 权限。如果你想避免使用 ``sudo``,可以按" -"照 Docker 官方网站上的 `安装后步骤 `_进行操作。" -#: ../../source/how-to-run-flower-using-docker.rst:25 +#: ../../source/how-to-run-flower-using-docker.rst:31 #, fuzzy -msgid "Flower server" -msgstr "Flower 服务器" +msgid "Flower SuperLink" +msgstr "flower-superlink" -#: ../../source/how-to-run-flower-using-docker.rst:28 +#: ../../source/how-to-run-flower-using-docker.rst:34 #, fuzzy msgid "Quickstart" msgstr "快速入门 JAX" -#: ../../source/how-to-run-flower-using-docker.rst:30 +#: ../../source/how-to-run-flower-using-docker.rst:36 #, fuzzy msgid "If you're looking to try out Flower, you can use the following command:" msgstr "如果您想试用 Flower,可以使用以下命令:" -#: ../../source/how-to-run-flower-using-docker.rst:37 +#: ../../source/how-to-run-flower-using-docker.rst:42 #, fuzzy msgid "" -"The command will pull the Docker image with the tag ``1.7.0-py3.11-" -"ubuntu22.04`` from Docker Hub. The tag contains the information which " -"Flower, Python and Ubuntu is used. In this case, it uses Flower 1.7.0, " -"Python 3.11 and Ubuntu 22.04. The ``--rm`` flag tells Docker to remove the " -"container after it exits." +"The command pulls the Docker image with the tag ``1.8.0`` from Docker " +"Hub. The tag specifies the Flower version. In this case, Flower 1.8.0. " +"The ``--rm`` flag tells Docker to remove the container after it exits." msgstr "" -"该命令将从 Docker Hub 提取标签为``1.7.0-py3.11-ubuntu22.04``的 Docker 镜像。" -"标签包含使用 Flower、Python 和 Ubuntu 的信息。在本例中,它使用了 Flower " -"1.7.0、Python 3.11 和 Ubuntu 22.04。rm \"标记告诉 Docker 在退出后移除容器。" +"该命令将从 Docker Hub 提取标签为``1.7.0-py3.11-ubuntu22.04``的 Docker 镜像。标签包含使用 " +"Flower、Python 和 Ubuntu 的信息。在本例中,它使用了 Flower 1.7.0、Python 3.11 和 Ubuntu " +"22.04。rm \"标记告诉 Docker 在退出后移除容器。" -#: ../../source/how-to-run-flower-using-docker.rst:44 +#: ../../source/how-to-run-flower-using-docker.rst:48 #, fuzzy msgid "" -"By default, the Flower server keeps state in-memory. When using the Docker " -"flag ``--rm``, the state is not persisted between container starts. We will " -"show below how to save the state in a file on your host system." +"By default, the Flower SuperLink keeps state in-memory. When using the " +"Docker flag ``--rm``, the state is not persisted between container " +"starts. We will show below how to save the state in a file on your host " +"system." msgstr "" -"默认情况下,Flower 服务器会将状态保存在内存中。使用 Docker 标志 ``--rm`` 时," -"状态不会在容器启动之间持久化。下面我们将展示如何将状态保存到主机系统上的文件" -"中。" +"默认情况下,Flower 服务器会将状态保存在内存中。使用 Docker 标志 ``--rm`` " +"时,状态不会在容器启动之间持久化。下面我们将展示如何将状态保存到主机系统上的文件中。" -#: ../../source/how-to-run-flower-using-docker.rst:48 +#: ../../source/how-to-run-flower-using-docker.rst:52 #, fuzzy msgid "" -"The ``-p :`` flag tells Docker to map the ports ``9091``/" -"``9092`` of the host to ``9091``/``9092`` of the container, allowing you to " -"access the Driver API on ``http://localhost:9091`` and the Fleet API on " -"``http://localhost:9092``. Lastly, any flag that comes after the tag is " -"passed to the Flower server. Here, we are passing the flag ``--insecure``." 
+"The ``-p :`` flag tells Docker to map the ports " +"``9091``/``9092`` of the host to ``9091``/``9092`` of the container, " +"allowing you to access the Driver API on ``http://localhost:9091`` and " +"the Fleet API on ``http://localhost:9092``. Lastly, any flag that comes " +"after the tag is passed to the Flower SuperLink. Here, we are passing the" +" flag ``--insecure``." msgstr "" -"``-p :`` 标记会告诉 Docker 将主机的端口 ``9091``/``9092`` 映" -"射到容器的端口 ``9091``/`9092``,这样你就可以在 ``http://localhost:9091`` 上" -"访问 Driver API,在 ``http://localhost:9092`` 上访问 Fleet API。最后,标签后" -"面的任何标志都会传递给 Flower 服务器。在这里,我们传递的标志是 ``--" -"insecure`` 。" +"``-p :`` 标记会告诉 Docker 将主机的端口 ``9091``/``9092`` 映射到容器的端口 " +"``9091``/`9092``,这样你就可以在 ``http://localhost:9091`` 上访问 Driver API,在 " +"``http://localhost:9092`` 上访问 Fleet API。最后,标签后面的任何标志都会传递给 Flower " +"服务器。在这里,我们传递的标志是 ``--insecure`` 。" -#: ../../source/how-to-run-flower-using-docker.rst:55 +#: ../../source/how-to-run-flower-using-docker.rst:59 +#: ../../source/how-to-run-flower-using-docker.rst:238 +#: ../../source/how-to-run-flower-using-docker.rst:354 #, fuzzy msgid "" "The ``--insecure`` flag enables insecure communication (using HTTP, not " -"HTTPS) and should only be used for testing purposes. We strongly recommend " -"enabling `SSL `_ when deploying to a " -"production environment." +"HTTPS) and should only be used for testing purposes. We strongly " +"recommend enabling `SSL `__ when " +"deploying to a production environment." msgstr "" -"不安全 \"标志启用不安全通信(使用 HTTP,而非 HTTPS),只能用于测试目的。我们" -"强烈建议在部署到生产环境时启用 `SSL `_。" +"不安全 \"标志启用不安全通信(使用 HTTP,而非 HTTPS),只能用于测试目的。我们强烈建议在部署到生产环境时启用 `SSL " +"`_。" -#: ../../source/how-to-run-flower-using-docker.rst:60 +#: ../../source/how-to-run-flower-using-docker.rst:64 #, fuzzy msgid "" -"You can use ``--help`` to view all available flags that the server supports:" +"You can use ``--help`` to view all available flags that the SuperLink " +"supports:" msgstr "您可以使用 ``--help`` 查看服务器支持的所有可用标记:" -#: ../../source/how-to-run-flower-using-docker.rst:67 +#: ../../source/how-to-run-flower-using-docker.rst:71 #, fuzzy msgid "Mounting a volume to store the state on the host system" msgstr "在主机系统上挂载卷以存储状态" -#: ../../source/how-to-run-flower-using-docker.rst:69 +#: ../../source/how-to-run-flower-using-docker.rst:73 #, fuzzy msgid "" -"If you want to persist the state of the server on your host system, all you " -"need to do is specify a path where you want to save the file on your host " -"system and a name for the database file. In the example below, we tell " -"Docker via the flag ``-v`` to mount the user's home directory (``~/`` on " -"your host) into the ``/app/`` directory of the container. Furthermore, we " -"use the flag ``--database`` to specify the name of the database file." +"If you want to persist the state of the SuperLink on your host system, " +"all you need to do is specify a path where you want to save the file on " +"your host system and a name for the database file. In the example below, " +"we tell Docker via the flag ``--volume`` to mount the user's home " +"directory (``~/`` on your host) into the ``/app/`` directory of the " +"container. Furthermore, we use the flag ``--database`` to specify the " +"name of the database file." 
msgstr "" -"如果想在主机系统上持久保存服务器的状态,只需在主机系统上指定保存文件的路径和" -"数据库文件的名称即可。在下面的示例中,我们通过标志 ``-v`` 告诉 Docker 将用户" -"的主目录(主机上的 ``~/``)挂载到容器的 ``/app/`` 目录中。此外,我们使用标志 " -"``--database`` 来指定数据库文件的名称。" +"如果想在主机系统上持久保存服务器的状态,只需在主机系统上指定保存文件的路径和数据库文件的名称即可。在下面的示例中,我们通过标志 ``-v`` 告诉" +" Docker 将用户的主目录(主机上的 ``~/``)挂载到容器的 ``/app/`` 目录中。此外,我们使用标志 ``--database``" +" 来指定数据库文件的名称。" -#: ../../source/how-to-run-flower-using-docker.rst:82 +#: ../../source/how-to-run-flower-using-docker.rst:86 #, fuzzy msgid "" -"As soon as the server starts, the file ``state.db`` is created in the user's " -"home directory on your host system. If the file already exists, the server " -"tries to restore the state from the file. To start the server with an empty " -"database, simply remove the ``state.db`` file." +"As soon as the SuperLink starts, the file ``state.db`` is created in the " +"user's home directory on your host system. If the file already exists, " +"the SuperLink tries to restore the state from the file. To start the " +"SuperLink with an empty database, simply remove the ``state.db`` file." msgstr "" -"服务器一启动,就会在主机系统的用户主目录下创建文件 ``state.db``。如果该文件已" -"经存在,服务器会尝试从该文件恢复状态。要以空数据库启动服务器,只需删除 " -"``state.db`` 文件即可。" +"服务器一启动,就会在主机系统的用户主目录下创建文件 " +"``state.db``。如果该文件已经存在,服务器会尝试从该文件恢复状态。要以空数据库启动服务器,只需删除 ``state.db`` 文件即可。" -#: ../../source/how-to-run-flower-using-docker.rst:87 +#: ../../source/how-to-run-flower-using-docker.rst:91 +#: ../../source/how-to-run-flower-using-docker.rst:260 +#: ../../source/how-to-run-flower-using-docker.rst:375 #, fuzzy msgid "Enabling SSL for secure connections" msgstr "启用 SSL 连接" -#: ../../source/how-to-run-flower-using-docker.rst:89 +#: ../../source/how-to-run-flower-using-docker.rst:93 #, fuzzy msgid "" -"To enable SSL, you will need a CA certificate, a server certificate and a " -"server private key." +"To enable SSL, you will need a PEM-encoded root certificate, a PEM-" +"encoded private key and a PEM-encoded certificate chain." msgstr "要启用 SSL,需要 CA 证书、服务器证书和服务器私钥。" -#: ../../source/how-to-run-flower-using-docker.rst:92 +#: ../../source/how-to-run-flower-using-docker.rst:97 +#, fuzzy +msgid "" +"For testing purposes, you can generate your own self-signed certificates." +" The `Enable SSL connections `__ page contains a section that" +" will guide you through the process." +msgstr "" +"出于测试目的,你可以生成自己的自签名证书。启用 SSL 连接 `_ 页面中有一个部分将指导你完成这一过程。" + +#: ../../source/how-to-run-flower-using-docker.rst:101 +#, fuzzy +msgid "" +"Assuming all files we need are in the local ``certificates`` directory, " +"we can use the flag ``--volume`` to mount the local directory into the " +"``/app/`` directory of the container. This allows the SuperLink to access" +" the files within the container. Finally, we pass the names of the " +"certificates to the SuperLink with the ``--certificates`` flag." +msgstr "" +"假设我们需要的所有文件都在本地的 ``certificates`` 目录中,我们可以使用标记 ``-v`` 将本地目录挂载到容器的 " +"``/app/`` 目录中。这样,服务器就可以访问容器内的文件。最后,我们使用 ``--certificates`` 标志将证书名称传递给服务器。" + +#: ../../source/how-to-run-flower-using-docker.rst:113 +#, fuzzy +msgid "Flower SuperNode" +msgstr "Flower 服务器" + +#: ../../source/how-to-run-flower-using-docker.rst:115 +msgid "" +"The SuperNode Docker image comes with a pre-installed version of Flower " +"and serves as a base for building your own SuperNode image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:120 +msgid "" +"The SuperNode Docker image currently works only with the 1.9.0-nightly " +"release. A stable version will be available when Flower 1.9.0 (stable) " +"gets released (ETA: May). 
A SuperNode nightly image must be paired with " +"the corresponding SuperLink and ServerApp nightly images released on the " +"same day. To ensure the versions are in sync, using the concrete tag, " +"e.g., ``1.9.0.dev20240501`` instead of ``nightly`` is recommended." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:126 +msgid "" +"We will use the ``quickstart-pytorch`` example, which you can find in the" +" Flower repository, to illustrate how you can dockerize your ClientApp." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:134 +#, fuzzy +msgid "" +"Before we can start, we need to meet a few prerequisites in our local " +"development environment. You can skip the first part if you want to run " +"your ClientApp instead of the ``quickstart-pytorch`` example." +msgstr "在开始之前,我们需要在本地开发环境中满足一些先决条件。" + +#: ../../source/how-to-run-flower-using-docker.rst:138 +#, fuzzy +msgid "Clone the Flower repository." +msgstr "**叉花仓库**" + +#: ../../source/how-to-run-flower-using-docker.rst:152 +msgid "Creating a SuperNode Dockerfile" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:154 +#: ../../source/how-to-run-flower-using-docker.rst:289 +msgid "Let's assume the following project layout:" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:163 +msgid "" +"First, we need to create a ``requirements.txt`` file in the directory " +"where the ``ClientApp`` code is located. In the file, we list all the " +"dependencies that the ClientApp requires." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:175 +msgid "" +"Note that `flwr `__ is already installed " +"in the ``flwr/supernode`` base image, so you only need to include other " +"package dependencies in your ``requirements.txt``, such as ``torch``, " +"``tensorflow``, etc." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:179 +msgid "" +"Next, we create a Dockerfile. If you use the ``quickstart-pytorch`` " +"example, create a new file called ``Dockerfile.supernode`` in ``examples" +"/quickstart-pytorch``." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:182 +msgid "" +"The ``Dockerfile.supernode`` contains the instructions that assemble the " +"SuperNode image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:196 +msgid "" +"In the first two lines, we instruct Docker to use the SuperNode image " +"tagged ``nightly`` as a base image and set our working directory to " +"``/app``. The following instructions will now be executed in the ``/app``" +" directory. Next, we install the ClientApp dependencies by copying the " +"``requirements.txt`` file into the image and run ``pip install``. In the " +"last two lines, we copy the ``client.py`` module into the image and set " +"the entry point to ``flower-client-app`` with the argument " +"``client:app``. The argument is the object reference of the ClientApp " +"(``:``) that will be run inside the ClientApp." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:205 +#, fuzzy +msgid "Building the SuperNode Docker image" +msgstr "启动服务器" + +#: ../../source/how-to-run-flower-using-docker.rst:207 +msgid "" +"Next, we build the SuperNode Docker image by running the following " +"command in the directory where Dockerfile and ClientApp code are located." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:214 +msgid "" +"We gave the image the name ``flwr_supernode``, and the tag ``0.0.1``. " +"Remember that the here chosen values only serve as an example. 
You can " +"change them to your needs." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:219 +#, fuzzy +msgid "Running the SuperNode Docker image" +msgstr "启动服务器" + +#: ../../source/how-to-run-flower-using-docker.rst:221 +msgid "Now that we have built the SuperNode image, we can finally run it." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:229 +#: ../../source/how-to-run-flower-using-docker.rst:345 +msgid "Let's break down each part of this command:" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:231 +#: ../../source/how-to-run-flower-using-docker.rst:347 +msgid "``docker run``: This is the command to run a new Docker container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:232 +#: ../../source/how-to-run-flower-using-docker.rst:348 +msgid "" +"``--rm``: This option specifies that the container should be " +"automatically removed when it stops." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:233 +msgid "``flwr_supernode:0.0.1``: The name the tag of the Docker image to use." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:234 +#: ../../source/how-to-run-flower-using-docker.rst:350 +msgid "``--insecure``: This option enables insecure communication." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst +msgid "" +"``--server 192.168.1.100:9092``: This option specifies the address of the" +" SuperLinks Fleet" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst +msgid "API to connect to. Remember to update it with your SuperLink IP." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:248 +msgid "" +"To test running Flower locally, you can create a `bridge network " +"`__, use the ``--network`` argument and pass the " +"name of the Docker network to run your SuperNodes." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:252 +msgid "" +"Any argument that comes after the tag is passed to the Flower SuperNode " +"binary. To see all available flags that the SuperNode supports, run:" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:262 +msgid "" +"To enable SSL, we will need to mount a PEM-encoded root certificate into " +"your SuperNode container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:264 +#, fuzzy +msgid "" +"Assuming the certificate already exists locally, we can use the flag " +"``--volume`` to mount the local certificate into the container's " +"``/app/`` directory. This allows the SuperNode to access the certificate " +"within the container. Use the ``--certificates`` flag when starting the " +"container." +msgstr "" +"假设我们需要的所有文件都在本地的 ``certificates`` 目录中,我们可以使用标记 ``-v`` 将本地目录挂载到容器的 " +"``/app/`` 目录中。这样,服务器就可以访问容器内的文件。最后,我们使用 ``--certificates`` 标志将证书名称传递给服务器。" + +#: ../../source/how-to-run-flower-using-docker.rst:275 +#, fuzzy +msgid "Flower ServerApp" +msgstr "Flower 服务器。" + +#: ../../source/how-to-run-flower-using-docker.rst:277 +msgid "" +"The procedure for building and running a ServerApp image is almost " +"identical to the SuperNode image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:279 +msgid "" +"Similar to the SuperNode image, the ServerApp Docker image comes with a " +"pre-installed version of Flower and serves as a base for building your " +"own ServerApp image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:282 +msgid "" +"We will use the same ``quickstart-pytorch`` example as we do in the " +"Flower SuperNode section. 
If you have not already done so, please follow " +"the `SuperNode Prerequisites`_ before proceeding." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:287 +msgid "Creating a ServerApp Dockerfile" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:298 +msgid "" +"First, we need to create a Dockerfile in the directory where the " +"``ServerApp`` code is located. If you use the ``quickstart-pytorch`` " +"example, create a new file called ``Dockerfile.serverapp`` in ``examples" +"/quickstart-pytorch``." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:302 +msgid "" +"The ``Dockerfile.serverapp`` contains the instructions that assemble the " +"ServerApp image." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:313 +msgid "" +"In the first two lines, we instruct Docker to use the ServerApp image " +"tagged ``1.8.0`` as a base image and set our working directory to " +"``/app``. The following instructions will now be executed in the ``/app``" +" directory. In the last two lines, we copy the ``server.py`` module into " +"the image and set the entry point to ``flower-server-app`` with the " +"argument ``server:app``. The argument is the object reference of the " +"ServerApp (``:``) that will be run inside the " +"ServerApp container." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:321 +#, fuzzy +msgid "Building the ServerApp Docker image" +msgstr "启动服务器" + +#: ../../source/how-to-run-flower-using-docker.rst:323 +msgid "" +"Next, we build the ServerApp Docker image by running the following " +"command in the directory where Dockerfile and ServerApp code are located." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:330 +msgid "" +"We gave the image the name ``flwr_serverapp``, and the tag ``0.0.1``. " +"Remember that the here chosen values only serve as an example. You can " +"change them to your needs." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:335 #, fuzzy +msgid "Running the ServerApp Docker image" +msgstr "启动服务器" + +#: ../../source/how-to-run-flower-using-docker.rst:337 +msgid "Now that we have built the ServerApp image, we can finally run it." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:349 +msgid "``flwr_serverapp:0.0.1``: The name the tag of the Docker image to use." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst +msgid "" +"``--server 192.168.1.100:9091``: This option specifies the address of the" +" SuperLinks Driver" +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:363 +msgid "" +"To test running Flower locally, you can create a `bridge network " +"`__, use the ``--network`` argument and pass the " +"name of the Docker network to run your ServerApps." +msgstr "" + +#: ../../source/how-to-run-flower-using-docker.rst:367 msgid "" -"For testing purposes, you can generate your own self-signed certificates. " -"The `Enable SSL connections `_ page contains a section that will guide " -"you through the process." +"Any argument that comes after the tag is passed to the Flower ServerApp " +"binary. 
To see all available flags that the ServerApp supports, run:" msgstr "" -"出于测试目的,你可以生成自己的自签名证书。启用 SSL 连接 `_ 页面中有一" -"个部分将指导你完成这一过程。" -#: ../../source/how-to-run-flower-using-docker.rst:96 +#: ../../source/how-to-run-flower-using-docker.rst:377 #, fuzzy msgid "" -"Assuming all files we need are in the local ``certificates`` directory, we " -"can use the flag ``-v`` to mount the local directory into the ``/app/`` " -"directory of the container. This allows the server to access the files " -"within the container. Finally, we pass the names of the certificates to the " -"server with the ``--certificates`` flag." +"To enable SSL, we will need to mount a PEM-encoded root certificate into " +"your ServerApp container." +msgstr "要启用 SSL,需要 CA 证书、服务器证书和服务器私钥。" + +#: ../../source/how-to-run-flower-using-docker.rst:379 +#, fuzzy +msgid "" +"Assuming the certificate already exists locally, we can use the flag " +"``--volume`` to mount the local certificate into the container's " +"``/app/`` directory. This allows the ServerApp to access the certificate " +"within the container. Use the ``--certificates`` flag when starting the " +"container." msgstr "" -"假设我们需要的所有文件都在本地的 ``certificates`` 目录中,我们可以使用标记 " -"``-v`` 将本地目录挂载到容器的 ``/app/`` 目录中。这样,服务器就可以访问容器内" -"的文件。最后,我们使用 ``--certificates`` 标志将证书名称传递给服务器。" +"假设我们需要的所有文件都在本地的 ``certificates`` 目录中,我们可以使用标记 ``-v`` 将本地目录挂载到容器的 " +"``/app/`` 目录中。这样,服务器就可以访问容器内的文件。最后,我们使用 ``--certificates`` 标志将证书名称传递给服务器。" -#: ../../source/how-to-run-flower-using-docker.rst:108 +#: ../../source/how-to-run-flower-using-docker.rst:390 #, fuzzy -msgid "Using a different Flower or Python version" +msgid "Advanced Docker options" +msgstr "高级安装选项" + +#: ../../source/how-to-run-flower-using-docker.rst:393 +#, fuzzy +msgid "Using a different Flower version" msgstr "使用不同的 Flower 或 Python 版本" -#: ../../source/how-to-run-flower-using-docker.rst:110 +#: ../../source/how-to-run-flower-using-docker.rst:395 #, fuzzy msgid "" -"If you want to use a different version of Flower or Python, you can do so by " -"changing the tag. All versions we provide are available on `Docker Hub " -"`_." +"If you want to use a different version of Flower, for example Flower " +"nightly, you can do so by changing the tag. All available versions are on" +" `Docker Hub `__." msgstr "" -"如果您想使用不同版本的 Flower 或 Python,可以通过更改标签来实现。我们提供的所" -"有版本都可以在 `Docker Hub `_ 上找" -"到。" +"如果您想使用不同版本的 Flower 或 Python,可以通过更改标签来实现。我们提供的所有版本都可以在 `Docker Hub " +"`_ 上找到。" -#: ../../source/how-to-run-flower-using-docker.rst:114 +#: ../../source/how-to-run-flower-using-docker.rst:400 #, fuzzy msgid "Pinning a Docker image to a specific version" msgstr "将 Docker 映像固定到特定版本" -#: ../../source/how-to-run-flower-using-docker.rst:116 +#: ../../source/how-to-run-flower-using-docker.rst:402 #, fuzzy msgid "" "It may happen that we update the images behind the tags. Such updates " "usually include security updates of system dependencies that should not " -"change the functionality of Flower. However, if you want to ensure that you " -"always use the same image, you can specify the hash of the image instead of " -"the tag." +"change the functionality of Flower. However, if you want to ensure that " +"you always use the same image, you can specify the hash of the image " +"instead of the tag." 
msgstr "" -"我们可能会更新标签后面的图像。此类更新通常包括系统依赖项的安全更新,不会改变 " -"Flower 的功能。不过,如果您想确保始终使用同一张图片,可以指定图片的哈希值而不" -"是标签。" +"我们可能会更新标签后面的图像。此类更新通常包括系统依赖项的安全更新,不会改变 Flower " +"的功能。不过,如果您想确保始终使用同一张图片,可以指定图片的哈希值而不是标签。" -#: ../../source/how-to-run-flower-using-docker.rst:121 +#: ../../source/how-to-run-flower-using-docker.rst:407 #, fuzzy msgid "" "The following command returns the current image hash referenced by the " -"``server:1.7.0-py3.11-ubuntu22.04`` tag:" -msgstr "" -"下面的命令将返回由 ``server:1.7.0-py3.11-ubuntu22.04`` 标记引用的当前图像哈希" -"值:" +"``superlink:1.8.0`` tag:" +msgstr "下面的命令将返回由 ``server:1.7.0-py3.11-ubuntu22.04`` 标记引用的当前图像哈希值:" -#: ../../source/how-to-run-flower-using-docker.rst:128 +#: ../../source/how-to-run-flower-using-docker.rst:414 #, fuzzy -msgid "Next, we can pin the hash when running a new server container:" +msgid "Next, we can pin the hash when running a new SuperLink container:" msgstr "接下来,我们可以在运行新服务器容器时将哈希值固定下来:" -#: ../../source/how-to-run-flower-using-docker.rst:137 +#: ../../source/how-to-run-flower-using-docker.rst:423 #, fuzzy msgid "Setting environment variables" msgstr "设置编码环境" -#: ../../source/how-to-run-flower-using-docker.rst:139 +#: ../../source/how-to-run-flower-using-docker.rst:425 #, fuzzy msgid "" "To set a variable inside a Docker container, you can use the ``-e " @@ -6044,85 +5839,81 @@ msgstr "运行模拟" #: ../../source/how-to-run-simulations.rst:8 msgid "" "Simulating Federated Learning workloads is useful for a multitude of use-" -"cases: you might want to run your workload on a large cohort of clients but " -"without having to source, configure and mange a large number of physical " -"devices; you might want to run your FL workloads as fast as possible on the " -"compute systems you have access to without having to go through a complex " -"setup process; you might want to validate your algorithm on different " -"scenarios at varying levels of data and system heterogeneity, client " -"availability, privacy budgets, etc. These are among some of the use-cases " -"where simulating FL workloads makes sense. Flower can accommodate these " -"scenarios by means of its `VirtualClientEngine `_ or VCE." -msgstr "" -"模拟联邦学习工作负载可用于多种案例:您可能希望在大量客户端上运行您的工作负" -"载,但无需采购、配置和管理大量物理设备;您可能希望在您可以访问的计算系统上尽" -"可能快地运行您的 FL 工作负载,而无需经过复杂的设置过程;您可能希望在不同数据" -"和系统异构性、客户端可用性、隐私预算等不同水平的场景中验证您的算法。这些都是" -"模拟 FL 工作负载的一些案例。Flower 可以通过其 \"虚拟客户端引擎" -"\"(VirtualClientEngine)_或 VCE 来匹配这些情况。" +"cases: you might want to run your workload on a large cohort of clients " +"but without having to source, configure and mange a large number of " +"physical devices; you might want to run your FL workloads as fast as " +"possible on the compute systems you have access to without having to go " +"through a complex setup process; you might want to validate your " +"algorithm on different scenarios at varying levels of data and system " +"heterogeneity, client availability, privacy budgets, etc. These are among" +" some of the use-cases where simulating FL workloads makes sense. Flower " +"can accommodate these scenarios by means of its `VirtualClientEngine " +"`_ or " +"VCE." +msgstr "" +"模拟联邦学习工作负载可用于多种案例:您可能希望在大量客户端上运行您的工作负载,但无需采购、配置和管理大量物理设备;您可能希望在您可以访问的计算系统上尽可能快地运行您的" +" FL 工作负载,而无需经过复杂的设置过程;您可能希望在不同数据和系统异构性、客户端可用性、隐私预算等不同水平的场景中验证您的算法。这些都是模拟 " +"FL 工作负载的一些案例。Flower 可以通过其 \"虚拟客户端引擎\"(VirtualClientEngine)_或 VCE 来匹配这些情况。" #: ../../source/how-to-run-simulations.rst:10 msgid "" -"The :code:`VirtualClientEngine` schedules, launches and manages `virtual` " -"clients. These clients are identical to `non-virtual` clients (i.e. 
the ones " -"you launch via the command `flwr.client.start_client `_) in the sense that they can be configure by creating a " -"class inheriting, for example, from `flwr.client.NumPyClient `_ and therefore behave in an identical way. In " -"addition to that, clients managed by the :code:`VirtualClientEngine` are:" +"The :code:`VirtualClientEngine` schedules, launches and manages `virtual`" +" clients. These clients are identical to `non-virtual` clients (i.e. the " +"ones you launch via the command `flwr.client.start_client `_) in the sense that they can be configure by " +"creating a class inheriting, for example, from `flwr.client.NumPyClient " +"`_ and therefore behave in an " +"identical way. In addition to that, clients managed by the " +":code:`VirtualClientEngine` are:" msgstr "" -":code:`VirtualClientEngine`用来规划,启动和管理`虚拟`客户端。这些客户端跟`非" -"虚拟`客户端是一样的(即为您通过`flwr.client.start_client `_启动的客户端),因为它们可以通过创建一个继承自 `flwr." -"client.NumPyClient `_ 的类进行配" -"置,因此其行为方式相同。另外,由 `VirtualClientEngine` 管理的客户端有:" +":code:`VirtualClientEngine`用来规划,启动和管理`虚拟`客户端。这些客户端跟`非虚拟`客户端是一样的(即为您通过`flwr.client.start_client" +" `_启动的客户端),因为它们可以通过创建一个继承自 " +"`flwr.client.NumPyClient `_ " +"的类进行配置,因此其行为方式相同。另外,由 `VirtualClientEngine` 管理的客户端有:" #: ../../source/how-to-run-simulations.rst:12 msgid "" -"resource-aware: this means that each client gets assigned a portion of the " -"compute and memory on your system. You as a user can control this at the " -"beginning of the simulation and allows you to control the degree of " +"resource-aware: this means that each client gets assigned a portion of " +"the compute and memory on your system. You as a user can control this at " +"the beginning of the simulation and allows you to control the degree of " "parallelism of your Flower FL simulation. The fewer the resources per " "client, the more clients can run concurrently on the same hardware." msgstr "" -"资源感知:这意味着每个客户端都会分配到系统中的一部分计算和内存。作为用户,您" -"可以在模拟开始时对其进行控制,从而控制 Flower FL 模拟的并行程度。每个客户端的" -"资源越少,在同一硬件上并发运行的客户端就越多。" +"资源感知:这意味着每个客户端都会分配到系统中的一部分计算和内存。作为用户,您可以在模拟开始时对其进行控制,从而控制 Flower FL " +"模拟的并行程度。每个客户端的资源越少,在同一硬件上并发运行的客户端就越多。" #: ../../source/how-to-run-simulations.rst:13 msgid "" -"self-managed: this means that you as a user do not need to launch clients " -"manually, instead this gets delegated to :code:`VirtualClientEngine`'s " +"self-managed: this means that you as a user do not need to launch clients" +" manually, instead this gets delegated to :code:`VirtualClientEngine`'s " "internals." -msgstr "" -"自管理:这意味着用户无需手动启动客户端,而是由 :code:`VirtualClientEngine` 负" -"责。" +msgstr "自管理:这意味着用户无需手动启动客户端,而是由 :code:`VirtualClientEngine` 负责。" #: ../../source/how-to-run-simulations.rst:14 msgid "" -"ephemeral: this means that a client is only materialized when it is required " -"in the FL process (e.g. to do `fit() `_). The object is destroyed afterwards, releasing the resources it was " -"assigned and allowing in this way other clients to participate." +"ephemeral: this means that a client is only materialized when it is " +"required in the FL process (e.g. to do `fit() `_). The object is destroyed afterwards," +" releasing the resources it was assigned and allowing in this way other " +"clients to participate." 
msgstr "" -"即时性:这意味着客户端只有在 FL 进程中需要它时才会被实体化(例如执行 `fit() " -"`_ )。之后该对象将被销毁,释放分配" -"给它的资源,并允许其他客户端以这种方式参与。" +"即时性:这意味着客户端只有在 FL 进程中需要它时才会被实体化(例如执行 `fit() `_ " +")。之后该对象将被销毁,释放分配给它的资源,并允许其他客户端以这种方式参与。" #: ../../source/how-to-run-simulations.rst:16 msgid "" "The :code:`VirtualClientEngine` implements `virtual` clients using `Ray " "`_, an open-source framework for scalable Python " -"workloads. In particular, Flower's :code:`VirtualClientEngine` makes use of " -"`Actors `_ to spawn " -"`virtual` clients and run their workload." +"workloads. In particular, Flower's :code:`VirtualClientEngine` makes use " +"of `Actors `_ to " +"spawn `virtual` clients and run their workload." msgstr "" -":code:`VirtualClientEngine`使用`Ray `_来实现`虚拟`客户" -"端,这是一个用于可扩展 Python 工作负载的开源框架。特别地,Flower 的 :code:" -"`VirtualClientEngine` 使用 `Actors `_ 来生成 `virtual` 客户端并运行它们的工作负载。" +":code:`VirtualClientEngine`使用`Ray " +"`_来实现`虚拟`客户端,这是一个用于可扩展 Python 工作负载的开源框架。特别地,Flower 的" +" :code:`VirtualClientEngine` 使用 `Actors `_ 来生成 `virtual` 客户端并运行它们的工作负载。" #: ../../source/how-to-run-simulations.rst:20 msgid "Launch your Flower simulation" @@ -6130,16 +5921,16 @@ msgstr "启动 Flower 模拟" #: ../../source/how-to-run-simulations.rst:22 msgid "" -"Running Flower simulations still require you to define your client class, a " -"strategy, and utility functions to download and load (and potentially " -"partition) your dataset. With that out of the way, launching your simulation " -"is done with `start_simulation `_ and a minimal example looks as follows:" +"Running Flower simulations still require you to define your client class," +" a strategy, and utility functions to download and load (and potentially " +"partition) your dataset. With that out of the way, launching your " +"simulation is done with `start_simulation `_ and a minimal example looks" +" as follows:" msgstr "" -"运行 Flower 模拟器仍然需要定义客户端类、策略以及下载和加载(可能还需要分割)" -"数据集的实用程序。在完成这些工作后,就可以使用 \"start_simulation `_\" 来启动模拟了,一个最简单的示" -"例如下:" +"运行 Flower 模拟器仍然需要定义客户端类、策略以及下载和加载(可能还需要分割)数据集的实用程序。在完成这些工作后,就可以使用 " +"\"start_simulation `_\" 来启动模拟了,一个最简单的示例如下:" #: ../../source/how-to-run-simulations.rst:44 msgid "VirtualClientEngine resources" @@ -6147,24 +5938,23 @@ msgstr "虚拟客户端引擎资源" #: ../../source/how-to-run-simulations.rst:45 msgid "" -"By default the VCE has access to all system resources (i.e. all CPUs, all " -"GPUs, etc) since that is also the default behavior when starting Ray. " -"However, in some settings you might want to limit how many of your system " -"resources are used for simulation. You can do this via the :code:" -"`ray_init_args` input argument to :code:`start_simulation` which the VCE " -"internally passes to Ray's :code:`ray.init` command. For a complete list of " -"settings you can configure check the `ray.init `_ documentation. Do not set :" -"code:`ray_init_args` if you want the VCE to use all your system's CPUs and " -"GPUs." -msgstr "" -"默认情况下,VCE 可以访问所有系统资源(即所有 CPU、所有 GPU 等),因为这也是启" -"动 Ray 时的默认行为。不过,在某些设置中,您可能希望限制有多少系统资源用于模" -"拟。您可以通过 :code:`ray_init_args` 输入到 :code:`start_simulation` 的参数来" -"做到这一点,VCE 会在内部将该参数传递给 Ray 的 :code:`ray.init` 命令。有关您可" -"以配置的设置的完整列表,请查看 `ray.init `_ 文档。如果希望 VCE 使用系统中所有的 " -"CPU 和 GPU,请不要设置 :code:`ray_init_args`。" +"By default the VCE has access to all system resources (i.e. all CPUs, all" +" GPUs, etc) since that is also the default behavior when starting Ray. " +"However, in some settings you might want to limit how many of your system" +" resources are used for simulation. 
You can do this via the " +":code:`ray_init_args` input argument to :code:`start_simulation` which " +"the VCE internally passes to Ray's :code:`ray.init` command. For a " +"complete list of settings you can configure check the `ray.init " +"`_" +" documentation. Do not set :code:`ray_init_args` if you want the VCE to " +"use all your system's CPUs and GPUs." +msgstr "" +"默认情况下,VCE 可以访问所有系统资源(即所有 CPU、所有 GPU 等),因为这也是启动 Ray " +"时的默认行为。不过,在某些设置中,您可能希望限制有多少系统资源用于模拟。您可以通过 :code:`ray_init_args` 输入到 " +":code:`start_simulation` 的参数来做到这一点,VCE 会在内部将该参数传递给 Ray 的 :code:`ray.init`" +" 命令。有关您可以配置的设置的完整列表,请查看 `ray.init `_ 文档。如果希望 VCE 使用系统中所有的 CPU 和 " +"GPU,请不要设置 :code:`ray_init_args`。" #: ../../source/how-to-run-simulations.rst:62 msgid "Assigning client resources" @@ -6172,27 +5962,27 @@ msgstr "分配客户端资源" #: ../../source/how-to-run-simulations.rst:63 msgid "" -"By default the :code:`VirtualClientEngine` assigns a single CPU core (and " -"nothing else) to each virtual client. This means that if your system has 10 " -"cores, that many virtual clients can be concurrently running." +"By default the :code:`VirtualClientEngine` assigns a single CPU core (and" +" nothing else) to each virtual client. This means that if your system has" +" 10 cores, that many virtual clients can be concurrently running." msgstr "" -"默认情况下,:code:`VirtualClientEngine` 会为每个虚拟客户端分配一个 CPU 内核" -"(不分配其他任何内核)。这意味着,如果系统有 10 个内核,那么可以同时运行这么" -"多虚拟客户端。" +"默认情况下,:code:`VirtualClientEngine` 会为每个虚拟客户端分配一个 CPU " +"内核(不分配其他任何内核)。这意味着,如果系统有 10 个内核,那么可以同时运行这么多虚拟客户端。" #: ../../source/how-to-run-simulations.rst:65 msgid "" -"More often than not, you would probably like to adjust the resources your " -"clients get assigned based on the complexity (i.e. compute and memory " -"footprint) of your FL workload. You can do so when starting your simulation " -"by setting the argument `client_resources` to `start_simulation `_. Two keys are internally used " -"by Ray to schedule and spawn workloads (in our case Flower clients):" +"More often than not, you would probably like to adjust the resources your" +" clients get assigned based on the complexity (i.e. compute and memory " +"footprint) of your FL workload. You can do so when starting your " +"simulation by setting the argument `client_resources` to " +"`start_simulation `_." +" Two keys are internally used by Ray to schedule and spawn workloads (in " +"our case Flower clients):" msgstr "" -"通常情况下,您可能希望根据 FL 工作负载的复杂性(即计算和内存占用)来调整分配" -"给客户端的资源。您可以在启动模拟时将参数 `client_resources` 设置为 " -"`start_simulation `_ 。" -"Ray 内部使用两个键来调度和生成工作负载(在我们的例子中是 Flower 客户端):" +"通常情况下,您可能希望根据 FL 工作负载的复杂性(即计算和内存占用)来调整分配给客户端的资源。您可以在启动模拟时将参数 " +"`client_resources` 设置为 `start_simulation `_ 。Ray " +"内部使用两个键来调度和生成工作负载(在我们的例子中是 Flower 客户端):" #: ../../source/how-to-run-simulations.rst:67 msgid ":code:`num_cpus` indicates the number of CPU cores a client would get." @@ -6212,30 +6002,29 @@ msgstr "让我们来看几个例子:" msgid "" "While the :code:`client_resources` can be used to control the degree of " "concurrency in your FL simulation, this does not stop you from running " -"dozens, hundreds or even thousands of clients in the same round and having " -"orders of magnitude more `dormant` (i.e. not participating in a round) " -"clients. Let's say you want to have 100 clients per round but your system " -"can only accommodate 8 clients concurrently. The :code:`VirtualClientEngine` " -"will schedule 100 jobs to run (each simulating a client sampled by the " -"strategy) and then will execute them in a resource-aware manner in batches " -"of 8." 
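As a concrete illustration of the two keys above, the call below (reusing the hypothetical ``client_fn`` from the earlier sketch) assigns each virtual client two CPU cores and a quarter of a GPU, so a machine with 8 cores and a single GPU would run at most four clients concurrently::

    import flwr as fl

    fl.simulation.start_simulation(
        client_fn=client_fn,  # as defined in the previous sketch
        num_clients=100,      # size of the client pool; only a few run at once
        config=fl.server.ServerConfig(num_rounds=3),
        strategy=fl.server.strategy.FedAvg(),
        client_resources={"num_cpus": 2, "num_gpus": 0.25},
    )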
-msgstr "" -"虽然 :code:`client_resources` 可用来控制 FL 模拟的并发程度,但这并不能阻止您" -"在同一轮模拟中运行几十、几百甚至上千个客户端,并拥有数量级更多的 \"休眠\"(即" -"不参与一轮模拟)客户端。比方说,您希望每轮有 100 个客户端,但您的系统只能同时" -"容纳 8 个客户端。:code:`VirtualClientEngine` 将安排运行 100 个工作(每个工作" -"模拟策略采样的一个客户端),然后以资源感知的方式分批执行。" +"dozens, hundreds or even thousands of clients in the same round and " +"having orders of magnitude more `dormant` (i.e. not participating in a " +"round) clients. Let's say you want to have 100 clients per round but your" +" system can only accommodate 8 clients concurrently. The " +":code:`VirtualClientEngine` will schedule 100 jobs to run (each " +"simulating a client sampled by the strategy) and then will execute them " +"in a resource-aware manner in batches of 8." +msgstr "" +"虽然 :code:`client_resources` 可用来控制 FL " +"模拟的并发程度,但这并不能阻止您在同一轮模拟中运行几十、几百甚至上千个客户端,并拥有数量级更多的 " +"\"休眠\"(即不参与一轮模拟)客户端。比方说,您希望每轮有 100 个客户端,但您的系统只能同时容纳 8 " +"个客户端。:code:`VirtualClientEngine` 将安排运行 100 " +"个工作(每个工作模拟策略采样的一个客户端),然后以资源感知的方式分批执行。" #: ../../source/how-to-run-simulations.rst:91 msgid "" "To understand all the intricate details on how resources are used to " -"schedule FL clients and how to define custom resources, please take a look " -"at the `Ray documentation `_." +"schedule FL clients and how to define custom resources, please take a " +"look at the `Ray documentation `_." msgstr "" -"要了解资源如何用于调度 FL 客户端以及如何定义自定义资源的所有复杂细节,请查看 " -"`Ray 文档 `_。" +"要了解资源如何用于调度 FL 客户端以及如何定义自定义资源的所有复杂细节,请查看 `Ray 文档 " +"`_。" #: ../../source/how-to-run-simulations.rst:94 msgid "Simulation examples" @@ -6243,27 +6032,26 @@ msgstr "模拟示例" #: ../../source/how-to-run-simulations.rst:96 msgid "" -"A few ready-to-run complete examples for Flower simulation in Tensorflow/" -"Keras and PyTorch are provided in the `Flower repository `_. You can run them on Google Colab too:" +"A few ready-to-run complete examples for Flower simulation in " +"Tensorflow/Keras and PyTorch are provided in the `Flower repository " +"`_. You can run them on Google Colab too:" msgstr "" -"在 Tensorflow/Keras 和 PyTorch 中进行 Flower 模拟的几个可随时运行的完整示例已" -"在 `Flower 库 `_ 中提供。您也可以在 Google " -"Colab 上运行它们:" +"在 Tensorflow/Keras 和 PyTorch 中进行 Flower 模拟的几个可随时运行的完整示例已在 `Flower 库 " +"`_ 中提供。您也可以在 Google Colab 上运行它们:" #: ../../source/how-to-run-simulations.rst:98 msgid "" -"`Tensorflow/Keras Simulation `_: 100 clients collaboratively train a MLP " -"model on MNIST." +"`Tensorflow/Keras Simulation " +"`_: 100 clients collaboratively train a MLP model on MNIST." msgstr "" -"Tensorflow/Keras模拟 `_:100个客户端在MNIST上协作训练一个MLP模型。" +"Tensorflow/Keras模拟 `_:100个客户端在MNIST上协作训练一个MLP模型。" #: ../../source/how-to-run-simulations.rst:99 msgid "" -"`PyTorch Simulation `_: 100 clients collaboratively train a CNN model on " +"`PyTorch Simulation `_: 100 clients collaboratively train a CNN model on " "MNIST." msgstr "" "PyTorch 模拟 `)" -msgstr "" -"在所有节点中都有一份数据集副本(更多相关信息请参阅 :ref:`模拟注意事项" -"`)" +"Have a copy of your dataset in all nodes (more about this in " +":ref:`simulation considerations `)" +msgstr "在所有节点中都有一份数据集副本(更多相关信息请参阅 :ref:`模拟注意事项`)" #: ../../source/how-to-run-simulations.rst:111 msgid "" -"Pass :code:`ray_init_args={\"address\"=\"auto\"}` to `start_simulation `_ so the :code:" -"`VirtualClientEngine` attaches to a running Ray instance." +"Pass :code:`ray_init_args={\"address\"=\"auto\"}` to `start_simulation " +"`_ so the " +":code:`VirtualClientEngine` attaches to a running Ray instance." 
msgstr "" "将 :code:`ray_init_args={\"address\"=\"auto\"}`传递给 `start_simulation `_ ,这样 :code:" -"`VirtualClientEngine`就会连接到正在运行的 Ray 实例。" +"api-flwr.html#flwr.simulation.start_simulation>`_ ,这样 " +":code:`VirtualClientEngine`就会连接到正在运行的 Ray 实例。" #: ../../source/how-to-run-simulations.rst:112 msgid "" -"Start Ray on you head node: on the terminal type :code:`ray start --head`. " -"This command will print a few lines, one of which indicates how to attach " -"other nodes to the head node." +"Start Ray on you head node: on the terminal type :code:`ray start " +"--head`. This command will print a few lines, one of which indicates how " +"to attach other nodes to the head node." msgstr "" -"在头部节点上启动 Ray:在终端上输入 :code:`raystart--head`。该命令将打印几行输" -"出,其中一行说明如何将其他节点连接到头部节点。" +"在头部节点上启动 Ray:在终端上输入 :code:`raystart--" +"head`。该命令将打印几行输出,其中一行说明如何将其他节点连接到头部节点。" #: ../../source/how-to-run-simulations.rst:113 msgid "" -"Attach other nodes to the head node: copy the command shown after starting " -"the head and execute it on terminal of a new node: for example :code:`ray " -"start --address='192.168.1.132:6379'`" +"Attach other nodes to the head node: copy the command shown after " +"starting the head and execute it on terminal of a new node: for example " +":code:`ray start --address='192.168.1.132:6379'`" msgstr "" -"将其他节点附加到头部节点:复制启动头部后显示的命令,并在新节点的终端上执行:" -"例如 :code:`ray start --address='192.168.1.132:6379'`" +"将其他节点附加到头部节点:复制启动头部后显示的命令,并在新节点的终端上执行:例如 :code:`ray start " +"--address='192.168.1.132:6379'`" #: ../../source/how-to-run-simulations.rst:115 msgid "" "With all the above done, you can run your code from the head node as you " "would if the simulation was running on a single node." -msgstr "" -"完成上述所有操作后,您就可以在头部节点上运行代码了,就像在单个节点上运行模拟" -"一样。" +msgstr "完成上述所有操作后,您就可以在头部节点上运行代码了,就像在单个节点上运行模拟一样。" #: ../../source/how-to-run-simulations.rst:117 msgid "" -"Once your simulation is finished, if you'd like to dismantle your cluster " -"you simply need to run the command :code:`ray stop` in each node's terminal " -"(including the head node)." -msgstr "" -"模拟结束后,如果要拆除集群,只需在每个节点(包括头部节点)的终端运行 :code:" -"`ray stop` 命令即可。" +"Once your simulation is finished, if you'd like to dismantle your cluster" +" you simply need to run the command :code:`ray stop` in each node's " +"terminal (including the head node)." +msgstr "模拟结束后,如果要拆除集群,只需在每个节点(包括头部节点)的终端运行 :code:`ray stop` 命令即可。" #: ../../source/how-to-run-simulations.rst:120 msgid "Multi-node simulation good-to-know" @@ -6355,27 +6135,28 @@ msgstr "在此,我们列举了运行多节点 FL 模拟时的一些有趣功 #: ../../source/how-to-run-simulations.rst:124 msgid "" -"User :code:`ray status` to check all nodes connected to your head node as " -"well as the total resources available to the :code:`VirtualClientEngine`." +"User :code:`ray status` to check all nodes connected to your head node as" +" well as the total resources available to the " +":code:`VirtualClientEngine`." msgstr "" -"使用 :code:`ray status` 查看连接到头部节点的所有节点,以及 :code:" -"`VirtualClientEngine` 可用的总资源。" +"使用 :code:`ray status` 查看连接到头部节点的所有节点,以及 :code:`VirtualClientEngine` " +"可用的总资源。" #: ../../source/how-to-run-simulations.rst:126 msgid "" -"When attaching a new node to the head, all its resources (i.e. all CPUs, all " -"GPUs) will be visible by the head node. This means that the :code:" -"`VirtualClientEngine` can schedule as many `virtual` clients as that node " -"can possible run. In some settings you might want to exclude certain " -"resources from the simulation. 
You can do this by appending `--num-" -"cpus=` and/or `--num-gpus=` in any :" -"code:`ray start` command (including when starting the head)" +"When attaching a new node to the head, all its resources (i.e. all CPUs, " +"all GPUs) will be visible by the head node. This means that the " +":code:`VirtualClientEngine` can schedule as many `virtual` clients as " +"that node can possible run. In some settings you might want to exclude " +"certain resources from the simulation. You can do this by appending " +"`--num-cpus=` and/or `--num-" +"gpus=` in any :code:`ray start` command (including " +"when starting the head)" msgstr "" -"将新节点附加到头部节点时,头部节点将可见其所有资源(即所有 CPU 和 GPU)。这意" -"味着 :code:`VirtualClientEngine` 可以调度尽可能多的 \"虚拟 \"客户端来运行该节" -"点。在某些设置中,您可能希望将某些资源排除在模拟之外。为此,您可以在任何 :" -"code:`ray start` 命令(包括启动头部时)中添加 `--num-" -"cpus=`和/或 `--num-gpus=`" +"将新节点附加到头部节点时,头部节点将可见其所有资源(即所有 CPU 和 GPU)。这意味着 :code:`VirtualClientEngine`" +" 可以调度尽可能多的 \"虚拟 \"客户端来运行该节点。在某些设置中,您可能希望将某些资源排除在模拟之外。为此,您可以在任何 :code:`ray" +" start` 命令(包括启动头部时)中添加 `--num-cpus=`和/或 `--num-" +"gpus=`" #: ../../source/how-to-run-simulations.rst:132 msgid "Considerations for simulations" @@ -6383,26 +6164,23 @@ msgstr "模拟的注意事项" #: ../../source/how-to-run-simulations.rst:135 msgid "" -"We are actively working on these fronts so to make it trivial to run any FL " -"workload with Flower simulation." -msgstr "" -"我们正在积极开展这些方面的工作,以便使 FL 工作负载与 Flower 模拟的运行变得轻" -"而易举。" +"We are actively working on these fronts so to make it trivial to run any " +"FL workload with Flower simulation." +msgstr "我们正在积极开展这些方面的工作,以便使 FL 工作负载与 Flower 模拟的运行变得轻而易举。" #: ../../source/how-to-run-simulations.rst:138 msgid "" -"The current VCE allows you to run Federated Learning workloads in simulation " -"mode whether you are prototyping simple scenarios on your personal laptop or " -"you want to train a complex FL pipeline across multiple high-performance GPU " -"nodes. While we add more capabilities to the VCE, the points below highlight " -"some of the considerations to keep in mind when designing your FL pipeline " -"with Flower. We also highlight a couple of current limitations in our " -"implementation." +"The current VCE allows you to run Federated Learning workloads in " +"simulation mode whether you are prototyping simple scenarios on your " +"personal laptop or you want to train a complex FL pipeline across " +"multiple high-performance GPU nodes. While we add more capabilities to " +"the VCE, the points below highlight some of the considerations to keep in" +" mind when designing your FL pipeline with Flower. We also highlight a " +"couple of current limitations in our implementation." msgstr "" -"当前的 VCE 允许您在模拟模式下运行联邦学习工作负载,无论您是在个人笔记本电脑上" -"建立简单的场景原型,还是要在多个高性能 GPU 节点上训练复杂的 FL情景。虽然我们" -"为 VCE 增加了更多的功能,但以下几点强调了在使用 Flower 设计 FL 时需要注意的一" -"些事项。我们还强调了我们的实现中目前存在的一些局限性。" +"当前的 VCE 允许您在模拟模式下运行联邦学习工作负载,无论您是在个人笔记本电脑上建立简单的场景原型,还是要在多个高性能 GPU 节点上训练复杂的" +" FL情景。虽然我们为 VCE 增加了更多的功能,但以下几点强调了在使用 Flower 设计 FL " +"时需要注意的一些事项。我们还强调了我们的实现中目前存在的一些局限性。" #: ../../source/how-to-run-simulations.rst:141 msgid "GPU resources" @@ -6410,29 +6188,28 @@ msgstr "GPU 资源" #: ../../source/how-to-run-simulations.rst:143 msgid "" -"The VCE assigns a share of GPU memory to a client that specifies the key :" -"code:`num_gpus` in :code:`client_resources`. This being said, Ray (used " +"The VCE assigns a share of GPU memory to a client that specifies the key " +":code:`num_gpus` in :code:`client_resources`. 
This being said, Ray (used " "internally by the VCE) is by default:" msgstr "" -"VCE 会为指定 :code:`client_resources` 中 :code:`num_gpus` 关键字的客户端分配 " -"GPU 内存份额。也就是说,Ray(VCE 内部使用)是默认的:" +"VCE 会为指定 :code:`client_resources` 中 :code:`num_gpus` 关键字的客户端分配 GPU " +"内存份额。也就是说,Ray(VCE 内部使用)是默认的:" #: ../../source/how-to-run-simulations.rst:146 msgid "" -"not aware of the total VRAM available on the GPUs. This means that if you " -"set :code:`num_gpus=0.5` and you have two GPUs in your system with different " -"(e.g. 32GB and 8GB) VRAM amounts, they both would run 2 clients concurrently." +"not aware of the total VRAM available on the GPUs. This means that if you" +" set :code:`num_gpus=0.5` and you have two GPUs in your system with " +"different (e.g. 32GB and 8GB) VRAM amounts, they both would run 2 clients" +" concurrently." msgstr "" -"不知道 GPU 上可用的总 VRAM。这意味着,如果您设置 :code:`num_gpus=0.5`,而系统" -"中有两个不同(如 32GB 和 8GB)VRAM 的 GPU,它们都将同时运行 2 个客户端。" +"不知道 GPU 上可用的总 VRAM。这意味着,如果您设置 :code:`num_gpus=0.5`,而系统中有两个不同(如 32GB 和 " +"8GB)VRAM 的 GPU,它们都将同时运行 2 个客户端。" #: ../../source/how-to-run-simulations.rst:147 msgid "" "not aware of other unrelated (i.e. not created by the VCE) workloads are " "running on the GPU. Two takeaways from this are:" -msgstr "" -"不知道 GPU 上正在运行其他无关(即不是由 VCE 创建)的工作负载。从中可以得到以" -"下两点启示:" +msgstr "不知道 GPU 上正在运行其他无关(即不是由 VCE 创建)的工作负载。从中可以得到以下两点启示:" #: ../../source/how-to-run-simulations.rst:149 msgid "" @@ -6440,27 +6217,28 @@ msgid "" "aggregation (by instance when making use of the `evaluate method `_)" msgstr "" -"您的 Flower 服务器可能需要 GPU 来评估聚合后的 \"全局模型\"(例如在使用 \"评估" -"方法\"`_时)" +"您的 Flower 服务器可能需要 GPU 来评估聚合后的 \"全局模型\"(例如在使用 \"评估方法\"`_时)" #: ../../source/how-to-run-simulations.rst:150 msgid "" "If you want to run several independent Flower simulations on the same " -"machine you need to mask-out your GPUs with :code:" -"`CUDA_VISIBLE_DEVICES=\"\"` when launching your experiment." +"machine you need to mask-out your GPUs with " +":code:`CUDA_VISIBLE_DEVICES=\"\"` when launching your " +"experiment." msgstr "" -"如果您想在同一台机器上运行多个独立的 Flower 模拟,则需要在启动实验时使用 :" -"code:`CUDA_VISIBLE_DEVICES=\"\"` 屏蔽 GPU。" +"如果您想在同一台机器上运行多个独立的 Flower 模拟,则需要在启动实验时使用 " +":code:`CUDA_VISIBLE_DEVICES=\"\"` 屏蔽 GPU。" #: ../../source/how-to-run-simulations.rst:153 msgid "" -"In addition, the GPU resource limits passed to :code:`client_resources` are " -"not `enforced` (i.e. they can be exceeded) which can result in the situation " -"of client using more VRAM than the ratio specified when starting the " -"simulation." +"In addition, the GPU resource limits passed to :code:`client_resources` " +"are not `enforced` (i.e. they can be exceeded) which can result in the " +"situation of client using more VRAM than the ratio specified when " +"starting the simulation." msgstr "" -"此外,传递给 :code:`client_resources` 的 GPU 资源限制并不是 \"强制 \"的(即可" -"以超出),这可能导致客户端使用的 VRAM 超过启动模拟时指定的比例。" +"此外,传递给 :code:`client_resources` 的 GPU 资源限制并不是 \"强制 \"的(即可以超出),这可能导致客户端使用的" +" VRAM 超过启动模拟时指定的比例。" #: ../../source/how-to-run-simulations.rst:156 msgid "TensorFlow with GPUs" @@ -6468,40 +6246,42 @@ msgstr "使用 GPU 的 TensorFlow" #: ../../source/how-to-run-simulations.rst:158 msgid "" -"When `using a GPU with TensorFlow `_ " -"nearly your entire GPU memory of all your GPUs visible to the process will " -"be mapped. This is done by TensorFlow for optimization purposes. However, in " -"settings such as FL simulations where we want to split the GPU into multiple " -"`virtual` clients, this is not a desirable mechanism. 
Luckily we can disable " -"this default behavior by `enabling memory growth `_." +"When `using a GPU with TensorFlow " +"`_ nearly your entire GPU memory of" +" all your GPUs visible to the process will be mapped. This is done by " +"TensorFlow for optimization purposes. However, in settings such as FL " +"simulations where we want to split the GPU into multiple `virtual` " +"clients, this is not a desirable mechanism. Luckily we can disable this " +"default behavior by `enabling memory growth " +"`_." msgstr "" -"在 TensorFlow `_ 中使用 GPU 时,几乎所" -"有进程可见的 GPU 内存都将被映射。TensorFlow 这样做是出于优化目的。然而,在 " -"FL 模拟等设置中,我们希望将 GPU 分割成多个 \"虚拟 \"客户端,这并不是一个理想" -"的机制。幸运的是,我们可以通过 `启用内存增长 `_来禁用这一默认行为。" +"在 TensorFlow `_ 中使用 GPU 时,几乎所有进程可见的" +" GPU 内存都将被映射。TensorFlow 这样做是出于优化目的。然而,在 FL 模拟等设置中,我们希望将 GPU 分割成多个 \"虚拟 " +"\"客户端,这并不是一个理想的机制。幸运的是,我们可以通过 `启用内存增长 " +"`_来禁用这一默认行为。" #: ../../source/how-to-run-simulations.rst:160 msgid "" -"This would need to be done in the main process (which is where the server " -"would run) and in each Actor created by the VCE. By means of :code:" -"`actor_kwargs` we can pass the reserved key `\"on_actor_init_fn\"` in order " -"to specify a function to be executed upon actor initialization. In this " -"case, to enable GPU growth for TF workloads. It would look as follows:" +"This would need to be done in the main process (which is where the server" +" would run) and in each Actor created by the VCE. By means of " +":code:`actor_kwargs` we can pass the reserved key `\"on_actor_init_fn\"` " +"in order to specify a function to be executed upon actor initialization. " +"In this case, to enable GPU growth for TF workloads. It would look as " +"follows:" msgstr "" -"这需要在主进程(也就是服务器运行的地方)和 VCE 创建的每个角色中完成。通过 :" -"code:`actor_kwargs`,我们可以传递保留关键字`\"on_actor_init_fn\"`,以指定在角" -"色初始化时执行的函数。在本例中,为了使 TF 工作负载的 GPU 增长,它看起来如下:" +"这需要在主进程(也就是服务器运行的地方)和 VCE 创建的每个角色中完成。通过 " +":code:`actor_kwargs`,我们可以传递保留关键字`\"on_actor_init_fn\"`,以指定在角色初始化时执行的函数。在本例中,为了使" +" TF 工作负载的 GPU 增长,它看起来如下:" #: ../../source/how-to-run-simulations.rst:179 msgid "" "This is precisely the mechanism used in `Tensorflow/Keras Simulation " -"`_ " -"example." +"`_ example." msgstr "" -"这正是 \"Tensorflow/Keras 模拟 `_\"示例中使用的机制。" +"这正是 \"Tensorflow/Keras 模拟 " +"`_\"示例中使用的机制。" #: ../../source/how-to-run-simulations.rst:183 msgid "Multi-node setups" @@ -6509,35 +6289,35 @@ msgstr "多节点设置" #: ../../source/how-to-run-simulations.rst:185 msgid "" -"The VCE does not currently offer a way to control on which node a particular " -"`virtual` client is executed. In other words, if more than a single node " -"have the resources needed by a client to run, then any of those nodes could " -"get the client workload scheduled onto. Later in the FL process (i.e. in a " -"different round) the same client could be executed by a different node. " -"Depending on how your clients access their datasets, this might require " -"either having a copy of all dataset partitions on all nodes or a dataset " -"serving mechanism (e.g. using nfs, a database) to circumvent data " -"duplication." -msgstr "" -"VCE 目前不提供控制特定 \"虚拟 \"客户端在哪个节点上执行的方法。换句话说,如果" -"不止一个节点拥有客户端运行所需的资源,那么这些节点中的任何一个都可能被调度到" -"客户端工作负载上。在 FL 进程的稍后阶段(即在另一轮中),同一客户端可以由不同" -"的节点执行。根据客户访问数据集的方式,这可能需要在所有节点上复制所有数据集分" -"区,或采用数据集服务机制(如使用 nfs 或数据库)来避免数据重复。" +"The VCE does not currently offer a way to control on which node a " +"particular `virtual` client is executed. 
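A compact sketch of this workaround for TensorFlow/Keras workloads is shown below; it reuses the hypothetical ``client_fn`` from the earlier sketches, and ``on_actor_init_fn`` is the reserved ``actor_kwargs`` key mentioned above::

    import tensorflow as tf
    import flwr as fl


    def enable_tf_gpu_growth() -> None:
        # Ask TensorFlow to allocate GPU memory on demand instead of mapping
        # (nearly) all VRAM of every visible GPU up front.
        for gpu in tf.config.list_physical_devices("GPU"):
            tf.config.experimental.set_memory_growth(gpu, True)


    enable_tf_gpu_growth()  # main process, where the server runs

    fl.simulation.start_simulation(
        client_fn=client_fn,
        num_clients=100,
        config=fl.server.ServerConfig(num_rounds=3),
        strategy=fl.server.strategy.FedAvg(),
        client_resources={"num_cpus": 2, "num_gpus": 0.25},
        # Run the same function inside every actor spawned by the VirtualClientEngine.
        actor_kwargs={"on_actor_init_fn": enable_tf_gpu_growth},
    )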
In other words, if more than a " +"single node have the resources needed by a client to run, then any of " +"those nodes could get the client workload scheduled onto. Later in the FL" +" process (i.e. in a different round) the same client could be executed by" +" a different node. Depending on how your clients access their datasets, " +"this might require either having a copy of all dataset partitions on all " +"nodes or a dataset serving mechanism (e.g. using nfs, a database) to " +"circumvent data duplication." +msgstr "" +"VCE 目前不提供控制特定 \"虚拟 " +"\"客户端在哪个节点上执行的方法。换句话说,如果不止一个节点拥有客户端运行所需的资源,那么这些节点中的任何一个都可能被调度到客户端工作负载上。在 " +"FL " +"进程的稍后阶段(即在另一轮中),同一客户端可以由不同的节点执行。根据客户访问数据集的方式,这可能需要在所有节点上复制所有数据集分区,或采用数据集服务机制(如使用" +" nfs 或数据库)来避免数据重复。" #: ../../source/how-to-run-simulations.rst:187 msgid "" -"By definition virtual clients are `stateless` due to their ephemeral nature. " -"A client state can be implemented as part of the Flower client class but " -"users need to ensure this saved to persistent storage (e.g. a database, " -"disk) and that can be retrieve later by the same client regardless on which " -"node it is running from. This is related to the point above also since, in " -"some way, the client's dataset could be seen as a type of `state`." +"By definition virtual clients are `stateless` due to their ephemeral " +"nature. A client state can be implemented as part of the Flower client " +"class but users need to ensure this saved to persistent storage (e.g. a " +"database, disk) and that can be retrieve later by the same client " +"regardless on which node it is running from. This is related to the point" +" above also since, in some way, the client's dataset could be seen as a " +"type of `state`." msgstr "" -"根据定义,虚拟客户端是 \"无状态 \"的,因为它们具有即时性。客户机状态可以作为 " -"Flower 客户机类的一部分来实现,但用户需要确保将其保存到持久存储(如数据库、磁" -"盘)中,而且无论客户机在哪个节点上运行,都能在以后检索到。这也与上述观点有" -"关,因为在某种程度上,客户端的数据集可以被视为一种 \"状态\"。" +"根据定义,虚拟客户端是 \"无状态 \"的,因为它们具有即时性。客户机状态可以作为 Flower " +"客户机类的一部分来实现,但用户需要确保将其保存到持久存储(如数据库、磁盘)中,而且无论客户机在哪个节点上运行,都能在以后检索到。这也与上述观点有关,因为在某种程度上,客户端的数据集可以被视为一种" +" \"状态\"。" #: ../../source/how-to-save-and-load-model-checkpoints.rst:2 msgid "Save and load model checkpoints" @@ -6545,12 +6325,10 @@ msgstr "保存和加载模型检查点" #: ../../source/how-to-save-and-load-model-checkpoints.rst:4 msgid "" -"Flower does not automatically save model updates on the server-side. This " -"how-to guide describes the steps to save (and load) model checkpoints in " -"Flower." -msgstr "" -"Flower 不会在服务器端自动保存模型更新。本指南将介绍在 Flower 中保存(和加载)" -"模型检查点的步骤。" +"Flower does not automatically save model updates on the server-side. This" +" how-to guide describes the steps to save (and load) model checkpoints in" +" Flower." +msgstr "Flower 不会在服务器端自动保存模型更新。本指南将介绍在 Flower 中保存(和加载)模型检查点的步骤。" #: ../../source/how-to-save-and-load-model-checkpoints.rst:8 msgid "Model checkpointing" @@ -6558,22 +6336,22 @@ msgstr "模型检查点" #: ../../source/how-to-save-and-load-model-checkpoints.rst:10 msgid "" -"Model updates can be persisted on the server-side by customizing :code:" -"`Strategy` methods. Implementing custom strategies is always an option, but " -"for many cases it may be more convenient to simply customize an existing " -"strategy. The following code example defines a new :code:`SaveModelStrategy` " -"which customized the existing built-in :code:`FedAvg` strategy. In " -"particular, it customizes :code:`aggregate_fit` by calling :code:" -"`aggregate_fit` in the base class (:code:`FedAvg`). 
It then continues to " -"save returned (aggregated) weights before it returns those aggregated " -"weights to the caller (i.e., the server):" -msgstr "" -"模型更新可通过自定义 :code:`Strategy` 方法在服务器端持久化。实现自定义策略始" -"终是一种选择,但在许多情况下,简单地自定义现有策略可能更方便。下面的代码示例" -"定义了一个新的 :code:`SaveModelStrategy`,它自定义了现有的内置 :code:" -"`FedAvg` 策略。特别是,它通过调用基类(:code:`FedAvg`)中的 :code:" -"`aggregate_fit` 来定制 :code:`aggregate_fit`。然后继续保存返回的(聚合)参" -"数,然后再将这些聚合参数返回给调用者(即服务器):" +"Model updates can be persisted on the server-side by customizing " +":code:`Strategy` methods. Implementing custom strategies is always an " +"option, but for many cases it may be more convenient to simply customize " +"an existing strategy. The following code example defines a new " +":code:`SaveModelStrategy` which customized the existing built-in " +":code:`FedAvg` strategy. In particular, it customizes " +":code:`aggregate_fit` by calling :code:`aggregate_fit` in the base class " +"(:code:`FedAvg`). It then continues to save returned (aggregated) weights" +" before it returns those aggregated weights to the caller (i.e., the " +"server):" +msgstr "" +"模型更新可通过自定义 :code:`Strategy` " +"方法在服务器端持久化。实现自定义策略始终是一种选择,但在许多情况下,简单地自定义现有策略可能更方便。下面的代码示例定义了一个新的 " +":code:`SaveModelStrategy`,它自定义了现有的内置 :code:`FedAvg` " +"策略。特别是,它通过调用基类(:code:`FedAvg`)中的 :code:`aggregate_fit` 来定制 " +":code:`aggregate_fit`。然后继续保存返回的(聚合)参数,然后再将这些聚合参数返回给调用者(即服务器):" #: ../../source/how-to-save-and-load-model-checkpoints.rst:47 msgid "Save and load PyTorch checkpoints" @@ -6581,32 +6359,29 @@ msgstr "保存和加载 PyTorch 检查点" #: ../../source/how-to-save-and-load-model-checkpoints.rst:49 msgid "" -"Similar to the previous example but with a few extra steps, we'll show how " -"to store a PyTorch checkpoint we'll use the ``torch.save`` function. " -"Firstly, ``aggregate_fit`` returns a ``Parameters`` object that has to be " -"transformed into a list of NumPy ``ndarray``'s, then those are transformed " -"into the PyTorch ``state_dict`` following the ``OrderedDict`` class " -"structure." +"Similar to the previous example but with a few extra steps, we'll show " +"how to store a PyTorch checkpoint we'll use the ``torch.save`` function. " +"Firstly, ``aggregate_fit`` returns a ``Parameters`` object that has to be" +" transformed into a list of NumPy ``ndarray``'s, then those are " +"transformed into the PyTorch ``state_dict`` following the ``OrderedDict``" +" class structure." msgstr "" -"与前面的例子类似,但多了几个步骤,我们将展示如何存储一个 PyTorch 检查点,我们" -"将使用 ``torch.save`` 函数。首先,``aggregate_fit`` 返回一个 ``Parameters`` " -"对象,它必须被转换成一个 NumPy ``ndarray`` 的列表,然后这些对象按照 " -"``OrderedDict`` 类结构被转换成 PyTorch `state_dict` 对象。" +"与前面的例子类似,但多了几个步骤,我们将展示如何存储一个 PyTorch 检查点,我们将使用 ``torch.save`` " +"函数。首先,``aggregate_fit`` 返回一个 ``Parameters`` 对象,它必须被转换成一个 NumPy " +"``ndarray`` 的列表,然后这些对象按照 ``OrderedDict`` 类结构被转换成 PyTorch `state_dict` 对象。" #: ../../source/how-to-save-and-load-model-checkpoints.rst:85 msgid "" -"To load your progress, you simply append the following lines to your code. " -"Note that this will iterate over all saved checkpoints and load the latest " -"one:" -msgstr "" -"要加载进度,只需在代码中添加以下几行。请注意,这将遍历所有已保存的检查点,并" -"加载最新的检查点:" +"To load your progress, you simply append the following lines to your " +"code. 
Note that this will iterate over all saved checkpoints and load the" +" latest one:" +msgstr "要加载进度,只需在代码中添加以下几行。请注意,这将遍历所有已保存的检查点,并加载最新的检查点:" #: ../../source/how-to-save-and-load-model-checkpoints.rst:97 #, fuzzy msgid "" -"Return/use this object of type ``Parameters`` wherever necessary, such as in " -"the ``initial_parameters`` when defining a ``Strategy``." +"Return/use this object of type ``Parameters`` wherever necessary, such as" +" in the ``initial_parameters`` when defining a ``Strategy``." msgstr "" "在必要时返回/使用此 ``Parameters`` 类型的对象,例如在定义 ``Strategy` 时的 " "``initial_parameters` 中。" @@ -6617,23 +6392,23 @@ msgstr "升级至 Flower 1.0" #: ../../source/how-to-upgrade-to-flower-1.0.rst:4 msgid "" -"Flower 1.0 is here. Along with new features, Flower 1.0 provides a stable " -"foundation for future growth. Compared to Flower 0.19 (and other 0.x series " -"releases), there are a few breaking changes that make it necessary to change " -"the code of existing 0.x-series projects." +"Flower 1.0 is here. Along with new features, Flower 1.0 provides a stable" +" foundation for future growth. Compared to Flower 0.19 (and other 0.x " +"series releases), there are a few breaking changes that make it necessary" +" to change the code of existing 0.x-series projects." msgstr "" -"Flower 1.0 正式发布。除了新功能,Flower 1.0 还为未来的发展奠定了稳定的基础。" -"与 Flower 0.19(以及其他 0.x 系列版本)相比,有一些破坏性改动需要修改现有 0." -"x 系列项目的代码。" +"Flower 1.0 正式发布。除了新功能,Flower 1.0 还为未来的发展奠定了稳定的基础。与 Flower 0.19(以及其他 0.x " +"系列版本)相比,有一些破坏性改动需要修改现有 0.x 系列项目的代码。" #: ../../source/how-to-upgrade-to-flower-1.0.rst:8 +#: ../../source/how-to-upgrade-to-flower-next.rst:43 msgid "Install update" msgstr "安装更新" #: ../../source/how-to-upgrade-to-flower-1.0.rst:10 msgid "" -"Here's how to update an existing installation to Flower 1.0 using either pip " -"or Poetry:" +"Here's how to update an existing installation to Flower 1.0 using either " +"pip or Poetry:" msgstr "下面介绍如何使用 pip 或 Poetry 将现有安装更新到 Flower 1.0:" #: ../../source/how-to-upgrade-to-flower-1.0.rst:12 @@ -6644,15 +6419,13 @@ msgstr "pip: 安装时添加 ``-U``." msgid "" "``python -m pip install -U flwr`` (when using ``start_server`` and " "``start_client``)" -msgstr "" -"`python -m pip install -U flwr``(当使用`start_server`和`start_client`时)" +msgstr "`python -m pip install -U flwr``(当使用`start_server`和`start_client`时)" #: ../../source/how-to-upgrade-to-flower-1.0.rst:15 msgid "" "``python -m pip install -U flwr[simulation]`` (when using " "``start_simulation``)" -msgstr "" -"`python -m pip install -U flwr[simulation]``(当使用`start_simulation``时)" +msgstr "`python -m pip install -U flwr[simulation]``(当使用`start_simulation``时)" #: ../../source/how-to-upgrade-to-flower-1.0.rst:17 msgid "" @@ -6660,23 +6433,23 @@ msgid "" "reinstall (don't forget to delete ``poetry.lock`` via ``rm poetry.lock`` " "before running ``poetry install``)." 
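To illustrate the loading step described just above, a minimal sketch (assuming checkpoints were written as ``round-*-weights.npz`` by the previous sketch) could be:

.. code-block:: python

    import glob
    import os

    import numpy as np

    from flwr.common import ndarrays_to_parameters
    from flwr.server.strategy import FedAvg

    # Iterate over all saved checkpoints and pick the most recent one
    checkpoints = glob.glob("round-*-weights.npz")
    if checkpoints:
        latest = max(checkpoints, key=os.path.getctime)
        npz = np.load(latest)
        ndarrays = [npz[name] for name in npz.files]
        # Convert back into a Flower ``Parameters`` object ...
        parameters = ndarrays_to_parameters(ndarrays)
        # ... and use it wherever needed, e.g. as ``initial_parameters``
        strategy = FedAvg(initial_parameters=parameters)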
msgstr "" -"Poetry:更新 ``pyproject.toml`` 中的 ``flwr`` 依赖包,然后重新安装(运行 " -"``poetry install`` 前,别忘了通过 ``rm poetry.lock` 删除 ``poetry.lock`)。" +"Poetry:更新 ``pyproject.toml`` 中的 ``flwr`` 依赖包,然后重新安装(运行 ``poetry install``" +" 前,别忘了通过 ``rm poetry.lock` 删除 ``poetry.lock`)。" #: ../../source/how-to-upgrade-to-flower-1.0.rst:19 -msgid "" -"``flwr = \"^1.0.0\"`` (when using ``start_server`` and ``start_client``)" +msgid "``flwr = \"^1.0.0\"`` (when using ``start_server`` and ``start_client``)" msgstr "``flwr = \"^1.0.0\"`` (当使用 ``start_server` 和 ``start_client` 时)" #: ../../source/how-to-upgrade-to-flower-1.0.rst:20 msgid "" -"``flwr = { version = \"^1.0.0\", extras = [\"simulation\"] }`` (when using " -"``start_simulation``)" +"``flwr = { version = \"^1.0.0\", extras = [\"simulation\"] }`` (when " +"using ``start_simulation``)" msgstr "" -"``flwr = { version = \"^1.0.0\", extras = [\"simulation\"] }``(当使用" -"``start_simulation``时)" +"``flwr = { version = \"^1.0.0\", extras = [\"simulation\"] " +"}``(当使用``start_simulation``时)" #: ../../source/how-to-upgrade-to-flower-1.0.rst:24 +#: ../../source/how-to-upgrade-to-flower-next.rst:100 msgid "Required changes" msgstr "所需变更" @@ -6698,17 +6471,22 @@ msgstr "将所有参数作为关键字参数传递(而不是位置参数)。 msgid "" "Flower 0.19 (positional arguments): ``start_client(\"127.0.0.1:8080\", " "FlowerClient())``" -msgstr "" -"Flower 0.19 (位置参数): ``start_client(\"127.0.0.1:8080\", FlowerClient())``" +msgstr "Flower 0.19 (位置参数): ``start_client(\"127.0.0.1:8080\", FlowerClient())``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:34 msgid "" "Flower 1.0 (keyword arguments): " -"``start_client(server_address=\"127.0.0.1:8080\", client=FlowerClient())``" +"``start_client(server_address=\"127.0.0.1:8080\", " +"client=FlowerClient())``" msgstr "" "Flower 1.0(关键字参数): ``start_client(server_address=\"127.0.0.1:8080\", " "client=FlowerClient())``" +#: ../../source/how-to-upgrade-to-flower-1.0.rst:37 +#: ../../source/ref-api/flwr.client.Client.rst:2 +msgid "Client" +msgstr "客户端" + #: ../../source/how-to-upgrade-to-flower-1.0.rst:39 msgid "" "Subclasses of ``NumPyClient``: change ``def get_parameters(self):``` to " @@ -6722,8 +6500,8 @@ msgid "" "Subclasses of ``Client``: change ``def get_parameters(self):``` to ``def " "get_parameters(self, ins: GetParametersIns):``" msgstr "" -"客户端 \"的子类:将 \"get_parameters(self): \"改为 \"get_parameters(self, " -"ins: GetParametersIns):\"" +"客户端 \"的子类:将 \"get_parameters(self): \"改为 \"get_parameters(self, ins: " +"GetParametersIns):\"" #: ../../source/how-to-upgrade-to-flower-1.0.rst:43 msgid "Strategies / ``start_server`` / ``start_simulation``" @@ -6747,30 +6525,29 @@ msgstr "" #: ../../source/how-to-upgrade-to-flower-1.0.rst:48 msgid "" -"Flower 1.0: ``start_server(..., config=flwr.server." -"ServerConfig(num_rounds=3, round_timeout=600.0), ...)``" +"Flower 1.0: ``start_server(..., " +"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " +"...)``" msgstr "" -"Flower 1.0: ``start_server(..., config=flwr.server." 
-"ServerConfig(num_rounds=3, round_timeout=600.0), ...)``" +"Flower 1.0: ``start_server(..., " +"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " +"...)``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:50 msgid "" "Replace ``num_rounds=1`` in ``start_simulation`` with the new " "``config=ServerConfig(...)`` (see previous item)" -msgstr "" -"将`start_simulation``中的`num_rounds=1``替换为新的`config=ServerConfig(...)`" -"(参见前一项)" +msgstr "将`start_simulation``中的`num_rounds=1``替换为新的`config=ServerConfig(...)`(参见前一项)" #: ../../source/how-to-upgrade-to-flower-1.0.rst:51 msgid "" "Remove ``force_final_distributed_eval`` parameter from calls to " -"``start_server``. Distributed evaluation on all clients can be enabled by " -"configuring the strategy to sample all clients for evaluation after the last " -"round of training." +"``start_server``. Distributed evaluation on all clients can be enabled by" +" configuring the strategy to sample all clients for evaluation after the " +"last round of training." msgstr "" -"删除调用 ``start_server`` 时的 ``force_final_distributed_eval` 参数。可以通过" -"配置策略,在最后一轮训练后对所有客户端进行抽样评估,从而启用对所有客户端的分" -"布式评估。" +"删除调用 ``start_server`` 时的 ``force_final_distributed_eval` " +"参数。可以通过配置策略,在最后一轮训练后对所有客户端进行抽样评估,从而启用对所有客户端的分布式评估。" #: ../../source/how-to-upgrade-to-flower-1.0.rst:52 msgid "Rename parameter/ndarray conversion functions:" @@ -6786,18 +6563,17 @@ msgstr "``weights_to_parameters`` --> ``ndarrays_to_parameters``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:57 msgid "" -"Strategy initialization: if the strategy relies on the default values for " -"``fraction_fit`` and ``fraction_evaluate``, set ``fraction_fit`` and " +"Strategy initialization: if the strategy relies on the default values for" +" ``fraction_fit`` and ``fraction_evaluate``, set ``fraction_fit`` and " "``fraction_evaluate`` manually to ``0.1``. Projects that do not manually " "create a strategy (by calling ``start_server`` or ``start_simulation`` " -"without passing a strategy instance) should now manually initialize FedAvg " -"with ``fraction_fit`` and ``fraction_evaluate`` set to ``0.1``." +"without passing a strategy instance) should now manually initialize " +"FedAvg with ``fraction_fit`` and ``fraction_evaluate`` set to ``0.1``." msgstr "" -"策略初始化:如果策略依赖于 ``fraction_fit`` 和 ``fraction_evaluate`` 的默认" -"值,请手动将 ``fraction_fit`` 和 ``fraction_evaluate`` 设置为 ``0.1``。未手动" -"创建策略的项目(调用 ``start_server` 或 ``start_simulation` 时未传递策略实" -"例)现在应手动初始化 FedAvg,并将 `fraction_fit` 和 `fraction_evaluate` 设为 " -"`0.1``。" +"策略初始化:如果策略依赖于 ``fraction_fit`` 和 ``fraction_evaluate`` 的默认值,请手动将 " +"``fraction_fit`` 和 ``fraction_evaluate`` 设置为 ``0.1``。未手动创建策略的项目(调用 " +"``start_server` 或 ``start_simulation` 时未传递策略实例)现在应手动初始化 FedAvg,并将 " +"`fraction_fit` 和 `fraction_evaluate` 设为 `0.1``。" #: ../../source/how-to-upgrade-to-flower-1.0.rst:58 msgid "Rename built-in strategy parameters (e.g., ``FedAvg``):" @@ -6822,8 +6598,8 @@ msgid "" "``configure_evaluate``, ``aggregate_evaluate``, and ``evaluate_fn``." 
msgstr "" "将 `rnd` 更名为 `server_round`。这会影响多个方法和函数,例如 " -"``configure_fit``、``aggregate_fit``、``configure_evaluate``、" -"`aggregate_evaluate`` 和 ``evaluate_fn``。" +"``configure_fit``、``aggregate_fit``、``configure_evaluate``、`aggregate_evaluate``" +" 和 ``evaluate_fn``。" #: ../../source/how-to-upgrade-to-flower-1.0.rst:65 msgid "Add ``server_round`` and ``config`` to ``evaluate_fn``:" @@ -6831,19 +6607,21 @@ msgstr "在 ``evaluate_fn` 中添加 ``server_round` 和 ``config`:" #: ../../source/how-to-upgrade-to-flower-1.0.rst:67 msgid "" -"Flower 0.19: ``def evaluate(parameters: NDArrays) -> Optional[Tuple[float, " -"Dict[str, Scalar]]]:``" +"Flower 0.19: ``def evaluate(parameters: NDArrays) -> " +"Optional[Tuple[float, Dict[str, Scalar]]]:``" msgstr "" -"Flower 0.19: ``def evaluate(parameters: NDArrays) -> Optional[Tuple[float, " -"Dict[str, Scalar]]]:``" +"Flower 0.19: ``def evaluate(parameters: NDArrays) -> " +"Optional[Tuple[float, Dict[str, Scalar]]]:``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:68 msgid "" -"Flower 1.0: ``def evaluate(server_round: int, parameters: NDArrays, config: " -"Dict[str, Scalar]) -> Optional[Tuple[float, Dict[str, Scalar]]]:``" +"Flower 1.0: ``def evaluate(server_round: int, parameters: NDArrays, " +"config: Dict[str, Scalar]) -> Optional[Tuple[float, Dict[str, " +"Scalar]]]:``" msgstr "" -"Flower 1.0: ``def evaluate(server_round: int, parameters: NDArrays, config: " -"Dict[str, Scalar]) -> Optional[Tuple[float, Dict[str, Scalar]]]:``" +"Flower 1.0: ``def evaluate(server_round: int, parameters: NDArrays, " +"config: Dict[str, Scalar]) -> Optional[Tuple[float, Dict[str, " +"Scalar]]]:``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:71 msgid "Custom strategies" @@ -6851,23 +6629,22 @@ msgstr "定制策略" #: ../../source/how-to-upgrade-to-flower-1.0.rst:73 msgid "" -"The type of parameter ``failures`` has changed from ``List[BaseException]`` " -"to ``List[Union[Tuple[ClientProxy, FitRes], BaseException]]`` (in " -"``aggregate_fit``) and ``List[Union[Tuple[ClientProxy, EvaluateRes], " -"BaseException]]`` (in ``aggregate_evaluate``)" +"The type of parameter ``failures`` has changed from " +"``List[BaseException]`` to ``List[Union[Tuple[ClientProxy, FitRes], " +"BaseException]]`` (in ``aggregate_fit``) and " +"``List[Union[Tuple[ClientProxy, EvaluateRes], BaseException]]`` (in " +"``aggregate_evaluate``)" msgstr "" -"参数``failures``的类型已从``List[BaseException]``变为" -"``List[Union[Tuple[ClientProxy, FitRes], BaseException]]``(在" -"``agregate_fit``中)和``List[Union[Tuple[ClientProxy, EvaluateRes], " -"BaseException]]``(在``agregate_evaluate``中)" +"参数``failures``的类型已从``List[BaseException]``变为``List[Union[Tuple[ClientProxy," +" FitRes], " +"BaseException]]``(在``agregate_fit``中)和``List[Union[Tuple[ClientProxy, " +"EvaluateRes], BaseException]]``(在``agregate_evaluate``中)" #: ../../source/how-to-upgrade-to-flower-1.0.rst:74 msgid "" "The ``Strategy`` method ``evaluate`` now receives the current round of " "federated learning/evaluation as the first parameter:" -msgstr "" -"``Strategy``方法 的``evaluate``现在会接收当前一轮联邦学习/评估作为第一个参" -"数:" +msgstr "``Strategy``方法 的``evaluate``现在会接收当前一轮联邦学习/评估作为第一个参数:" #: ../../source/how-to-upgrade-to-flower-1.0.rst:76 msgid "" @@ -6879,11 +6656,11 @@ msgstr "" #: ../../source/how-to-upgrade-to-flower-1.0.rst:77 msgid "" -"Flower 1.0: ``def evaluate(self, server_round: int, parameters: Parameters) -" -"> Optional[Tuple[float, Dict[str, Scalar]]]:``" +"Flower 1.0: ``def evaluate(self, server_round: int, parameters: " +"Parameters) -> Optional[Tuple[float, 
Dict[str, Scalar]]]:``" msgstr "" -"Flower 1.0: ``def evaluate(self, server_round: int, parameters: Parameters) -" -"> Optional[Tuple[float, Dict[str, Scalar]]]:``" +"Flower 1.0: ``def evaluate(self, server_round: int, parameters: " +"Parameters) -> Optional[Tuple[float, Dict[str, Scalar]]]:``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:80 msgid "Optional improvements" @@ -6898,11 +6675,12 @@ msgstr "除了上述必要的改动之外,还有一些潜在的改进措施: #: ../../source/how-to-upgrade-to-flower-1.0.rst:84 msgid "" "Remove \"placeholder\" methods from subclasses of ``Client`` or " -"``NumPyClient``. If you, for example, use server-side evaluation, then empty " -"placeholder implementations of ``evaluate`` are no longer necessary." +"``NumPyClient``. If you, for example, use server-side evaluation, then " +"empty placeholder implementations of ``evaluate`` are no longer " +"necessary." msgstr "" -"删除 ``Client`` 或 ``NumPyClient`` 子类中的 \"占位符 \"方法。例如,如果你使用" -"服务器端评估,那么就不再需要``evaluate``的 \"空占位符 \"实现。" +"删除 ``Client`` 或 ``NumPyClient`` 子类中的 \"占位符 " +"\"方法。例如,如果你使用服务器端评估,那么就不再需要``evaluate``的 \"空占位符 \"实现。" #: ../../source/how-to-upgrade-to-flower-1.0.rst:85 msgid "" @@ -6910,116 +6688,313 @@ msgid "" "``start_simulation(..., config=flwr.server.ServerConfig(num_rounds=3, " "round_timeout=600.0), ...)``" msgstr "" -"通过 ``start_simulation`` 配置循环超时: ``start_simulation(..., config=flwr." -"server.ServerConfig(num_rounds=3, round_timeout=600.0), ...)``" +"通过 ``start_simulation`` 配置循环超时: ``start_simulation(..., " +"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " +"...)``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:89 +#: ../../source/how-to-upgrade-to-flower-next.rst:317 msgid "Further help" msgstr "更多帮助" #: ../../source/how-to-upgrade-to-flower-1.0.rst:91 msgid "" -"Most official `Flower code examples `_ are already updated to Flower 1.0, they can serve as a " -"reference for using the Flower 1.0 API. If there are further questions, " -"`join the Flower Slack `_ and use the channel " -"``#questions``." +"Most official `Flower code examples " +"`_ are already updated" +" to Flower 1.0, they can serve as a reference for using the Flower 1.0 " +"API. If there are further questions, `join the Flower Slack " +"`_ and use the channel ``#questions``." msgstr "" -"大多数官方的 `Flower 代码示例 `_ 已经更新到 Flower 1.0,它们可以作为使用 Flower 1.0 API 的参考。如" -"果还有其他问题,请加入 Flower Slack `_ 并使用 " -"\"#questions``\"。" +"大多数官方的 `Flower 代码示例 `_" +" 已经更新到 Flower 1.0,它们可以作为使用 Flower 1.0 API 的参考。如果还有其他问题,请加入 Flower Slack " +"`_ 并使用 \"#questions``\"。" -#: ../../source/how-to-use-built-in-mods.rst:2 +#: ../../source/how-to-upgrade-to-flower-next.rst:2 #, fuzzy -msgid "Use Built-in Mods" -msgstr "使用内置调制器" +msgid "Upgrade to Flower Next" +msgstr "升级至 Flower 1.0" -#: ../../source/how-to-use-built-in-mods.rst:4 -#, fuzzy +#: ../../source/how-to-upgrade-to-flower-next.rst:4 msgid "" -"**Note: This tutorial covers experimental features. The functionality and " -"interfaces may change in future versions.**" -msgstr "**注:本教程涵盖实验性功能。功能和界面可能会在未来版本中发生变化。" +"Welcome to the migration guide for updating Flower to Flower Next! " +"Whether you're a seasoned user or just getting started, this guide will " +"help you smoothly transition your existing setup to take advantage of the" +" latest features and improvements in Flower Next, starting from version " +"1.8." 
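For reference, a server-side evaluation function with the Flower 1.0 signature quoted above might be sketched like this (the constant loss and accuracy are placeholders standing in for real evaluation logic):

.. code-block:: python

    from typing import Dict, Optional, Tuple

    from flwr.common import NDArrays, Scalar
    from flwr.server.strategy import FedAvg


    def evaluate(
        server_round: int,
        parameters: NDArrays,
        config: Dict[str, Scalar],
    ) -> Optional[Tuple[float, Dict[str, Scalar]]]:
        # Evaluate ``parameters`` on server-side data here; constants keep
        # the sketch self-contained
        return 0.0, {"accuracy": 0.0}


    strategy = FedAvg(evaluate_fn=evaluate)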
+msgstr "" -#: ../../source/how-to-use-built-in-mods.rst:6 -#, fuzzy +#: ../../source/how-to-upgrade-to-flower-next.rst:9 msgid "" -"In this tutorial, we will learn how to utilize built-in mods to augment the " -"behavior of a ``ClientApp``. Mods (sometimes also called Modifiers) allow us " -"to perform operations before and after a task is processed in the " -"``ClientApp``." +"This guide shows how to reuse pre-``1.8`` Flower code with minimum code " +"changes by using the *compatibility layer* in Flower Next. In another " +"guide, we will show how to run Flower Next end-to-end with pure Flower " +"Next APIs." msgstr "" -"在本教程中,我们将学习如何利用内置模块来增强 ``ClientApp`` 的行为。修改器(有" -"时也称为修改器)允许我们在 ``ClientApp`` 处理任务之前和之后执行操作。" -#: ../../source/how-to-use-built-in-mods.rst:9 -#, fuzzy -msgid "What are Mods?" -msgstr "什么是 Mods?" +#: ../../source/how-to-upgrade-to-flower-next.rst:13 +msgid "Let's dive in!" +msgstr "" -#: ../../source/how-to-use-built-in-mods.rst:11 +#: ../../source/how-to-upgrade-to-flower-next.rst:48 #, fuzzy msgid "" -"A Mod is a callable that wraps around a ``ClientApp``. It can manipulate or " -"inspect the incoming ``Message`` and the resulting outgoing ``Message``. The " -"signature for a ``Mod`` is as follows:" -msgstr "" -"Mod 是包裹在 ``ClientApp`` 周围的可调用程序。它可以操作或检查传入的 " -"``Message`` 和由此产生的传出的 ``Message`` 。一个 ``Mod`` 的签名如下:" +"Here's how to update an existing installation of Flower to Flower Next " +"with ``pip``:" +msgstr "下面介绍如何使用 pip 或 Poetry 将现有安装更新到 Flower 1.0:" -#: ../../source/how-to-use-built-in-mods.rst:18 +#: ../../source/how-to-upgrade-to-flower-next.rst:54 #, fuzzy -msgid "A typical mod function might look something like this:" -msgstr "一个典型的修改函数可能是这样的:" +msgid "or if you need Flower Next with simulation:" +msgstr "启动 Flower 模拟" -#: ../../source/how-to-use-built-in-mods.rst:31 -#, fuzzy -msgid "Using Mods" -msgstr "使用修改器" +#: ../../source/how-to-upgrade-to-flower-next.rst:61 +msgid "" +"Ensure you set the following version constraint in your " +"``requirements.txt``" +msgstr "" -#: ../../source/how-to-use-built-in-mods.rst:33 -#, fuzzy -msgid "To use mods in your ``ClientApp``, you can follow these steps:" -msgstr "要在您的 ``ClientApp`` 中使用 mod,可以按照以下步骤操作:" +#: ../../source/how-to-upgrade-to-flower-next.rst:71 +msgid "or ``pyproject.toml``:" +msgstr "" -#: ../../source/how-to-use-built-in-mods.rst:36 +#: ../../source/how-to-upgrade-to-flower-next.rst:82 #, fuzzy -msgid "1. Import the required mods" -msgstr "1. 导入所需修改" +msgid "Using Poetry" +msgstr "使用 pip" -#: ../../source/how-to-use-built-in-mods.rst:38 +#: ../../source/how-to-upgrade-to-flower-next.rst:84 #, fuzzy -msgid "First, import the built-in mod you intend to use:" -msgstr "首先,导入您打算使用的内置模式:" +msgid "" +"Update the ``flwr`` dependency in ``pyproject.toml`` and then reinstall " +"(don't forget to delete ``poetry.lock`` via ``rm poetry.lock`` before " +"running ``poetry install``)." +msgstr "" +"Poetry:更新 ``pyproject.toml`` 中的 ``flwr`` 依赖包,然后重新安装(运行 ``poetry install``" +" 前,别忘了通过 ``rm poetry.lock` 删除 ``poetry.lock`)。" -#: ../../source/how-to-use-built-in-mods.rst:46 +#: ../../source/how-to-upgrade-to-flower-next.rst:86 #, fuzzy -msgid "2. Define your client function" -msgstr "2. 
定义客户功能" +msgid "" +"Ensure you set the following version constraint in your " +"``pyproject.toml``:" +msgstr "将 ``pyproject.toml`` 中的次要版本增加一个。" -#: ../../source/how-to-use-built-in-mods.rst:48 -#, fuzzy +#: ../../source/how-to-upgrade-to-flower-next.rst:102 msgid "" -"Define your client function (``client_fn``) that will be wrapped by the " -"mod(s):" -msgstr "定义将被 mod 封装的客户端函数(``client_fn``):" +"In Flower Next, the *infrastructure* and *application layers* have been " +"decoupled. Instead of starting a client in code via ``start_client()``, " +"you create a |clientapp_link|_ and start it via the command line. Instead" +" of starting a server in code via ``start_server()``, you create a " +"|serverapp_link|_ and start it via the command line. The long-running " +"components of server and client are called SuperLink and SuperNode. The " +"following non-breaking changes that require manual updates and allow you " +"to run your project both in the traditional way and in the Flower Next " +"way:" +msgstr "" -#: ../../source/how-to-use-built-in-mods.rst:57 +#: ../../source/how-to-upgrade-to-flower-next.rst:109 #, fuzzy -msgid "3. Create the ``ClientApp`` with mods" -msgstr "3. 用模块创建 ``ClientApp``" +msgid "|clientapp_link|_" +msgstr "客户端" -#: ../../source/how-to-use-built-in-mods.rst:59 -#, fuzzy +#: ../../source/how-to-upgrade-to-flower-next.rst:110 msgid "" -"Create your ``ClientApp`` and pass the mods as a list to the ``mods`` " -"argument. The order in which you provide the mods matters:" +"Wrap your existing client with |clientapp_link|_ instead of launching it " +"via |startclient_link|_. Here's an example:" msgstr "" -"创建您的 ``ClientApp`` 并将 mods 作为列表传递给 ``mods`` 参数。提供 mod 的顺" -"序很重要:" -#: ../../source/how-to-use-built-in-mods.rst:72 +#: ../../source/how-to-upgrade-to-flower-next.rst:132 +#, fuzzy +msgid "|serverapp_link|_" +msgstr "服务器" + +#: ../../source/how-to-upgrade-to-flower-next.rst:133 +msgid "" +"Wrap your existing strategy with |serverapp_link|_ instead of starting " +"the server via |startserver_link|_. Here's an example:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:154 +msgid "Deployment" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:155 +msgid "" +"Run the ``SuperLink`` using |flowernext_superlink_link|_ before running, " +"in sequence, |flowernext_clientapp_link|_ (2x) and " +"|flowernext_serverapp_link|_. There is no need to execute `client.py` and" +" `server.py` as Python scripts." +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:158 +msgid "" +"Here's an example to start the server without HTTPS (only for " +"prototyping):" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:174 +msgid "" +"Here's another example to start with HTTPS. Use the ``--certificates`` " +"command line argument to pass paths to (CA certificate, server " +"certificate, and server private key)." +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:201 +#, fuzzy +msgid "Simulation in CLI" +msgstr "运行模拟" + +#: ../../source/how-to-upgrade-to-flower-next.rst:202 +msgid "" +"Wrap your existing client and strategy with |clientapp_link|_ and " +"|serverapp_link|_, respectively. There is no need to use |startsim_link|_" +" anymore. Here's an example:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:232 +msgid "" +"Run |flower_simulation_link|_ in CLI and point to the ``server_app`` / " +"``client_app`` object in the code instead of executing the Python script." 
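The guide's own snippets are not extracted into this catalog; a compressed sketch of the ``ServerApp`` wrapping and the deployment sequence described above could look like the following (module names, flag spellings, and round counts are assumptions based on the 1.8 CLI, not verbatim from the guide):

.. code-block:: python

    # server.py
    from flwr.server import ServerApp, ServerConfig
    from flwr.server.strategy import FedAvg

    # The existing strategy is wrapped in a ServerApp instead of being
    # passed to flwr.server.start_server(); the client is wrapped in a
    # ClientApp in the same spirit (see the ClientApp sketches elsewhere).
    app = ServerApp(
        config=ServerConfig(num_rounds=3),
        strategy=FedAvg(),
    )

    # The long-running components are then started from the command line,
    # for example (prototyping only, no TLS):
    #
    #   flower-superlink --insecure
    #   flower-client-app client:app --insecure   # run once per SuperNode
    #   flower-server-app server:app --insecure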
+" Here's an example (assuming the ``server_app`` and ``client_app`` " +"objects are in a ``sim.py`` module):" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:249 +msgid "" +"Set default resources for each |clientapp_link|_ using the ``--backend-" +"config`` command line argument instead of setting the " +"``client_resources`` argument in |startsim_link|_. Here's an example:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:275 +msgid "Simulation in a Notebook" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:276 +msgid "" +"Run |runsim_link|_ in your notebook instead of |startsim_link|_. Here's " +"an example:" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:319 +#, fuzzy +msgid "" +"Some official `Flower code examples `_ " +"are already updated to Flower Next so they can serve as a reference for " +"using the Flower Next API. If there are further questions, `join the " +"Flower Slack `_ and use the channel " +"``#questions``. You can also `participate in Flower Discuss " +"`_ where you can find us answering questions," +" or share and learn from others about migrating to Flower Next." +msgstr "" +"大多数官方的 `Flower 代码示例 `_" +" 已经更新到 Flower 1.0,它们可以作为使用 Flower 1.0 API 的参考。如果还有其他问题,请加入 Flower Slack " +"`_ 并使用 \"#questions``\"。" + +#: ../../source/how-to-upgrade-to-flower-next.rst:325 +#, fuzzy +msgid "Important" +msgstr "重要变更:" + +#: ../../source/how-to-upgrade-to-flower-next.rst:328 +msgid "" +"As we continuously enhance Flower Next at a rapid pace, we'll be " +"periodically updating this guide. Please feel free to share any feedback " +"with us!" +msgstr "" + +#: ../../source/how-to-upgrade-to-flower-next.rst:334 +msgid "Happy migrating! 🚀" +msgstr "" + +#: ../../source/how-to-use-built-in-mods.rst:2 +#, fuzzy +msgid "Use Built-in Mods" +msgstr "使用内置调制器" + +#: ../../source/how-to-use-built-in-mods.rst:4 +#, fuzzy +msgid "" +"**Note: This tutorial covers experimental features. The functionality and" +" interfaces may change in future versions.**" +msgstr "**注:本教程涵盖实验性功能。功能和界面可能会在未来版本中发生变化。" + +#: ../../source/how-to-use-built-in-mods.rst:6 +#, fuzzy +msgid "" +"In this tutorial, we will learn how to utilize built-in mods to augment " +"the behavior of a ``ClientApp``. Mods (sometimes also called Modifiers) " +"allow us to perform operations before and after a task is processed in " +"the ``ClientApp``." +msgstr "" +"在本教程中,我们将学习如何利用内置模块来增强 ``ClientApp`` 的行为。修改器(有时也称为修改器)允许我们在 ``ClientApp``" +" 处理任务之前和之后执行操作。" + +#: ../../source/how-to-use-built-in-mods.rst:9 +#, fuzzy +msgid "What are Mods?" +msgstr "什么是 Mods?" + +#: ../../source/how-to-use-built-in-mods.rst:11 +#, fuzzy +msgid "" +"A Mod is a callable that wraps around a ``ClientApp``. It can manipulate " +"or inspect the incoming ``Message`` and the resulting outgoing " +"``Message``. The signature for a ``Mod`` is as follows:" +msgstr "" +"Mod 是包裹在 ``ClientApp`` 周围的可调用程序。它可以操作或检查传入的 ``Message`` 和由此产生的传出的 " +"``Message`` 。一个 ``Mod`` 的签名如下:" + +#: ../../source/how-to-use-built-in-mods.rst:18 +#, fuzzy +msgid "A typical mod function might look something like this:" +msgstr "一个典型的修改函数可能是这样的:" + +#: ../../source/how-to-use-built-in-mods.rst:31 +#, fuzzy +msgid "Using Mods" +msgstr "使用修改器" + +#: ../../source/how-to-use-built-in-mods.rst:33 +#, fuzzy +msgid "To use mods in your ``ClientApp``, you can follow these steps:" +msgstr "要在您的 ``ClientApp`` 中使用 mod,可以按照以下步骤操作:" + +#: ../../source/how-to-use-built-in-mods.rst:36 +#, fuzzy +msgid "1. Import the required mods" +msgstr "1. 
导入所需修改" + +#: ../../source/how-to-use-built-in-mods.rst:38 +#, fuzzy +msgid "First, import the built-in mod you intend to use:" +msgstr "首先,导入您打算使用的内置模式:" + +#: ../../source/how-to-use-built-in-mods.rst:46 +#, fuzzy +msgid "2. Define your client function" +msgstr "2. 定义客户功能" + +#: ../../source/how-to-use-built-in-mods.rst:48 +#, fuzzy +msgid "" +"Define your client function (``client_fn``) that will be wrapped by the " +"mod(s):" +msgstr "定义将被 mod 封装的客户端函数(``client_fn``):" + +#: ../../source/how-to-use-built-in-mods.rst:57 +#, fuzzy +msgid "3. Create the ``ClientApp`` with mods" +msgstr "3. 用模块创建 ``ClientApp``" + +#: ../../source/how-to-use-built-in-mods.rst:59 +#, fuzzy +msgid "" +"Create your ``ClientApp`` and pass the mods as a list to the ``mods`` " +"argument. The order in which you provide the mods matters:" +msgstr "创建您的 ``ClientApp`` 并将 mods 作为列表传递给 ``mods`` 参数。提供 mod 的顺序很重要:" + +#: ../../source/how-to-use-built-in-mods.rst:72 #, fuzzy msgid "Order of execution" msgstr "停用" @@ -7061,22 +7036,20 @@ msgstr "``example_mod_1`` (返回途中最外层的模式)" #: ../../source/how-to-use-built-in-mods.rst:82 #, fuzzy msgid "" -"Each mod has a chance to inspect and modify the incoming ``Message`` before " -"passing it to the next mod, and likewise with the outgoing ``Message`` " -"before returning it up the stack." -msgstr "" -"每个模块都有机会检查和修改传入的 \"信息\",然后再将其传递给下一个模块,同样," -"也有机会检查和修改传出的 \"信息\",然后再将其返回堆栈。" +"Each mod has a chance to inspect and modify the incoming ``Message`` " +"before passing it to the next mod, and likewise with the outgoing " +"``Message`` before returning it up the stack." +msgstr "每个模块都有机会检查和修改传入的 \"信息\",然后再将其传递给下一个模块,同样,也有机会检查和修改传出的 \"信息\",然后再将其返回堆栈。" #: ../../source/how-to-use-built-in-mods.rst:87 #, fuzzy msgid "" "By following this guide, you have learned how to effectively use mods to " -"enhance your ``ClientApp``'s functionality. Remember that the order of mods " -"is crucial and affects how the input and output are processed." +"enhance your ``ClientApp``'s functionality. Remember that the order of " +"mods is crucial and affects how the input and output are processed." msgstr "" -"通过本指南,您已学会如何有效地使用 mod 来增强您的 ``ClientApp`` 的功能。请记" -"住,mod 的顺序至关重要,它会影响输入和输出的处理方式。" +"通过本指南,您已学会如何有效地使用 mod 来增强您的 ``ClientApp`` 的功能。请记住,mod " +"的顺序至关重要,它会影响输入和输出的处理方式。" #: ../../source/how-to-use-built-in-mods.rst:89 #, fuzzy @@ -7091,34 +7064,32 @@ msgstr "差分隐私" #: ../../source/how-to-use-differential-privacy.rst:3 #, fuzzy msgid "" -"This guide explains how you can utilize differential privacy in the Flower " -"framework. If you are not yet familiar with differential privacy, you can " -"refer to :doc:`explanation-differential-privacy`." +"This guide explains how you can utilize differential privacy in the " +"Flower framework. If you are not yet familiar with differential privacy, " +"you can refer to :doc:`explanation-differential-privacy`." msgstr "" -"本指南解释了如何在 Flower 框架中使用差分隐私。如果您还不熟悉差分隐私,可以参" -"考 :doc:`explanation-differential-privacy` 。" +"本指南解释了如何在 Flower 框架中使用差分隐私。如果您还不熟悉差分隐私,可以参考 :doc:`explanation-" +"differential-privacy` 。" #: ../../source/how-to-use-differential-privacy.rst:7 #, fuzzy msgid "" "Differential Privacy in Flower is in a preview phase. If you plan to use " -"these features in a production environment with sensitive data, feel free " -"contact us to discuss your requirements and to receive guidance on how to " -"best use these features." 
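Putting the three steps above together, a sketch of a ``ClientApp`` with two illustrative mods might look like this (the mod bodies and the ``example_mod_*`` names are placeholders echoing the order-of-execution discussion; the three-argument mod signature is an assumption, as the guide's own signature block is not extracted here):

.. code-block:: python

    from flwr.client import ClientApp, NumPyClient


    class FlowerClient(NumPyClient):
        """Placeholder client used only to keep the sketch self-contained."""

        def fit(self, parameters, config):
            return parameters, 1, {}


    def client_fn(cid: str):
        return FlowerClient().to_client()


    # Two illustrative mods: each receives the incoming message, a context,
    # and the next callable in the chain, and returns the outgoing message
    def example_mod_1(msg, context, call_next):
        # ... inspect or modify ``msg`` on the way in ...
        out = call_next(msg, context)
        # ... inspect or modify ``out`` on the way back ...
        return out


    def example_mod_2(msg, context, call_next):
        return call_next(msg, context)


    # Order matters: example_mod_1 is the outermost mod
    app = ClientApp(client_fn=client_fn, mods=[example_mod_1, example_mod_2])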
+"these features in a production environment with sensitive data, feel free" +" contact us to discuss your requirements and to receive guidance on how " +"to best use these features." msgstr "" -"Flower 中的差异隐私处于预览阶段。如果您计划在生产环境中使用这些敏感数据功能," -"请随时联系我们,讨论您的需求,并获得如何最好地使用这些功能的指导。" +"Flower " +"中的差异隐私处于预览阶段。如果您计划在生产环境中使用这些敏感数据功能,请随时联系我们,讨论您的需求,并获得如何最好地使用这些功能的指导。" #: ../../source/how-to-use-differential-privacy.rst:12 #, fuzzy msgid "" -"This approach consists of two seprate phases: clipping of the updates and " -"adding noise to the aggregated model. For the clipping phase, Flower " -"framework has made it possible to decide whether to perform clipping on the " -"server side or the client side." -msgstr "" -"这种方法包括两个独立的阶段:对更新进行剪切和在聚合模型中添加噪声。在剪切阶" -"段,Flower 框架可以决定是在服务器端还是在客户端执行剪切。" +"This approach consists of two seprate phases: clipping of the updates and" +" adding noise to the aggregated model. For the clipping phase, Flower " +"framework has made it possible to decide whether to perform clipping on " +"the server side or the client side." +msgstr "这种方法包括两个独立的阶段:对更新进行剪切和在聚合模型中添加噪声。在剪切阶段,Flower 框架可以决定是在服务器端还是在客户端执行剪切。" #: ../../source/how-to-use-differential-privacy.rst:15 #, fuzzy @@ -7126,23 +7097,20 @@ msgid "" "**Server-side Clipping**: This approach has the advantage of the server " "enforcing uniform clipping across all clients' updates and reducing the " "communication overhead for clipping values. However, it also has the " -"disadvantage of increasing the computational load on the server due to the " -"need to perform the clipping operation for all clients." +"disadvantage of increasing the computational load on the server due to " +"the need to perform the clipping operation for all clients." msgstr "" -"** 服务器端剪切**: 这种方法的优点是服务器可对所有客户端的更新执行统一的剪" -"切,并减少剪切值的通信开销。不过,这种方法也有缺点,那就是需要为所有客户端执" -"行剪切操作,从而增加了服务器的计算负荷。" +"** 服务器端剪切**: " +"这种方法的优点是服务器可对所有客户端的更新执行统一的剪切,并减少剪切值的通信开销。不过,这种方法也有缺点,那就是需要为所有客户端执行剪切操作,从而增加了服务器的计算负荷。" #: ../../source/how-to-use-differential-privacy.rst:16 #, fuzzy msgid "" -"**Client-side Clipping**: This approach has the advantage of reducing the " -"computational overhead on the server. However, it also has the disadvantage " -"of lacking centralized control, as the server has less control over the " -"clipping process." -msgstr "" -"**客户端剪切**: 这种方法的优点是可以减少服务器的计算开销。不过,它也有缺乏集" -"中控制的缺点,因为服务器对剪切过程的控制较少。" +"**Client-side Clipping**: This approach has the advantage of reducing the" +" computational overhead on the server. However, it also has the " +"disadvantage of lacking centralized control, as the server has less " +"control over the clipping process." +msgstr "**客户端剪切**: 这种方法的优点是可以减少服务器的计算开销。不过,它也有缺乏集中控制的缺点,因为服务器对剪切过程的控制较少。" #: ../../source/how-to-use-differential-privacy.rst:21 #, fuzzy @@ -7154,15 +7122,15 @@ msgstr "服务器端逻辑" msgid "" "For central DP with server-side clipping, there are two :code:`Strategy` " "classes that act as wrappers around the actual :code:`Strategy` instance " -"(for example, :code:`FedAvg`). The two wrapper classes are :code:" -"`DifferentialPrivacyServerSideFixedClipping` and :code:" -"`DifferentialPrivacyServerSideAdaptiveClipping` for fixed and adaptive " -"clipping." +"(for example, :code:`FedAvg`). The two wrapper classes are " +":code:`DifferentialPrivacyServerSideFixedClipping` and " +":code:`DifferentialPrivacyServerSideAdaptiveClipping` for fixed and " +"adaptive clipping." 
msgstr "" -"对于具有服务器端剪裁功能的中央 DP,有两个 :code:`Strategy` 类作为实际 :code:" -"`Strategy` 实例(例如 :code:`FedAvg`)的包装器。这两个封装类分别是 :code:" -"`DifferentialPrivacyServerSideFixedClipping` 和 :code:" -"`DifferentialPrivacyServerSideAdaptiveClipping` ,用于固定剪辑和自适应剪辑。" +"对于具有服务器端剪裁功能的中央 DP,有两个 :code:`Strategy` 类作为实际 :code:`Strategy` 实例(例如 " +":code:`FedAvg`)的包装器。这两个封装类分别是 " +":code:`DifferentialPrivacyServerSideFixedClipping` 和 " +":code:`DifferentialPrivacyServerSideAdaptiveClipping` ,用于固定剪辑和自适应剪辑。" #: ../../source/how-to-use-differential-privacy.rst:-1 #, fuzzy @@ -7172,15 +7140,16 @@ msgstr "服务器端逻辑" #: ../../source/how-to-use-differential-privacy.rst:31 #, fuzzy msgid "" -"The code sample below enables the :code:`FedAvg` strategy to use server-side " -"fixed clipping using the :code:`DifferentialPrivacyServerSideFixedClipping` " -"wrapper class. The same approach can be used with :code:" -"`DifferentialPrivacyServerSideAdaptiveClipping` by adjusting the " +"The code sample below enables the :code:`FedAvg` strategy to use server-" +"side fixed clipping using the " +":code:`DifferentialPrivacyServerSideFixedClipping` wrapper class. The " +"same approach can be used with " +":code:`DifferentialPrivacyServerSideAdaptiveClipping` by adjusting the " "corresponding input parameters." msgstr "" -"下面的代码示例使用 :code:`DifferentialPrivacyServerSideFixedClipping` 封装类" -"使 :code:`FedAvg` 策略使用服务器端固定剪辑。通过调整相应的输入参数,同样的方" -"法也可用于 :code:`DifferentialPrivacyServerSideAdaptiveClipping`。" +"下面的代码示例使用 :code:`DifferentialPrivacyServerSideFixedClipping` 封装类使 " +":code:`FedAvg` 策略使用服务器端固定剪辑。通过调整相应的输入参数,同样的方法也可用于 " +":code:`DifferentialPrivacyServerSideAdaptiveClipping`。" #: ../../source/how-to-use-differential-privacy.rst:52 #, fuzzy @@ -7191,18 +7160,18 @@ msgstr "客户端逻辑" #, fuzzy msgid "" "For central DP with client-side clipping, the server sends the clipping " -"value to selected clients on each round. Clients can use existing Flower :" -"code:`Mods` to perform the clipping. Two mods are available for fixed and " -"adaptive client-side clipping: :code:`fixedclipping_mod` and :code:" -"`adaptiveclipping_mod` with corresponding server-side wrappers :code:" -"`DifferentialPrivacyClientSideFixedClipping` and :code:" -"`DifferentialPrivacyClientSideAdaptiveClipping`." -msgstr "" -"对于带有客户端剪裁功能的中央 DP,服务器会在每一轮向选定的客户端发送剪裁值。客" -"户端可以使用现有的 Flower :code:`Mods`来执行剪裁。有两种模式可用于固定和自适" -"应客户端剪辑::code:`fixedclipping_mod` 和 :code:`adaptiveclipping_mod`,以及" -"相应的服务器端封装 :code:`DifferentialPrivacyClientSideFixedClipping` 和 :" -"code:`DifferentialPrivacyClientSideAdaptiveClipping`。" +"value to selected clients on each round. Clients can use existing Flower " +":code:`Mods` to perform the clipping. Two mods are available for fixed " +"and adaptive client-side clipping: :code:`fixedclipping_mod` and " +":code:`adaptiveclipping_mod` with corresponding server-side wrappers " +":code:`DifferentialPrivacyClientSideFixedClipping` and " +":code:`DifferentialPrivacyClientSideAdaptiveClipping`." 
+msgstr "" +"对于带有客户端剪裁功能的中央 DP,服务器会在每一轮向选定的客户端发送剪裁值。客户端可以使用现有的 Flower " +":code:`Mods`来执行剪裁。有两种模式可用于固定和自适应客户端剪辑::code:`fixedclipping_mod` 和 " +":code:`adaptiveclipping_mod`,以及相应的服务器端封装 " +":code:`DifferentialPrivacyClientSideFixedClipping` 和 " +":code:`DifferentialPrivacyClientSideAdaptiveClipping`。" #: ../../source/how-to-use-differential-privacy.rst:-1 #, fuzzy @@ -7213,35 +7182,33 @@ msgstr "客户端逻辑" #, fuzzy msgid "" "The code sample below enables the :code:`FedAvg` strategy to use " -"differential privacy with client-side fixed clipping using both the :code:" -"`DifferentialPrivacyClientSideFixedClipping` wrapper class and, on the " -"client, :code:`fixedclipping_mod`:" +"differential privacy with client-side fixed clipping using both the " +":code:`DifferentialPrivacyClientSideFixedClipping` wrapper class and, on " +"the client, :code:`fixedclipping_mod`:" msgstr "" -"下面的代码示例使用 :code:`DifferentialPrivacyClientSideFixedClipping` 封装类" -"和客户端的 :code:`fixedclipping_mod` 使 :code:`FedAvg` 策略在客户端固定剪辑的" -"情况下使用差分隐私:" +"下面的代码示例使用 :code:`DifferentialPrivacyClientSideFixedClipping` 封装类和客户端的 " +":code:`fixedclipping_mod` 使 :code:`FedAvg` 策略在客户端固定剪辑的情况下使用差分隐私:" #: ../../source/how-to-use-differential-privacy.rst:80 #, fuzzy msgid "" -"In addition to the server-side strategy wrapper, the :code:`ClientApp` needs " -"to configure the matching :code:`fixedclipping_mod` to perform the client-" -"side clipping:" +"In addition to the server-side strategy wrapper, the :code:`ClientApp` " +"needs to configure the matching :code:`fixedclipping_mod` to perform the " +"client-side clipping:" msgstr "" -"除了服务器端策略包装器外,:code:`ClientApp` 还需要配置匹配的 :code:" -"`fixedclipping_mod` 以执行客户端剪切:" +"除了服务器端策略包装器外,:code:`ClientApp` 还需要配置匹配的 :code:`fixedclipping_mod` " +"以执行客户端剪切:" #: ../../source/how-to-use-differential-privacy.rst:97 #, fuzzy msgid "" -"To utilize local differential privacy (DP) and add noise to the client model " -"parameters before transmitting them to the server in Flower, you can use the " -"`LocalDpMod`. The following hyperparameters need to be set: clipping norm " -"value, sensitivity, epsilon, and delta." +"To utilize local differential privacy (DP) and add noise to the client " +"model parameters before transmitting them to the server in Flower, you " +"can use the `LocalDpMod`. The following hyperparameters need to be set: " +"clipping norm value, sensitivity, epsilon, and delta." msgstr "" -"要利用本地差分隐私(DP)并在将客户端模型参数传输到 Flower 服务器之前为其添加" -"噪声,可以使用 `LocalDpMod`。需要设置以下超参数:剪切规范值、灵敏度、ε 和 " -"delta。" +"要利用本地差分隐私(DP)并在将客户端模型参数传输到 Flower 服务器之前为其添加噪声,可以使用 " +"`LocalDpMod`。需要设置以下超参数:剪切规范值、灵敏度、ε 和 delta。" #: ../../source/how-to-use-differential-privacy.rst:-1 #, fuzzy @@ -7256,12 +7223,11 @@ msgstr "下面的代码示例展示了如何使用 :code:`LocalDpMod`:" #: ../../source/how-to-use-differential-privacy.rst:122 #, fuzzy msgid "" -"Please note that the order of mods, especially those that modify parameters, " -"is important when using multiple modifiers. Typically, differential privacy " -"(DP) modifiers should be the last to operate on parameters." -msgstr "" -"请注意,在使用多个修改器时,修改器(尤其是修改参数的修改器)的顺序非常重要。" -"通常情况下,差分隐私 (DP) 修改器应最后对参数进行操作。" +"Please note that the order of mods, especially those that modify " +"parameters, is important when using multiple modifiers. Typically, " +"differential privacy (DP) modifiers should be the last to operate on " +"parameters." 
+msgstr "请注意,在使用多个修改器时,修改器(尤其是修改参数的修改器)的顺序非常重要。通常情况下,差分隐私 (DP) 修改器应最后对参数进行操作。" #: ../../source/how-to-use-differential-privacy.rst:125 #, fuzzy @@ -7271,18 +7237,19 @@ msgstr "使用隐私引擎进行本地培训" #: ../../source/how-to-use-differential-privacy.rst:126 #, fuzzy msgid "" -"For ensuring data instance-level privacy during local model training on the " -"client side, consider leveraging privacy engines such as Opacus and " -"TensorFlow Privacy. For examples of using Flower with these engines, please " -"refer to the Flower examples directory (`Opacus `_, `Tensorflow Privacy `_)." +"For ensuring data instance-level privacy during local model training on " +"the client side, consider leveraging privacy engines such as Opacus and " +"TensorFlow Privacy. For examples of using Flower with these engines, " +"please refer to the Flower examples directory (`Opacus " +"`_, `Tensorflow" +" Privacy `_)." msgstr "" -"要在客户端本地模型训练期间确保数据实例级隐私,可考虑利用 Opacus 和 " -"TensorFlow Privacy 等隐私引擎。有关将 Flower 与这些引擎结合使用的示例,请参" -"阅 Flower 示例目录(`Opacus `_, `Tensorflow Privacy `_)。" +"要在客户端本地模型训练期间确保数据实例级隐私,可考虑利用 Opacus 和 TensorFlow Privacy 等隐私引擎。有关将 Flower" +" 与这些引擎结合使用的示例,请参阅 Flower 示例目录(`Opacus " +"`_, `Tensorflow" +" Privacy `_)。" #: ../../source/how-to-use-strategies.rst:2 msgid "Use strategies" @@ -7290,17 +7257,15 @@ msgstr "使用策略" #: ../../source/how-to-use-strategies.rst:4 msgid "" -"Flower allows full customization of the learning process through the :code:" -"`Strategy` abstraction. A number of built-in strategies are provided in the " -"core framework." -msgstr "" -"Flower 允许通过 :code:`Strategy` 抽象类对学习过程进行完全定制。核心框架中提供" -"了许多内置策略。" +"Flower allows full customization of the learning process through the " +":code:`Strategy` abstraction. A number of built-in strategies are " +"provided in the core framework." +msgstr "Flower 允许通过 :code:`Strategy` 抽象类对学习过程进行完全定制。核心框架中提供了许多内置策略。" #: ../../source/how-to-use-strategies.rst:6 msgid "" -"There are three ways to customize the way Flower orchestrates the learning " -"process on the server side:" +"There are three ways to customize the way Flower orchestrates the " +"learning process on the server side:" msgstr "有三种方法可以自定义 Flower 在服务器端协调学习过程的方式:" #: ../../source/how-to-use-strategies.rst:8 @@ -7323,27 +7288,23 @@ msgstr "使用现有策略" #: ../../source/how-to-use-strategies.rst:16 msgid "" -"Flower comes with a number of popular federated learning strategies built-" -"in. A built-in strategy can be instantiated as follows:" +"Flower comes with a number of popular federated learning strategies " +"built-in. A built-in strategy can be instantiated as follows:" msgstr "Flower 内置了许多流行的联邦学习策略。内置策略的实例化方法如下:" #: ../../source/how-to-use-strategies.rst:25 msgid "" -"This creates a strategy with all parameters left at their default values and " -"passes it to the :code:`start_server` function. It is usually recommended to " -"adjust a few parameters during instantiation:" -msgstr "" -"这会创建一个所有参数都保持默认值的策略,并将其传递给 :code:`start_server` 函" -"数。通常建议在实例化过程中调整一些参数:" +"This creates a strategy with all parameters left at their default values " +"and passes it to the :code:`start_server` function. It is usually " +"recommended to adjust a few parameters during instantiation:" +msgstr "这会创建一个所有参数都保持默认值的策略,并将其传递给 :code:`start_server` 函数。通常建议在实例化过程中调整一些参数:" #: ../../source/how-to-use-strategies.rst:42 msgid "" "Existing strategies provide several ways to customize their behaviour. " "Callback functions allow strategies to call user-provided code during " "execution." 
-msgstr "" -"现有的策略提供了多种自定义行为的方法。回调函数允许策略在执行过程中调用用户提" -"供的代码。" +msgstr "现有的策略提供了多种自定义行为的方法。回调函数允许策略在执行过程中调用用户提供的代码。" #: ../../source/how-to-use-strategies.rst:45 msgid "Configuring client fit and client evaluate" @@ -7352,37 +7313,36 @@ msgstr "配置客户匹配和客户评估" #: ../../source/how-to-use-strategies.rst:47 msgid "" "The server can pass new configuration values to the client each round by " -"providing a function to :code:`on_fit_config_fn`. The provided function will " -"be called by the strategy and must return a dictionary of configuration key " -"values pairs that will be sent to the client. It must return a dictionary of " -"arbitrary configuration values :code:`client.fit` and :code:`client." -"evaluate` functions during each round of federated learning." +"providing a function to :code:`on_fit_config_fn`. The provided function " +"will be called by the strategy and must return a dictionary of " +"configuration key values pairs that will be sent to the client. It must " +"return a dictionary of arbitrary configuration values :code:`client.fit`" +" and :code:`client.evaluate` functions during each round of federated " +"learning." msgstr "" -"服务器可以通过向 :code:`on_fit_config_fn` 提供一个函数,在每一轮向客户端传递" -"新的配置值。提供的函数将被策略调用,并且必须返回一个配置键值对的字典,该字典" -"将被发送到客户端。在每一轮联邦学习期间,它必须返回一个任意配置值 dictionary :" -"code:`client.fit`和 :code:`client.evaluate`函数。" +"服务器可以通过向 :code:`on_fit_config_fn` " +"提供一个函数,在每一轮向客户端传递新的配置值。提供的函数将被策略调用,并且必须返回一个配置键值对的字典,该字典将被发送到客户端。在每一轮联邦学习期间,它必须返回一个任意配置值" +" dictionary :code:`client.fit`和 :code:`client.evaluate`函数。" #: ../../source/how-to-use-strategies.rst:75 msgid "" "The :code:`on_fit_config_fn` can be used to pass arbitrary configuration " "values from server to client, and poetentially change these values each " -"round, for example, to adjust the learning rate. The client will receive the " -"dictionary returned by the :code:`on_fit_config_fn` in its own :code:`client." -"fit()` function." +"round, for example, to adjust the learning rate. The client will receive " +"the dictionary returned by the :code:`on_fit_config_fn` in its own " +":code:`client.fit()` function." msgstr "" -":code:`on_fit_config_fn`可用于将任意配置值从服务器传递到客户端,并在每一轮改" -"变这些值,例如,调整学习率。客户端将在自己的 :code:`client.fit()` 函数中接" -"收 :code:`on_fit_config_fn` 返回的字典。" +":code:`on_fit_config_fn`可用于将任意配置值从服务器传递到客户端,并在每一轮改变这些值,例如,调整学习率。客户端将在自己的 " +":code:`client.fit()` 函数中接收 :code:`on_fit_config_fn` 返回的字典。" #: ../../source/how-to-use-strategies.rst:78 msgid "" -"Similar to :code:`on_fit_config_fn`, there is also :code:" -"`on_evaluate_config_fn` to customize the configuration sent to :code:`client." -"evaluate()`" +"Similar to :code:`on_fit_config_fn`, there is also " +":code:`on_evaluate_config_fn` to customize the configuration sent to " +":code:`client.evaluate()`" msgstr "" -"与 :code:`on_fit_config_fn` 类似,还有 :code:`on_evaluate_config_fn` 用于定制" -"发送到 :code:`client.evaluate()` 的配置" +"与 :code:`on_fit_config_fn` 类似,还有 :code:`on_evaluate_config_fn` 用于定制发送到 " +":code:`client.evaluate()` 的配置" #: ../../source/how-to-use-strategies.rst:81 msgid "Configuring server-side evaluation" @@ -7390,18 +7350,18 @@ msgstr "配置服务器端评估" #: ../../source/how-to-use-strategies.rst:83 msgid "" -"Server-side evaluation can be enabled by passing an evaluation function to :" -"code:`evaluate_fn`." +"Server-side evaluation can be enabled by passing an evaluation function " +"to :code:`evaluate_fn`." 
msgstr "服务器端评估可通过向 :code:`evaluate_fn` 传递评估函数来启用。" #: ../../source/how-to-use-strategies.rst:89 msgid "" -"Writing a fully custom strategy is a bit more involved, but it provides the " -"most flexibility. Read the `Implementing Strategies `_ guide to learn more." +"Writing a fully custom strategy is a bit more involved, but it provides " +"the most flexibility. Read the `Implementing Strategies `_ guide to learn more." msgstr "" -"编写完全自定义的策略涉及的内容较多,但灵活性最高。阅读 `实施策略 _ 指南,了解更多信息。" +"编写完全自定义的策略涉及的内容较多,但灵活性最高。阅读 `实施策略 _ " +"指南,了解更多信息。" #: ../../source/index.rst:34 msgid "Tutorial" @@ -7411,15 +7371,15 @@ msgstr "教程" msgid "Quickstart tutorials" msgstr "快速入门教程" -#: ../../source/index.rst:75 ../../source/index.rst:79 +#: ../../source/index.rst:74 ../../source/index.rst:78 msgid "How-to guides" msgstr "操作指南" -#: ../../source/index.rst:98 +#: ../../source/index.rst:99 msgid "Legacy example guides" msgstr "旧版指南范例" -#: ../../source/index.rst:109 ../../source/index.rst:113 +#: ../../source/index.rst:108 ../../source/index.rst:112 msgid "Explanations" msgstr "说明" @@ -7427,23 +7387,23 @@ msgstr "说明" msgid "API reference" msgstr "应用程序接口参考" -#: ../../source/index.rst:138 +#: ../../source/index.rst:137 msgid "Reference docs" msgstr "参考文档" -#: ../../source/index.rst:154 +#: ../../source/index.rst:153 msgid "Contributor tutorials" msgstr "贡献者教程" -#: ../../source/index.rst:161 +#: ../../source/index.rst:160 msgid "Contributor how-to guides" msgstr "投稿指南" -#: ../../source/index.rst:174 +#: ../../source/index.rst:173 msgid "Contributor explanations" msgstr "贡献者解释" -#: ../../source/index.rst:180 +#: ../../source/index.rst:179 msgid "Contributor references" msgstr "贡献者参考资料" @@ -7461,9 +7421,7 @@ msgstr "Flower 框架文档" msgid "" "Welcome to Flower's documentation. `Flower `_ is a " "friendly federated learning framework." -msgstr "" -"欢迎访问 Flower 文档。`Flower `_ 是一个友好的联邦学习框" -"架。" +msgstr "欢迎访问 Flower 文档。`Flower `_ 是一个友好的联邦学习框架。" #: ../../source/index.rst:11 msgid "Join the Flower Community" @@ -7474,9 +7432,7 @@ msgid "" "The Flower Community is growing quickly - we're a friendly group of " "researchers, engineers, students, professionals, academics, and other " "enthusiasts." -msgstr "" -"Flower 社区发展迅速--我们是一个由研究人员、工程师、学生、专业人士、学者和其他" -"爱好者组成的友好团体。" +msgstr "Flower 社区发展迅速--我们是一个由研究人员、工程师、学生、专业人士、学者和其他爱好者组成的友好团体。" #: ../../source/index.rst:15 msgid "Join us on Slack" @@ -7490,12 +7446,11 @@ msgstr "Flower 框架" msgid "" "The user guide is targeted at researchers and developers who want to use " "Flower to bring existing machine learning workloads into a federated " -"setting. One of Flower's design goals was to make this simple. Read on to " -"learn more." +"setting. One of Flower's design goals was to make this simple. Read on to" +" learn more." msgstr "" -"该用户指南面向希望使用 Flower 将现有机器学习工作负载引入联邦环境的研究人员和" -"开发人员。Flower 的设计目标之一就是让这一切变得简单。请继续阅读,了解更多信" -"息。" +"该用户指南面向希望使用 Flower 将现有机器学习工作负载引入联邦环境的研究人员和开发人员。Flower " +"的设计目标之一就是让这一切变得简单。请继续阅读,了解更多信息。" #: ../../source/index.rst:30 msgid "Tutorials" @@ -7503,80 +7458,79 @@ msgstr "教程" #: ../../source/index.rst:32 msgid "" -"A learning-oriented series of federated learning tutorials, the best place " -"to start." +"A learning-oriented series of federated learning tutorials, the best " +"place to start." 
msgstr "以学习为导向的联邦学习教程系列,最好的起点。" -#: ../../source/index.rst:62 -msgid "" -"QUICKSTART TUTORIALS: :doc:`PyTorch ` | :doc:" -"`TensorFlow ` | :doc:`🤗 Transformers " -"` | :doc:`JAX ` | :" -"doc:`Pandas ` | :doc:`fastai ` | :doc:`PyTorch Lightning ` | :doc:`MXNet ` | :doc:`scikit-learn " -"` | :doc:`XGBoost ` | :doc:`Android ` | :doc:`iOS " -"`" -msgstr "" -"快速入门教程: :doc:`PyTorch ` | :doc:" -"`TensorFlow ` | :doc:`🤗 Transformers " -"` | :doc:`JAX ` | :" -"doc:`Pandas ` | :doc:`fastai ` | :doc:`PyTorch Lightning ` | :doc:`MXNet ` | :doc:`scikit-learn " -"` | :doc:`XGBoost ` | :doc:`Android ` | :doc:`iOS " -"`" - -#: ../../source/index.rst:64 +#: ../../source/index.rst:61 +#, fuzzy +msgid "" +"QUICKSTART TUTORIALS: :doc:`PyTorch ` | " +":doc:`TensorFlow ` | :doc:`🤗 Transformers" +" ` | :doc:`JAX ` | :doc:`Pandas ` | :doc:`fastai " +"` | :doc:`PyTorch Lightning ` | :doc:`scikit-learn ` | :doc:`XGBoost ` | " +":doc:`Android ` | :doc:`iOS `" +msgstr "" +"快速入门教程: :doc:`PyTorch ` | :doc:`TensorFlow " +"` | :doc:`🤗 Transformers ` | :doc:`JAX ` | " +":doc:`Pandas ` | :doc:`fastai ` | :doc:`PyTorch Lightning ` | :doc:`MXNet ` | :doc" +":`scikit-learn ` | :doc:`XGBoost " +"` | :doc:`Android ` | :doc:`iOS `" + +#: ../../source/index.rst:63 msgid "We also made video tutorials for PyTorch:" msgstr "我们还为 PyTorch 制作了视频教程:" -#: ../../source/index.rst:69 +#: ../../source/index.rst:68 msgid "And TensorFlow:" msgstr "还有 TensorFlow:" -#: ../../source/index.rst:77 +#: ../../source/index.rst:76 msgid "" -"Problem-oriented how-to guides show step-by-step how to achieve a specific " -"goal." +"Problem-oriented how-to guides show step-by-step how to achieve a " +"specific goal." msgstr "以问题为导向的 \"如何做 \"指南逐步展示如何实现特定目标。" -#: ../../source/index.rst:111 +#: ../../source/index.rst:110 msgid "" "Understanding-oriented concept guides explain and discuss key topics and " "underlying ideas behind Flower and collaborative AI." -msgstr "" -"以理解为导向的概念指南解释并讨论了Flower和协作式人工智能背后的关键主题和基本" -"思想。" +msgstr "以理解为导向的概念指南解释并讨论了Flower和协作式人工智能背后的关键主题和基本思想。" -#: ../../source/index.rst:121 +#: ../../source/index.rst:120 msgid "References" msgstr "参考资料" -#: ../../source/index.rst:123 +#: ../../source/index.rst:122 msgid "Information-oriented API reference and other reference material." msgstr "以信息为导向的 API 参考资料和其他参考资料。" -#: ../../source/index.rst:132::1 +#: ../../source/index.rst:131::1 #, fuzzy msgid ":py:obj:`flwr `\\" msgstr ":py:obj:`flwr `\\" -#: ../../source/index.rst:132::1 flwr:1 of +#: ../../source/index.rst:131::1 flwr:1 of #, fuzzy msgid "Flower main package." msgstr "Flower 主包装。" -#: ../../source/index.rst:149 +#: ../../source/index.rst:148 msgid "Contributor docs" msgstr "贡献者文档" -#: ../../source/index.rst:151 +#: ../../source/index.rst:150 msgid "" -"The Flower community welcomes contributions. The following docs are intended " -"to help along the way." +"The Flower community welcomes contributions. The following docs are " +"intended to help along the way." 
msgstr "Flower 社区欢迎您的贡献。以下文档旨在为您提供帮助。" #: ../../source/ref-api-cli.rst:2 @@ -7584,23 +7538,20 @@ msgid "Flower CLI reference" msgstr "Flower CLI 参考" #: ../../source/ref-api-cli.rst:7 -msgid "flower-superlink" -msgstr "flower-superlink" +#, fuzzy +msgid "flower-simulation" +msgstr "运行模拟" #: ../../source/ref-api-cli.rst:17 -msgid "flower-driver-api" -msgstr "flower-driver-api" +msgid "flower-superlink" +msgstr "flower-superlink" #: ../../source/ref-api-cli.rst:27 -msgid "flower-fleet-api" -msgstr "flower-fleet-api" - -#: ../../source/ref-api-cli.rst:37 #, fuzzy msgid "flower-client-app" msgstr "Flower 客户端。" -#: ../../source/ref-api-cli.rst:47 +#: ../../source/ref-api-cli.rst:37 #, fuzzy msgid "flower-server-app" msgstr "flower-driver-api" @@ -7610,7 +7561,7 @@ msgstr "flower-driver-api" msgid "flwr" msgstr "Flower" -#: ../../source/ref-api/flwr.rst:25 ../../source/ref-api/flwr.server.rst:52 +#: ../../source/ref-api/flwr.rst:25 ../../source/ref-api/flwr.server.rst:51 #, fuzzy msgid "Modules" msgstr "模块" @@ -7639,7 +7590,7 @@ msgid ":py:obj:`flwr.server `\\" msgstr ":py:obj:`flwr.server `\\" #: ../../source/ref-api/flwr.rst:35::1 -#: ../../source/ref-api/flwr.server.rst:41::1 flwr.server:1 +#: ../../source/ref-api/flwr.server.rst:40::1 flwr.server:1 #: flwr.server.server.Server:1 of msgid "Flower server." msgstr "Flower 服务器。" @@ -7666,18 +7617,29 @@ msgstr "客户端" msgid "Functions" msgstr "四种函数:" -#: ../../source/ref-api/flwr.client.rst:24::1 +#: ../../source/ref-api/flwr.client.rst:25::1 #, fuzzy msgid ":py:obj:`run_client_app `\\ \\(\\)" msgstr ":py:obj:`run_client_app `\\ \\(\\)" -#: ../../source/ref-api/flwr.client.rst:24::1 -#: flwr.client.app.run_client_app:1 of +#: ../../source/ref-api/flwr.client.rst:25::1 +#: flwr.client.supernode.app.run_client_app:1 of #, fuzzy msgid "Run Flower client app." msgstr "Flower 客户端。" -#: ../../source/ref-api/flwr.client.rst:24::1 +#: ../../source/ref-api/flwr.client.rst:25::1 +#, fuzzy +msgid ":py:obj:`run_supernode `\\ \\(\\)" +msgstr ":py:obj:`run_superlink `\\ \\(\\)" + +#: ../../source/ref-api/flwr.client.rst:25::1 +#: flwr.client.supernode.app.run_supernode:1 of +#, fuzzy +msgid "Run Flower SuperNode." +msgstr "Flower 服务器。" + +#: ../../source/ref-api/flwr.client.rst:25::1 #, fuzzy msgid "" ":py:obj:`start_client `\\ \\(\\*\\, " @@ -7686,70 +7648,73 @@ msgstr "" ":py:obj:`start_client `\\ \\(\\*\\, " "server\\_address\\[\\, client\\_fn\\, ...\\]\\)" -#: ../../source/ref-api/flwr.client.rst:24::1 +#: ../../source/ref-api/flwr.client.rst:25::1 #: flwr.client.app.start_client:1 of msgid "Start a Flower client node which connects to a Flower server." msgstr "启动一个 Flower 客户节点,连接到 Flower 服务器。" -#: ../../source/ref-api/flwr.client.rst:24::1 +#: ../../source/ref-api/flwr.client.rst:25::1 #, fuzzy msgid "" -":py:obj:`start_numpy_client `\\ \\(\\*\\, " -"server\\_address\\, client\\)" +":py:obj:`start_numpy_client `\\ \\(\\*\\," +" server\\_address\\, client\\)" msgstr "" -":py:obj:`start_numpy_client `\\ \\(\\*\\, " -"server\\_address\\, client\\)" +":py:obj:`start_numpy_client `\\ \\(\\*\\," +" server\\_address\\, client\\)" -#: ../../source/ref-api/flwr.client.rst:24::1 +#: ../../source/ref-api/flwr.client.rst:25::1 #: flwr.client.app.start_numpy_client:1 of msgid "Start a Flower NumPyClient which connects to a gRPC server." 
msgstr "启动 Flower NumPyClient,连接到 gRPC 服务器。" -#: ../../source/ref-api/flwr.client.rst:26 +#: ../../source/ref-api/flwr.client.rst:27 #: ../../source/ref-api/flwr.common.rst:32 -#: ../../source/ref-api/flwr.server.rst:29 +#: ../../source/ref-api/flwr.server.rst:28 #: ../../source/ref-api/flwr.server.strategy.rst:17 #: ../../source/ref-api/flwr.server.workflow.rst:17 #, fuzzy msgid "Classes" msgstr "类别" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 #, fuzzy msgid ":py:obj:`Client `\\ \\(\\)" msgstr ":py:obj:`Client `\\ \\(\\)" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 #: flwr.client.client.Client:1 of msgid "Abstract base class for Flower clients." msgstr "Flower 客户端的抽象基类。" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 #, fuzzy msgid "" -":py:obj:`ClientApp `\\ \\(\\[client\\_fn\\, mods\\]\\)" +":py:obj:`ClientApp `\\ \\(\\[client\\_fn\\, " +"mods\\]\\)" msgstr "" -":py:obj:`ClientApp `\\ \\(\\[client\\_fn\\, mods\\]\\)" +":py:obj:`ClientApp `\\ \\(\\[client\\_fn\\, " +"mods\\]\\)" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 #: flwr.client.client_app.ClientApp:1 of #, fuzzy msgid "Flower ClientApp." msgstr "Flower 客户端。" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 #, fuzzy msgid ":py:obj:`NumPyClient `\\ \\(\\)" msgstr ":py:obj:`NumPyClient `\\ \\(\\)" -#: ../../source/ref-api/flwr.client.rst:33::1 +#: ../../source/ref-api/flwr.client.rst:34::1 #: flwr.client.numpy_client.NumPyClient:1 of msgid "Abstract base class for Flower clients using NumPy." msgstr "使用 NumPy 的 Flower 客户端的抽象基类。" #: flwr.client.client.Client:1 flwr.client.numpy_client.NumPyClient:1 #: flwr.server.client_manager.ClientManager:1 -#: flwr.server.strategy.strategy.Strategy:1 of +#: flwr.server.driver.driver.Driver:1 flwr.server.strategy.strategy.Strategy:1 +#: of #, fuzzy msgid "Bases: :py:class:`~abc.ABC`" msgstr "Bases: :py:class:`~abc.ABC`" @@ -7765,6 +7730,7 @@ msgstr "Bases: :py:class:`~abc.ABC`" #: ../../source/ref-api/flwr.common.Error.rst:15 #: ../../source/ref-api/flwr.common.EvaluateIns.rst:15 #: ../../source/ref-api/flwr.common.EvaluateRes.rst:15 +#: ../../source/ref-api/flwr.common.EventType.rst:15 #: ../../source/ref-api/flwr.common.FitIns.rst:15 #: ../../source/ref-api/flwr.common.FitRes.rst:15 #: ../../source/ref-api/flwr.common.GetParametersIns.rst:15 @@ -7858,10 +7824,8 @@ msgstr "评估客户端的反应。" #: ../../source/ref-api/flwr.client.Client.rst:44::1 #, fuzzy -msgid "" -":py:obj:`get_parameters `\\ \\(ins\\)" -msgstr "" -":py:obj:`get_parameters `\\ \\(ins\\)" +msgid ":py:obj:`get_parameters `\\ \\(ins\\)" +msgstr ":py:obj:`get_parameters `\\ \\(ins\\)" #: ../../source/ref-api/flwr.client.Client.rst:44::1 #: ../../source/ref-api/flwr.client.NumPyClient.rst:44::1 @@ -7872,10 +7836,8 @@ msgstr "返回当前本地模型参数。" #: ../../source/ref-api/flwr.client.Client.rst:44::1 #, fuzzy -msgid "" -":py:obj:`get_properties `\\ \\(ins\\)" -msgstr "" -":py:obj:`get_properties `\\ \\(ins\\)" +msgid ":py:obj:`get_properties `\\ \\(ins\\)" +msgstr ":py:obj:`get_properties `\\ \\(ins\\)" #: ../../source/ref-api/flwr.client.Client.rst:44::1 #: flwr.client.client.Client.get_properties:1 of @@ -7915,7 +7877,7 @@ msgstr "返回客户端(本身)。" #: ../../source/ref-api/flwr.common.Error.rst:25 #: ../../source/ref-api/flwr.common.EvaluateIns.rst:25 #: ../../source/ref-api/flwr.common.EvaluateRes.rst:25 -#: 
../../source/ref-api/flwr.common.EventType.rst:19 +#: ../../source/ref-api/flwr.common.EventType.rst:165 #: ../../source/ref-api/flwr.common.FitIns.rst:25 #: ../../source/ref-api/flwr.common.FitRes.rst:25 #: ../../source/ref-api/flwr.common.GetParametersIns.rst:25 @@ -7960,7 +7922,6 @@ msgstr ":py:obj:`context `\\" #: flwr.server.client_manager.SimpleClientManager.register #: flwr.server.client_manager.SimpleClientManager.unregister #: flwr.server.client_manager.SimpleClientManager.wait_for -#: flwr.server.compat.app.start_driver flwr.server.driver.driver.Driver #: flwr.server.driver.driver.Driver.create_message #: flwr.server.driver.driver.Driver.pull_messages #: flwr.server.driver.driver.Driver.push_messages @@ -7994,12 +7955,10 @@ msgstr "参数" #: flwr.client.client.Client.evaluate:3 of msgid "" -"The evaluation instructions containing (global) model parameters received " -"from the server and a dictionary of configuration values used to customize " -"the local evaluation process." -msgstr "" -"评估指令包含从服务器接收的(全局)模型参数,以及用于定制本地评估流程的配置值" -"字典。" +"The evaluation instructions containing (global) model parameters received" +" from the server and a dictionary of configuration values used to " +"customize the local evaluation process." +msgstr "评估指令包含从服务器接收的(全局)模型参数,以及用于定制本地评估流程的配置值字典。" #: flwr.client.client.Client.evaluate flwr.client.client.Client.fit #: flwr.client.client.Client.get_parameters @@ -8014,7 +7973,6 @@ msgstr "" #: flwr.server.client_manager.SimpleClientManager.num_available #: flwr.server.client_manager.SimpleClientManager.register #: flwr.server.client_manager.SimpleClientManager.wait_for -#: flwr.server.compat.app.start_driver #: flwr.server.driver.driver.Driver.create_message #: flwr.server.driver.driver.Driver.pull_messages #: flwr.server.driver.driver.Driver.push_messages @@ -8035,8 +7993,7 @@ msgstr "返回" msgid "" "The evaluation result containing the loss on the local dataset and other " "details such as the number of local data examples used for evaluation." -msgstr "" -"评估结果包含本地数据集上的损失值和其他详细信息,如用于评估的本地数据的数量。" +msgstr "评估结果包含本地数据集上的损失值和其他详细信息,如用于评估的本地数据的数量。" #: flwr.client.client.Client.evaluate flwr.client.client.Client.fit #: flwr.client.client.Client.get_parameters @@ -8049,7 +8006,6 @@ msgstr "" #: flwr.server.client_manager.SimpleClientManager.num_available #: flwr.server.client_manager.SimpleClientManager.register #: flwr.server.client_manager.SimpleClientManager.wait_for -#: flwr.server.compat.app.start_driver #: flwr.server.driver.driver.Driver.create_message #: flwr.server.driver.driver.Driver.pull_messages #: flwr.server.driver.driver.Driver.push_messages @@ -8068,17 +8024,15 @@ msgstr "返回类型" #: flwr.client.client.Client.fit:3 of msgid "" -"The training instructions containing (global) model parameters received from " -"the server and a dictionary of configuration values used to customize the " -"local training process." -msgstr "" -"训练指令,包含从服务器接收的(全局)模型参数,以及用于定制本地训练过程的配置" -"值字典。" +"The training instructions containing (global) model parameters received " +"from the server and a dictionary of configuration values used to " +"customize the local training process." +msgstr "训练指令,包含从服务器接收的(全局)模型参数,以及用于定制本地训练过程的配置值字典。" #: flwr.client.client.Client.fit:8 of msgid "" -"The training result containing updated parameters and other details such as " -"the number of local training examples used for training." +"The training result containing updated parameters and other details such " +"as the number of local training examples used for training." 
msgstr "训练结果包含更新的参数和其他详细信息,如用于训练的本地训练示例的数量。" #: flwr.client.client.Client.get_parameters:3 of @@ -8117,9 +8071,9 @@ msgstr "客户端" #: flwr.common.typing.GetParametersRes:1 flwr.common.typing.GetPropertiesIns:1 #: flwr.common.typing.GetPropertiesRes:1 flwr.common.typing.Parameters:1 #: flwr.common.typing.ReconnectIns:1 flwr.common.typing.ServerMessage:1 -#: flwr.common.typing.Status:1 flwr.server.driver.driver.Driver:1 -#: flwr.server.history.History:1 flwr.server.server.Server:1 -#: flwr.server.server_app.ServerApp:1 flwr.server.server_config.ServerConfig:1 +#: flwr.common.typing.Status:1 flwr.server.history.History:1 +#: flwr.server.server.Server:1 flwr.server.server_app.ServerApp:1 +#: flwr.server.server_config.ServerConfig:1 #: flwr.server.workflow.default_workflows.DefaultWorkflow:1 #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:1 #: of @@ -8132,8 +8086,7 @@ msgstr "Bases: :py:class:`object`" #: flwr.client.client_app.ClientApp.evaluate:4 #: flwr.client.client_app.ClientApp.query:4 #: flwr.client.client_app.ClientApp.train:4 flwr.server.app.start_server:41 -#: flwr.server.compat.app.start_driver:32 flwr.server.server_app.ServerApp:4 -#: flwr.server.server_app.ServerApp.main:4 +#: flwr.server.server_app.ServerApp:4 flwr.server.server_app.ServerApp.main:4 #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:29 #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:22 #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:21 @@ -8147,28 +8100,24 @@ msgstr "实例" msgid "" "Assuming a typical `Client` implementation named `FlowerClient`, you can " "wrap it in a `ClientApp` as follows:" -msgstr "" -"假定有一个名为 `FlowerClient` 的典型 `Client` 实现,可以将其封装在一个 " -"`ClientApp` 中,如下所示:" +msgstr "假定有一个名为 `FlowerClient` 的典型 `Client` 实现,可以将其封装在一个 `ClientApp` 中,如下所示:" #: flwr.client.client_app.ClientApp:16 of #, fuzzy msgid "" -"If the above code is in a Python module called `client`, it can be started " -"as follows:" -msgstr "" -"如果上述代码位于一个名为 \"客户端 \"的 Python 模块中,则可以按如下方式启动" -"它:" +"If the above code is in a Python module called `client`, it can be " +"started as follows:" +msgstr "如果上述代码位于一个名为 \"客户端 \"的 Python 模块中,则可以按如下方式启动它:" #: flwr.client.client_app.ClientApp:21 of #, fuzzy msgid "" -"In this `client:app` example, `client` refers to the Python module `client." -"py` in which the previous code lives in and `app` refers to the global " -"attribute `app` that points to an object of type `ClientApp`." +"In this `client:app` example, `client` refers to the Python module " +"`client.py` in which the previous code lives in and `app` refers to the " +"global attribute `app` that points to an object of type `ClientApp`." 
msgstr "" -"在这个 `client:app` 例子中,`client` 指的是前面代码所在的 Python 模块 " -"`client.py`,而 `app` 指的是指向 `ClientApp` 类型对象的全局属性 `app` 。" +"在这个 `client:app` 例子中,`client` 指的是前面代码所在的 Python 模块 `client.py`,而 `app` " +"指的是指向 `ClientApp` 类型对象的全局属性 `app` 。" #: flwr.client.client_app.ClientApp.evaluate:1::1 of #, fuzzy @@ -8218,10 +8167,8 @@ msgstr "" #: ../../source/ref-api/flwr.client.NumPyClient.rst:44::1 #, fuzzy -msgid "" -":py:obj:`fit `\\ \\(parameters\\, config\\)" -msgstr "" -":py:obj:`fit `\\ \\(parameters\\, config\\)" +msgid ":py:obj:`fit `\\ \\(parameters\\, config\\)" +msgstr ":py:obj:`fit `\\ \\(parameters\\, config\\)" #: ../../source/ref-api/flwr.client.NumPyClient.rst:44::1 #: flwr.client.numpy_client.NumPyClient.fit:1 of @@ -8259,9 +8206,11 @@ msgstr "返回客户端的属性集。" #: ../../source/ref-api/flwr.client.NumPyClient.rst:44::1 #, fuzzy msgid "" -":py:obj:`set_context `\\ \\(context\\)" +":py:obj:`set_context `\\ " +"\\(context\\)" msgstr "" -":py:obj:`set_context `\\ \\(context\\)" +":py:obj:`set_context `\\ " +"\\(context\\)" #: ../../source/ref-api/flwr.client.NumPyClient.rst:44::1 #, fuzzy @@ -8290,30 +8239,28 @@ msgstr "当前(全局)模型参数。" #: flwr.client.numpy_client.NumPyClient.evaluate:5 of msgid "" -"Configuration parameters which allow the server to influence evaluation on " -"the client. It can be used to communicate arbitrary values from the server " -"to the client, for example, to influence the number of examples used for " -"evaluation." -msgstr "" -"允许服务器影响客户端评估的配置参数。它可用于将任意值从服务器传送到客户端,例" -"如,影响用于评估的示例数量。" +"Configuration parameters which allow the server to influence evaluation " +"on the client. It can be used to communicate arbitrary values from the " +"server to the client, for example, to influence the number of examples " +"used for evaluation." +msgstr "允许服务器影响客户端评估的配置参数。它可用于将任意值从服务器传送到客户端,例如,影响用于评估的示例数量。" #: flwr.client.numpy_client.NumPyClient.evaluate:11 of msgid "" "* **loss** (*float*) -- The evaluation loss of the model on the local " "dataset. * **num_examples** (*int*) -- The number of examples used for " "evaluation. * **metrics** (*Dict[str, Scalar]*) -- A dictionary mapping " -"arbitrary string keys to values of type bool, bytes, float, int, or str. " -"It can be used to communicate arbitrary values back to the server." +"arbitrary string keys to values of type bool, bytes, float, int, or " +"str. It can be used to communicate arbitrary values back to the server." msgstr "" -"**loss** (*float*) -- 模型在本地数据集上的评估损失值。**num_examples** " -"(*int*) -- 用于评估的示例数量。**metrics** (*Dict[str, Scalar]*) -- 将任意字" -"符串键映射到 bool、bytes、float、int 或 str 类型值的字典。它可用于将任意值传" -"回服务器。" +"**loss** (*float*) -- 模型在本地数据集上的评估损失值。**num_examples** (*int*) -- " +"用于评估的示例数量。**metrics** (*Dict[str, Scalar]*) -- 将任意字符串键映射到 " +"bool、bytes、float、int 或 str 类型值的字典。它可用于将任意值传回服务器。" #: flwr.client.numpy_client.NumPyClient.evaluate:11 of msgid "" -"**loss** (*float*) -- The evaluation loss of the model on the local dataset." +"**loss** (*float*) -- The evaluation loss of the model on the local " +"dataset." msgstr "**loss** (*float*) -- 模型在本地数据集上的评估损失值。" #: flwr.client.numpy_client.NumPyClient.evaluate:12 of @@ -8323,43 +8270,41 @@ msgstr "**num_examples** (*int*) -- 用于评估的示例数量。" #: flwr.client.numpy_client.NumPyClient.evaluate:13 #: flwr.client.numpy_client.NumPyClient.fit:13 of msgid "" -"**metrics** (*Dict[str, Scalar]*) -- A dictionary mapping arbitrary string " -"keys to values of type bool, bytes, float, int, or str. It can be used to " -"communicate arbitrary values back to the server." 
+"**metrics** (*Dict[str, Scalar]*) -- A dictionary mapping arbitrary " +"string keys to values of type bool, bytes, float, int, or str. It can be " +"used to communicate arbitrary values back to the server." msgstr "" -"**metrics** (*Dict[str, Scalar]*) -- 将任意字符串键映射到 bool、bytes、" -"float、int 或 str 类型值的字典。它可用于将任意值传回服务器。" +"**metrics** (*Dict[str, Scalar]*) -- 将任意字符串键映射到 bool、bytes、float、int 或 " +"str 类型值的字典。它可用于将任意值传回服务器。" #: flwr.client.numpy_client.NumPyClient.evaluate:19 of msgid "" -"The previous return type format (int, float, float) and the extended format " -"(int, float, float, Dict[str, Scalar]) have been deprecated and removed " -"since Flower 0.19." +"The previous return type format (int, float, float) and the extended " +"format (int, float, float, Dict[str, Scalar]) have been deprecated and " +"removed since Flower 0.19." msgstr "" -"自 Flower 0.19 起,之前的返回类型格式(int、float、float)和扩展格式(int、" -"float、float、Dict[str, Scalar])已被弃用和移除。" +"自 Flower 0.19 起,之前的返回类型格式(int、float、float)和扩展格式(int、float、float、Dict[str," +" Scalar])已被弃用和移除。" #: flwr.client.numpy_client.NumPyClient.fit:5 of msgid "" -"Configuration parameters which allow the server to influence training on the " -"client. It can be used to communicate arbitrary values from the server to " -"the client, for example, to set the number of (local) training epochs." -msgstr "" -"允许服务器影响客户端训练的配置参数。它可用于将任意值从服务器传送到客户端,例" -"如设置(本地)训练遍历数。" +"Configuration parameters which allow the server to influence training on " +"the client. It can be used to communicate arbitrary values from the " +"server to the client, for example, to set the number of (local) training " +"epochs." +msgstr "允许服务器影响客户端训练的配置参数。它可用于将任意值从服务器传送到客户端,例如设置(本地)训练遍历数。" #: flwr.client.numpy_client.NumPyClient.fit:11 of msgid "" "* **parameters** (*NDArrays*) -- The locally updated model parameters. * " "**num_examples** (*int*) -- The number of examples used for training. * " -"**metrics** (*Dict[str, Scalar]*) -- A dictionary mapping arbitrary string " -"keys to values of type bool, bytes, float, int, or str. It can be used to " -"communicate arbitrary values back to the server." +"**metrics** (*Dict[str, Scalar]*) -- A dictionary mapping arbitrary " +"string keys to values of type bool, bytes, float, int, or str. It can " +"be used to communicate arbitrary values back to the server." msgstr "" -"**parameters** (*NDArrays*) -- 本地更新的模型参数。**num_examples** (*int*) " -"-- 用于训练的示例数量。**metrics** (*Dict[str, Scalar]*) -- 将任意字符串键映" -"射到 bool、bytes、float、int 或 str 类型值的字典。它可用于将任意值传回服务" -"器。" +"**parameters** (*NDArrays*) -- 本地更新的模型参数。**num_examples** (*int*) -- " +"用于训练的示例数量。**metrics** (*Dict[str, Scalar]*) -- 将任意字符串键映射到 " +"bool、bytes、float、int 或 str 类型值的字典。它可用于将任意值传回服务器。" #: flwr.client.numpy_client.NumPyClient.fit:11 of msgid "**parameters** (*NDArrays*) -- The locally updated model parameters." @@ -8371,37 +8316,41 @@ msgstr "**num_examples** (*int*) -- 用于训练的数据数量。" #: flwr.client.numpy_client.NumPyClient.get_parameters:3 of msgid "" -"Configuration parameters requested by the server. This can be used to tell " -"the client which parameters are needed along with some Scalar attributes." -msgstr "" -"服务器请求的配置参数。这可以用来告诉客户端需要哪些参数以及一些标量属性。" +"Configuration parameters requested by the server. This can be used to " +"tell the client which parameters are needed along with some Scalar " +"attributes." 
+msgstr "服务器请求的配置参数。这可以用来告诉客户端需要哪些参数以及一些标量属性。" #: flwr.client.numpy_client.NumPyClient.get_parameters:8 of -msgid "" -"**parameters** -- The local model parameters as a list of NumPy ndarrays." +msgid "**parameters** -- The local model parameters as a list of NumPy ndarrays." msgstr "**parameters** -- NumPy ndarrays 的本地模型参数列表。" #: flwr.client.numpy_client.NumPyClient.get_properties:3 of msgid "" -"Configuration parameters requested by the server. This can be used to tell " -"the client which properties are needed along with some Scalar attributes." -msgstr "" -"服务器请求的配置参数。这可以用来告诉客户端需要哪些属性以及一些标量属性。" +"Configuration parameters requested by the server. This can be used to " +"tell the client which properties are needed along with some Scalar " +"attributes." +msgstr "服务器请求的配置参数。这可以用来告诉客户端需要哪些属性以及一些标量属性。" #: flwr.client.numpy_client.NumPyClient.get_properties:8 of msgid "" -"**properties** -- A dictionary mapping arbitrary string keys to values of " -"type bool, bytes, float, int, or str. It can be used to communicate " +"**properties** -- A dictionary mapping arbitrary string keys to values of" +" type bool, bytes, float, int, or str. It can be used to communicate " "arbitrary property values back to the server." msgstr "" -"**properties** -- 将任意字符串键映射到 bool、bytes、float、int 或 str 类型值" -"的字典。它可用于将任意属性值传回服务器。" +"**properties** -- 将任意字符串键映射到 bool、bytes、float、int 或 str " +"类型值的字典。它可用于将任意属性值传回服务器。" #: ../../source/ref-api/flwr.client.run_client_app.rst:2 #, fuzzy msgid "run\\_client\\_app" msgstr "run\\_client\\_app" +#: ../../source/ref-api/flwr.client.run_supernode.rst:2 +#, fuzzy +msgid "run\\_supernode" +msgstr "flower-superlink" + #: ../../source/ref-api/flwr.client.start_client.rst:2 #, fuzzy msgid "start\\_client" @@ -8410,7 +8359,8 @@ msgstr "启动客户端" #: flwr.client.app.start_client:3 flwr.client.app.start_numpy_client:9 of msgid "" "The IPv4 or IPv6 address of the server. If the Flower server runs on the " -"same machine on port 8080, then `server_address` would be `\"[::]:8080\"`." +"same machine on port 8080, then `server_address` would be " +"`\"[::]:8080\"`." msgstr "" "服务器的 IPv4 或 IPv6 地址:如果 Flower 服务器在同一台机器上运行,端口为 " "8080,则`server_address`应为`\"[::]:8080\"`。" @@ -8421,70 +8371,62 @@ msgstr "用于实例化客户端的可调用程序。(默认值:无)" #: flwr.client.app.start_client:9 of msgid "" -"An implementation of the abstract base class `flwr.client.Client` (default: " -"None)" +"An implementation of the abstract base class `flwr.client.Client` " +"(default: None)" msgstr "抽象基类 `flwr.client.Client` 的实现(默认值:无)" #: flwr.client.app.start_client:12 flwr.client.app.start_numpy_client:15 of msgid "" -"The maximum length of gRPC messages that can be exchanged with the Flower " -"server. The default should be sufficient for most models. Users who train " -"very large models might need to increase this value. Note that the Flower " -"server needs to be started with the same value (see `flwr.server." -"start_server`), otherwise it will not know about the increased limit and " -"block larger messages." +"The maximum length of gRPC messages that can be exchanged with the Flower" +" server. The default should be sufficient for most models. Users who " +"train very large models might need to increase this value. Note that the " +"Flower server needs to be started with the same value (see " +"`flwr.server.start_server`), otherwise it will not know about the " +"increased limit and block larger messages." 
msgstr "" -"可与 Flower 服务器交换的 gRPC 信息的最大长度:默认值对大多数模型都足够了。训" -"练超大模型的用户可能需要增加该值。请注意,Flower 服务器需要以相同的值启动(请" -"参阅 `flwr.server.start_server`),否则它将不知道增加的限制并阻止更大的消息。" +"可与 Flower 服务器交换的 gRPC 信息的最大长度:默认值对大多数模型都足够了。训练超大模型的用户可能需要增加该值。请注意,Flower " +"服务器需要以相同的值启动(请参阅 `flwr.server.start_server`),否则它将不知道增加的限制并阻止更大的消息。" -#: flwr.client.app.start_client:19 flwr.client.app.start_numpy_client:22 -#: flwr.server.compat.app.start_driver:21 of +#: flwr.client.app.start_client:19 flwr.client.app.start_numpy_client:22 of msgid "" "The PEM-encoded root certificates as a byte string or a path string. If " -"provided, a secure connection using the certificates will be established to " -"an SSL-enabled Flower server." -msgstr "" -"字节字符串或路径字符串形式的 PEM 编码根证书。如果提供,将使用这些证书与启用 " -"SSL 的 Flower 服务器建立安全连接。" +"provided, a secure connection using the certificates will be established " +"to an SSL-enabled Flower server." +msgstr "字节字符串或路径字符串形式的 PEM 编码根证书。如果提供,将使用这些证书与启用 SSL 的 Flower 服务器建立安全连接。" #: flwr.client.app.start_client:23 flwr.client.app.start_numpy_client:26 of #, fuzzy msgid "" -"Starts an insecure gRPC connection when True. Enables HTTPS connection when " -"False, using system certificates if `root_certificates` is None." +"Starts an insecure gRPC connection when True. Enables HTTPS connection " +"when False, using system certificates if `root_certificates` is None." msgstr "" -"为 True 时启动不安全的 gRPC 连接。False 时启用 HTTPS 连接,如果 " -"`root_certificates` 为 None,则使用系统证书。" +"为 True 时启动不安全的 gRPC 连接。False 时启用 HTTPS 连接,如果 `root_certificates` 为 " +"None,则使用系统证书。" #: flwr.client.app.start_client:26 flwr.client.app.start_numpy_client:29 of msgid "" "Configure the transport layer. Allowed values: - 'grpc-bidi': gRPC, " -"bidirectional streaming - 'grpc-rere': gRPC, request-response (experimental) " -"- 'rest': HTTP (experimental)" +"bidirectional streaming - 'grpc-rere': gRPC, request-response " +"(experimental) - 'rest': HTTP (experimental)" msgstr "" -"配置传输层:允许的值包括 - 'grpc-bidi': gRPC,双向流 - 'grpc-rere': gRPC,请" -"求-响应(实验性) - 'rest': HTTP(实验性)" +"配置传输层:允许的值包括 - 'grpc-bidi': gRPC,双向流 - 'grpc-rere': gRPC,请求-响应(实验性) - " +"'rest': HTTP(实验性)" #: flwr.client.app.start_client:31 of #, fuzzy msgid "" "The maximum number of times the client will try to connect to the server " -"before giving up in case of a connection error. If set to None, there is no " -"limit to the number of tries." -msgstr "" -"客户端在出现连接错误时放弃连接服务器的最大尝试次数。如果设置为 \"无\",则不限" -"制尝试次数。" +"before giving up in case of a connection error. If set to None, there is " +"no limit to the number of tries." +msgstr "客户端在出现连接错误时放弃连接服务器的最大尝试次数。如果设置为 \"无\",则不限制尝试次数。" #: flwr.client.app.start_client:35 of #, fuzzy msgid "" -"The maximum duration before the client stops trying to connect to the server " -"in case of connection error. If set to None, there is no limit to the total " -"time." -msgstr "" -"在出现连接错误时,客户端停止尝试连接服务器之前的最长持续时间。如果设置为 \"无" -"\",则总时间没有限制。" +"The maximum duration before the client stops trying to connect to the " +"server in case of connection error. If set to None, there is no limit to " +"the total time." +msgstr "在出现连接错误时,客户端停止尝试连接服务器之前的最长持续时间。如果设置为 \"无\",则总时间没有限制。" #: flwr.client.app.start_client:42 flwr.client.app.start_numpy_client:37 of msgid "Starting a gRPC client with an insecure server connection:" @@ -8508,13 +8450,13 @@ msgstr "start_numpy_client" #: flwr.client.app.start_numpy_client:5 of #, fuzzy msgid "" -"This function is deprecated since 1.7.0. Use :code:`flwr.client." 
-"start_client` instead and first convert your :code:`NumPyClient` to type :" -"code:`flwr.client.Client` by executing its :code:`to_client()` method." +"This function is deprecated since 1.7.0. Use " +":code:`flwr.client.start_client` instead and first convert your " +":code:`NumPyClient` to type :code:`flwr.client.Client` by executing its " +":code:`to_client()` method." msgstr "" -"自 1.7.0 起该函数已被弃用。请使用 :code:`flwr.client.start_client`,并首先通" -"过执行 :code:`to_client()`方法将 :code:`NumPyClient`转换为 :code:`flwr." -"client.Client`。" +"自 1.7.0 起该函数已被弃用。请使用 :code:`flwr.client.start_client`,并首先通过执行 " +":code:`to_client()`方法将 :code:`NumPyClient`转换为 :code:`flwr.client.Client`。" #: flwr.client.app.start_numpy_client:13 of msgid "An implementation of the abstract base class `flwr.client.NumPyClient`." @@ -8526,10 +8468,8 @@ msgstr "常见" #: ../../source/ref-api/flwr.common.rst:30::1 #, fuzzy -msgid "" -":py:obj:`array_from_numpy `\\ \\(ndarray\\)" -msgstr "" -":py:obj:`array_from_numpy `\\ \\(ndarray\\)" +msgid ":py:obj:`array_from_numpy `\\ \\(ndarray\\)" +msgstr ":py:obj:`array_from_numpy `\\ \\(ndarray\\)" #: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.record.conversion_utils.array_from_numpy:1 of @@ -8539,10 +8479,8 @@ msgstr "将参数对象转换为 NumPy ndarrays。" #: ../../source/ref-api/flwr.common.rst:30::1 #, fuzzy -msgid "" -":py:obj:`bytes_to_ndarray `\\ \\(tensor\\)" -msgstr "" -":py:obj:`bytes_to_ndarray `\\ \\(tensor\\)" +msgid ":py:obj:`bytes_to_ndarray `\\ \\(tensor\\)" +msgstr ":py:obj:`bytes_to_ndarray `\\ \\(tensor\\)" #: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.parameter.bytes_to_ndarray:1 of @@ -8594,10 +8532,8 @@ msgstr "以整数严重性 \"级别 \"记录 \"msg % args\"。" #: ../../source/ref-api/flwr.common.rst:30::1 #, fuzzy -msgid "" -":py:obj:`ndarray_to_bytes `\\ \\(ndarray\\)" -msgstr "" -":py:obj:`ndarray_to_bytes `\\ \\(ndarray\\)" +msgid ":py:obj:`ndarray_to_bytes `\\ \\(ndarray\\)" +msgstr ":py:obj:`ndarray_to_bytes `\\ \\(ndarray\\)" #: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.parameter.ndarray_to_bytes:1 of @@ -8648,9 +8584,11 @@ msgstr "将参数对象转换为 NumPy ndarrays。" #: ../../source/ref-api/flwr.common.rst:64::1 #, fuzzy msgid "" -":py:obj:`Array `\\ \\(dtype\\, shape\\, stype\\, data\\)" +":py:obj:`Array `\\ \\(dtype\\, shape\\, stype\\, " +"data\\)" msgstr "" -":py:obj:`Array `\\ \\(dtype\\, shape\\, stype\\, data\\)" +":py:obj:`Array `\\ \\(dtype\\, shape\\, stype\\, " +"data\\)" #: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.record.parametersrecord.Array:1 of @@ -8688,8 +8626,9 @@ msgid "" ":py:obj:`ConfigsRecord `\\ " "\\(\\[configs\\_dict\\, keep\\_input\\]\\)" msgstr "" -"Flower 1.0: ``start_server(..., config=flwr.server." 
-"ServerConfig(num_rounds=3, round_timeout=600.0), ...)``" +"Flower 1.0: ``start_server(..., " +"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " +"...)``" #: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.record.configsrecord.ConfigsRecord:1 of @@ -8721,9 +8660,11 @@ msgstr "客户端向服务器发送 DisconnectRes 信息。" #: ../../source/ref-api/flwr.common.rst:64::1 #, fuzzy msgid "" -":py:obj:`EvaluateIns `\\ \\(parameters\\, config\\)" +":py:obj:`EvaluateIns `\\ \\(parameters\\, " +"config\\)" msgstr "" -":py:obj:`EvaluateIns `\\ \\(parameters\\, config\\)" +":py:obj:`EvaluateIns `\\ \\(parameters\\, " +"config\\)" #: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.EvaluateIns:1 of @@ -8791,10 +8732,8 @@ msgstr "数据类,用于存储所发生错误的相关信息。" #: ../../source/ref-api/flwr.common.rst:64::1 #, fuzzy -msgid "" -":py:obj:`GetParametersIns `\\ \\(config\\)" -msgstr "" -":py:obj:`GetParametersIns `\\ \\(config\\)" +msgid ":py:obj:`GetParametersIns `\\ \\(config\\)" +msgstr ":py:obj:`GetParametersIns `\\ \\(config\\)" #: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.GetParametersIns:1 of @@ -8817,10 +8756,8 @@ msgstr "要求返回参数时的响应。" #: ../../source/ref-api/flwr.common.rst:64::1 #, fuzzy -msgid "" -":py:obj:`GetPropertiesIns `\\ \\(config\\)" -msgstr "" -":py:obj:`GetPropertiesIns `\\ \\(config\\)" +msgid ":py:obj:`GetPropertiesIns `\\ \\(config\\)" +msgstr ":py:obj:`GetPropertiesIns `\\ \\(config\\)" #: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.GetPropertiesIns:1 of @@ -8881,11 +8818,11 @@ msgstr "传统信息类型。" #: ../../source/ref-api/flwr.common.rst:64::1 #, fuzzy msgid "" -":py:obj:`Metadata `\\ \\(run\\_id\\, message\\_id\\, " -"src\\_node\\_id\\, ...\\)" +":py:obj:`Metadata `\\ \\(run\\_id\\, " +"message\\_id\\, src\\_node\\_id\\, ...\\)" msgstr "" -":py:obj:`Metadata `\\ \\(run\\_id\\, message\\_id\\, " -"src\\_node\\_id\\, ...\\)" +":py:obj:`Metadata `\\ \\(run\\_id\\, " +"message\\_id\\, src\\_node\\_id\\, ...\\)" #: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.message.Metadata:1 of @@ -8916,11 +8853,11 @@ msgstr ":py:obj:`NDArray `\\" #: ../../source/ref-api/flwr.common.rst:64::1 #, fuzzy msgid "" -"alias of :py:class:`~numpy.ndarray`\\ [:py:obj:`~typing.Any`, :py:class:" -"`~numpy.dtype`\\ [:py:obj:`~typing.Any`]]" +"alias of :py:class:`~numpy.ndarray`\\ [:py:obj:`~typing.Any`, " +":py:class:`~numpy.dtype`\\ [:py:obj:`~typing.Any`]]" msgstr "" -"alias of :py:class:`~numpy.ndarray`\\ [:py:obj:`~typing.Any`, :py:class:" -"`~numpy.dtype`\\ [:py:obj:`~typing.Any`]]" +"alias of :py:class:`~numpy.ndarray`\\ [:py:obj:`~typing.Any`, " +":py:class:`~numpy.dtype`\\ [:py:obj:`~typing.Any`]]" #: ../../source/ref-api/flwr.common.rst:64::1 #, fuzzy @@ -9015,28 +8952,24 @@ msgstr "数据类,包含数组类或张量类对象的序列化数据以及相 #: flwr.common.record.parametersrecord.Array:6 of #, fuzzy msgid "" -"A string representing the data type of the serialised object (e.g. `np." -"float32`)" +"A string representing the data type of the serialised object (e.g. " +"`np.float32`)" msgstr "表示序列化对象数据类型的字符串(例如 `np.float32`)" #: flwr.common.record.parametersrecord.Array:8 of #, fuzzy msgid "" -"A list representing the shape of the unserialized array-like object. This is " -"used to deserialize the data (depending on the serialization method) or " -"simply as a metadata field." -msgstr "" -"代表未序列化数组对象形状的列表。它可用于反序列化数据(取决于序列化方法),或" -"仅作为元数据字段使用。" +"A list representing the shape of the unserialized array-like object. 
This" +" is used to deserialize the data (depending on the serialization method) " +"or simply as a metadata field." +msgstr "代表未序列化数组对象形状的列表。它可用于反序列化数据(取决于序列化方法),或仅作为元数据字段使用。" #: flwr.common.record.parametersrecord.Array:12 of #, fuzzy msgid "" -"A string indicating the type of serialisation mechanism used to generate the " -"bytes in `data` from an array-like or tensor-like object." -msgstr "" -"表示序列化机制类型的字符串,用于从类似数组或类似张量的对象中生成 `data` 中的" -"字节。" +"A string indicating the type of serialisation mechanism used to generate " +"the bytes in `data` from an array-like or tensor-like object." +msgstr "表示序列化机制类型的字符串,用于从类似数组或类似张量的对象中生成 `data` 中的字节。" #: flwr.common.record.parametersrecord.Array:15 of #, fuzzy @@ -9092,16 +9025,20 @@ msgstr ":py:obj:`fit_res `\\" #: ../../source/ref-api/flwr.common.ClientMessage.rst:31::1 #, fuzzy msgid "" -":py:obj:`get_parameters_res `\\" +":py:obj:`get_parameters_res " +"`\\" msgstr "" -":py:obj:`get_parameters_res `\\" +":py:obj:`get_parameters_res " +"`\\" #: ../../source/ref-api/flwr.common.ClientMessage.rst:31::1 #, fuzzy msgid "" -":py:obj:`get_properties_res `\\" +":py:obj:`get_properties_res " +"`\\" msgstr "" -":py:obj:`get_properties_res `\\" +":py:obj:`get_properties_res " +"`\\" #: ../../source/ref-api/flwr.common.Code.rst:2 #, fuzzy @@ -9121,20 +9058,20 @@ msgstr ":py:obj:`OK `\\" #: ../../source/ref-api/flwr.common.Code.rst:26::1 #, fuzzy msgid "" -":py:obj:`GET_PROPERTIES_NOT_IMPLEMENTED `\\" +":py:obj:`GET_PROPERTIES_NOT_IMPLEMENTED " +"`\\" msgstr "" -":py:obj:`GET_PROPERTIES_NOT_IMPLEMENTED `\\" +":py:obj:`GET_PROPERTIES_NOT_IMPLEMENTED " +"`\\" #: ../../source/ref-api/flwr.common.Code.rst:26::1 #, fuzzy msgid "" -":py:obj:`GET_PARAMETERS_NOT_IMPLEMENTED `\\" +":py:obj:`GET_PARAMETERS_NOT_IMPLEMENTED " +"`\\" msgstr "" -":py:obj:`GET_PARAMETERS_NOT_IMPLEMENTED `\\" +":py:obj:`GET_PARAMETERS_NOT_IMPLEMENTED " +"`\\" #: ../../source/ref-api/flwr.common.Code.rst:26::1 #, fuzzy @@ -9144,11 +9081,11 @@ msgstr ":py:obj:`FIT_NOT_IMPLEMENTED `\\" #: ../../source/ref-api/flwr.common.Code.rst:26::1 #, fuzzy msgid "" -":py:obj:`EVALUATE_NOT_IMPLEMENTED `\\" +":py:obj:`EVALUATE_NOT_IMPLEMENTED " +"`\\" msgstr "" -":py:obj:`EVALUATE_NOT_IMPLEMENTED `\\" +":py:obj:`EVALUATE_NOT_IMPLEMENTED " +"`\\" #: ../../source/ref-api/flwr.common.ConfigsRecord.rst:2 #, fuzzy @@ -9158,19 +9095,20 @@ msgstr "配置日志记录" #: flwr.common.record.configsrecord.ConfigsRecord:1 of #, fuzzy msgid "" -"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ [:py:class:" -"`str`, :py:obj:`~typing.Union`\\ [:py:class:`int`, :py:class:`float`, :py:" -"class:`str`, :py:class:`bytes`, :py:class:`bool`, :py:class:`~typing.List`\\ " -"[:py:class:`int`], :py:class:`~typing.List`\\ [:py:class:`float`], :py:class:" -"`~typing.List`\\ [:py:class:`str`], :py:class:`~typing.List`\\ [:py:class:" -"`bytes`], :py:class:`~typing.List`\\ [:py:class:`bool`]]]" +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:class:`int` | :py:class:`float` | :py:class:`str` |" +" :py:class:`bytes` | :py:class:`bool` | :py:class:`~typing.List`\\ " +"[:py:class:`int`] | :py:class:`~typing.List`\\ [:py:class:`float`] | " +":py:class:`~typing.List`\\ [:py:class:`str`] | :py:class:`~typing.List`\\" +" [:py:class:`bytes`] | :py:class:`~typing.List`\\ [:py:class:`bool`]]" msgstr "" -"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ [:py:class:" -"`str`, :py:obj:`~typing.Union`\\ [:py:class:`int`, :py:class:`float`, :py:" -"class:`str`, :py:class:`bytes`, :py:class:`bool`, 
:py:class:`~typing.List`\\ " -"[:py:class:`int`], :py:class:`~typing.List`\\ [:py:class:`float`], :py:class:" -"`~typing.List`\\ [:py:class:`str`], :py:class:`~typing.List`\\ [:py:class:" -"`bytes`], :py:class:`~typing.List`\\ [:py:class:`bool`]]]" +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:obj:`~typing.Union`\\ [:py:class:`int`, " +":py:class:`float`, :py:class:`str`, :py:class:`bytes`, :py:class:`bool`, " +":py:class:`~typing.List`\\ [:py:class:`int`], :py:class:`~typing.List`\\ " +"[:py:class:`float`], :py:class:`~typing.List`\\ [:py:class:`str`], " +":py:class:`~typing.List`\\ [:py:class:`bytes`], " +":py:class:`~typing.List`\\ [:py:class:`bool`]]]" #: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy @@ -9225,8 +9163,7 @@ msgstr ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" #: flwr.common.record.typeddict.TypedDict.clear:1::1 #: flwr.common.record.typeddict.TypedDict.pop:1 of #, fuzzy -msgid "" -"If key is not found, d is returned if given, otherwise KeyError is raised." +msgid "If key is not found, d is returned if given, otherwise KeyError is raised." msgstr "如果未找到 key,则返回 d(如果给定),否则引发 KeyError。" #: flwr.common.record.typeddict.TypedDict.clear:1::1 of @@ -9262,15 +9199,13 @@ msgstr "背景" #: flwr.common.context.Context:3 of #, fuzzy msgid "" -"Holds records added by the entity in a given run and that will stay local. " -"This means that the data it holds will never leave the system it's running " -"from. This can be used as an intermediate storage or scratchpad when " -"executing mods. It can also be used as a memory to access at different " -"points during the lifecycle of this entity (e.g. across multiple rounds)" -msgstr "" -"保存实体在给定运行中添加的记录,这些记录将保留在本地。这意味着它保存的数据永" -"远不会离开运行的系统。在执行模式时,它可用作中间存储或抓取板。它还可以作为存" -"储器,在实体生命周期的不同阶段(如多轮)进行访问。" +"Holds records added by the entity in a given run and that will stay " +"local. This means that the data it holds will never leave the system it's" +" running from. This can be used as an intermediate storage or scratchpad " +"when executing mods. It can also be used as a memory to access at " +"different points during the lifecycle of this entity (e.g. across " +"multiple rounds)" +msgstr "保存实体在给定运行中添加的记录,这些记录将保留在本地。这意味着它保存的数据永远不会离开运行的系统。在执行模式时,它可用作中间存储或抓取板。它还可以作为存储器,在实体生命周期的不同阶段(如多轮)进行访问。" #: ../../source/ref-api/flwr.common.Context.rst:28::1 #, fuzzy @@ -9372,2162 +9307,2941 @@ msgstr "返回类型" msgid "Bases: :py:class:`str`, :py:class:`~enum.Enum`" msgstr "Bases: :py:class:`str`, :py:class:`~enum.Enum`" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -#, fuzzy -msgid ":py:obj:`PING `\\" -msgstr ":py:obj:`PING `\\" - -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy msgid "" -":py:obj:`START_CLIENT_ENTER `\\" +":py:obj:`encode `\\ \\(\\[encoding\\, " +"errors\\]\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.encode:1 of +msgid "Encode the string using the codec registered for encoding." 
msgstr "" -":py:obj:`START_CLIENT_ENTER `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy msgid "" -":py:obj:`START_CLIENT_LEAVE `\\" +":py:obj:`replace `\\ \\(old\\, new\\[\\, " +"count\\]\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.replace:1 of +msgid "Return a copy with all occurrences of substring old replaced by new." msgstr "" -":py:obj:`START_CLIENT_LEAVE `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy msgid "" -":py:obj:`START_SERVER_ENTER `\\" -msgstr "" -":py:obj:`START_SERVER_ENTER `\\" +":py:obj:`split `\\ \\(\\[sep\\, " +"maxsplit\\]\\)" +msgstr ":py:obj:`PING `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -#, fuzzy +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.rsplit:1 flwr.common.EventType.split:1 of msgid "" -":py:obj:`START_SERVER_LEAVE `\\" +"Return a list of the substrings in the string, using sep as the separator" +" string." msgstr "" -":py:obj:`START_SERVER_LEAVE `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy msgid "" -":py:obj:`RUN_DRIVER_API_ENTER `\\" +":py:obj:`rsplit `\\ \\(\\[sep\\, " +"maxsplit\\]\\)" +msgstr ":py:obj:`PING `\\" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#, fuzzy +msgid ":py:obj:`join `\\ \\(iterable\\, \\/\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.join:1 of +msgid "Concatenate any number of strings." msgstr "" -":py:obj:`RUN_DRIVER_API_ENTER `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "" -":py:obj:`RUN_DRIVER_API_LEAVE `\\" +msgid ":py:obj:`capitalize `\\ \\(\\)" +msgstr ":py:obj:`PING `\\" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.capitalize:1 of +msgid "Return a capitalized version of the string." msgstr "" -":py:obj:`RUN_DRIVER_API_LEAVE `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "" -":py:obj:`RUN_FLEET_API_ENTER `\\" +msgid ":py:obj:`casefold `\\ \\(\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.casefold:1 of +msgid "Return a version of the string suitable for caseless comparisons." msgstr "" -":py:obj:`RUN_FLEET_API_ENTER `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "" -":py:obj:`RUN_FLEET_API_LEAVE `\\" +msgid ":py:obj:`title `\\ \\(\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.title:1 of +msgid "Return a version of the string where each word is titlecased." 
msgstr "" -":py:obj:`RUN_FLEET_API_LEAVE `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy msgid "" -":py:obj:`RUN_SUPERLINK_ENTER `\\" -msgstr "" -":py:obj:`RUN_SUPERLINK_ENTER `\\" +":py:obj:`center `\\ \\(width\\[\\, " +"fillchar\\]\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.center:1 of +#, fuzzy +msgid "Return a centered string of length width." +msgstr "返回客户端的属性集。" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy msgid "" -":py:obj:`RUN_SUPERLINK_LEAVE `\\" +":py:obj:`count `\\ \\(sub\\[\\, start\\[\\, " +"end\\]\\]\\)" +msgstr ":py:obj:`Context `\\ \\(state\\)" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +"Return the number of non-overlapping occurrences of substring sub in " +"string S[start:end]." msgstr "" -":py:obj:`RUN_SUPERLINK_LEAVE `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy msgid "" -":py:obj:`START_SIMULATION_ENTER `\\" +":py:obj:`expandtabs `\\ " +"\\(\\[tabsize\\]\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.expandtabs:1 of +msgid "Return a copy where all tab characters are expanded using spaces." msgstr "" -":py:obj:`START_SIMULATION_ENTER `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy msgid "" -":py:obj:`START_SIMULATION_LEAVE `\\" +":py:obj:`find `\\ \\(sub\\[\\, start\\[\\, " +"end\\]\\]\\)" +msgstr ":py:obj:`PING `\\" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "" +"Return the lowest index in S where substring sub is found, such that sub " +"is contained within S[start:end]." msgstr "" -":py:obj:`START_SIMULATION_LEAVE `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid ":py:obj:`DRIVER_CONNECT `\\" -msgstr ":py:obj:`DRIVER_CONNECT `\\" +msgid ":py:obj:`partition `\\ \\(sep\\, \\/\\)" +msgstr ":py:obj:`partition_id `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -#, fuzzy -msgid ":py:obj:`DRIVER_DISCONNECT `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.partition:1 flwr.common.EventType.rpartition:1 of +msgid "Partition the string into three parts using the given separator." msgstr "" -":py:obj:`DRIVER_DISCONNECT `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy msgid "" -":py:obj:`START_DRIVER_ENTER `\\" -msgstr "" -":py:obj:`START_DRIVER_ENTER `\\" +":py:obj:`index `\\ \\(sub\\[\\, start\\[\\, " +"end\\]\\]\\)" +msgstr ":py:obj:`Context `\\ \\(state\\)" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy msgid "" -":py:obj:`START_DRIVER_LEAVE `\\" +":py:obj:`ljust `\\ \\(width\\[\\, " +"fillchar\\]\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.ljust:1 of +msgid "Return a left-justified string of length width." 
msgstr "" -":py:obj:`START_DRIVER_LEAVE `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "" -":py:obj:`RUN_CLIENT_APP_ENTER `\\" +msgid ":py:obj:`lower `\\ \\(\\)" +msgstr ":py:obj:`now `\\ \\(\\)" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.lower:1 of +msgid "Return a copy of the string converted to lowercase." msgstr "" -":py:obj:`RUN_CLIENT_APP_ENTER `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "" -":py:obj:`RUN_CLIENT_APP_LEAVE `\\" +msgid ":py:obj:`lstrip `\\ \\(\\[chars\\]\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.lstrip:1 of +msgid "Return a copy of the string with leading whitespace removed." msgstr "" -":py:obj:`RUN_CLIENT_APP_LEAVE `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy msgid "" -":py:obj:`RUN_SERVER_APP_ENTER `\\" -msgstr "" -":py:obj:`RUN_SERVER_APP_ENTER `\\" +":py:obj:`rfind `\\ \\(sub\\[\\, start\\[\\, " +"end\\]\\]\\)" +msgstr ":py:obj:`PING `\\" -#: ../../source/ref-api/flwr.common.EventType.rst:42::1 -#, fuzzy +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 msgid "" -":py:obj:`RUN_SERVER_APP_LEAVE `\\" +"Return the highest index in S where substring sub is found, such that sub" +" is contained within S[start:end]." msgstr "" -":py:obj:`RUN_SERVER_APP_LEAVE `\\" -#: ../../source/ref-api/flwr.common.FitIns.rst:2 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "FitIns" -msgstr "FitIns" +msgid "" +":py:obj:`rindex `\\ \\(sub\\[\\, " +"start\\[\\, end\\]\\]\\)" +msgstr ":py:obj:`Context `\\ \\(state\\)" -#: ../../source/ref-api/flwr.common.FitIns.rst:29::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid ":py:obj:`parameters `\\" -msgstr ":py:obj:`parameters `\\" +msgid "" +":py:obj:`rjust `\\ \\(width\\[\\, " +"fillchar\\]\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" -#: ../../source/ref-api/flwr.common.FitIns.rst:29::1 -#, fuzzy -msgid ":py:obj:`config `\\" -msgstr ":py:obj:`config `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.rjust:1 of +msgid "Return a right-justified string of length width." +msgstr "" -#: ../../source/ref-api/flwr.common.FitRes.rst:2 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "FitRes" -msgstr "FitRes" +msgid ":py:obj:`rstrip `\\ \\(\\[chars\\]\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" -#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 -#, fuzzy -msgid ":py:obj:`status `\\" -msgstr ":py:obj:`status `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.rstrip:1 of +msgid "Return a copy of the string with trailing whitespace removed." 
+msgstr "" -#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid ":py:obj:`parameters `\\" -msgstr ":py:obj:`parameters `\\" +msgid ":py:obj:`rpartition `\\ \\(sep\\, \\/\\)" +msgstr ":py:obj:`partition_id `\\" -#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid ":py:obj:`num_examples `\\" -msgstr ":py:obj:`num_examples `\\" +msgid "" +":py:obj:`splitlines `\\ " +"\\(\\[keepends\\]\\)" +msgstr ":py:obj:`PING `\\" -#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 -#, fuzzy -msgid ":py:obj:`metrics `\\" -msgstr ":py:obj:`metrics `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.splitlines:1 of +msgid "Return a list of the lines in the string, breaking at line boundaries." +msgstr "" -#: ../../source/ref-api/flwr.common.GetParametersIns.rst:2 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "GetParametersIns" -msgstr "参数" +msgid ":py:obj:`strip `\\ \\(\\[chars\\]\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" -#: ../../source/ref-api/flwr.common.GetParametersIns.rst:28::1 -#, fuzzy -msgid ":py:obj:`config `\\" -msgstr ":py:obj:`config `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.strip:1 of +msgid "Return a copy of the string with leading and trailing whitespace removed." +msgstr "" -#: ../../source/ref-api/flwr.common.GetParametersRes.rst:2 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "GetParametersRes" -msgstr "参数" +msgid ":py:obj:`swapcase `\\ \\(\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" -#: ../../source/ref-api/flwr.common.GetParametersRes.rst:29::1 -#, fuzzy -msgid ":py:obj:`status `\\" -msgstr ":py:obj:`status `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.swapcase:1 of +msgid "" +"Convert uppercase characters to lowercase and lowercase characters to " +"uppercase." +msgstr "" -#: ../../source/ref-api/flwr.common.GetParametersRes.rst:29::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid ":py:obj:`parameters `\\" -msgstr ":py:obj:`parameters `\\" +msgid ":py:obj:`translate `\\ \\(table\\, \\/\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" -#: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:2 -#, fuzzy -msgid "GetPropertiesIns" -msgstr "GetPropertiesIns" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.translate:1 of +msgid "Replace each character in the string using the given translation table." +msgstr "" -#: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:28::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid ":py:obj:`config `\\" -msgstr ":py:obj:`config `\\" +msgid ":py:obj:`upper `\\ \\(\\)" +msgstr ":py:obj:`PING `\\" -#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:2 -#, fuzzy -msgid "GetPropertiesRes" -msgstr "GetPropertiesRes" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.upper:1 of +msgid "Return a copy of the string converted to uppercase." 
+msgstr "" -#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:29::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid ":py:obj:`status `\\" -msgstr ":py:obj:`status `\\" +msgid "" +":py:obj:`startswith `\\ \\(prefix\\[\\," +" start\\[\\, end\\]\\]\\)" +msgstr ":py:obj:`Status `\\ \\(code\\, message\\)" -#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:29::1 -#, fuzzy -msgid ":py:obj:`properties `\\" -msgstr ":py:obj:`properties `\\" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "Return True if S starts with the specified prefix, False otherwise." +msgstr "" -#: ../../source/ref-api/flwr.common.Message.rst:2 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "Message" -msgstr "服务器端" +msgid "" +":py:obj:`endswith `\\ \\(suffix\\[\\, " +"start\\[\\, end\\]\\]\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" -#: flwr.common.Message.content:1::1 flwr.common.Message.metadata:1 -#: flwr.common.message.Message:3 of -#, fuzzy -msgid "A dataclass including information about the message to be executed." -msgstr "数据类型,包括要执行的信息的相关信息。" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "Return True if S ends with the specified suffix, False otherwise." +msgstr "" -#: flwr.common.message.Message:5 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy msgid "" -"Holds records either sent by another entity (e.g. sent by the server-side " -"logic to a client, or vice-versa) or that will be sent to it." +":py:obj:`removeprefix `\\ " +"\\(prefix\\, \\/\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.removeprefix:1 of +msgid "Return a str with the given prefix string removed if present." msgstr "" -"保存由其他实体发送的记录(如由服务器端逻辑发送到客户端,反之亦然)或将发送到" -"该实体的记录。" -#: flwr.common.message.Message:8 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy msgid "" -"A dataclass that captures information about an error that took place when " -"processing another message." -msgstr "数据类,用于捕捉处理其他报文时发生的错误信息。" +":py:obj:`removesuffix `\\ " +"\\(suffix\\, \\/\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -#, fuzzy -msgid "" -":py:obj:`create_error_reply `\\ " -"\\(error\\, ttl\\)" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.removesuffix:1 of +msgid "Return a str with the given suffix string removed if present." msgstr "" -":py:obj:`create_error_reply `\\ " -"\\(error\\, ttl\\)" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -#: flwr.common.message.Message.create_error_reply:1 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "Construct a reply message indicating an error happened." -msgstr "构建一条回复信息,说明发生了错误。" +msgid ":py:obj:`isascii `\\ \\(\\)" +msgstr ":py:obj:`PING `\\" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -#, fuzzy -msgid "" -":py:obj:`create_reply `\\ \\(content\\, " -"ttl\\)" +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isascii:1 of +msgid "Return True if all characters in the string are ASCII, False otherwise." msgstr "" -":py:obj:`create_reply `\\ \\(content\\, " -"ttl\\)" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -#: flwr.common.message.Message.create_reply:1 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "Create a reply to this message with specified content and TTL." 
-msgstr "以指定的内容和 TTL 创建对该信息的回复。" +msgid ":py:obj:`islower `\\ \\(\\)" +msgstr ":py:obj:`now `\\ \\(\\)" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.islower:1 of +msgid "Return True if the string is a lowercase string, False otherwise." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid ":py:obj:`has_content `\\ \\(\\)" -msgstr ":py:obj:`has_content `\\ \\(\\)" +msgid ":py:obj:`isupper `\\ \\(\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -#: flwr.common.message.Message.has_content:1 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isupper:1 of +msgid "Return True if the string is an uppercase string, False otherwise." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "Return True if message has content, else False." -msgstr "如果信息有内容,则返回 True,否则返回 False。" +msgid ":py:obj:`istitle `\\ \\(\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.istitle:1 of +msgid "Return True if the string is a title-cased string, False otherwise." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid ":py:obj:`has_error `\\ \\(\\)" -msgstr ":py:obj:`has_error `\\ \\(\\)" +msgid ":py:obj:`isspace `\\ \\(\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" -#: ../../source/ref-api/flwr.common.Message.rst:35::1 -#: flwr.common.message.Message.has_error:1 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isspace:1 of +msgid "Return True if the string is a whitespace string, False otherwise." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "Return True if message has an error, else False." -msgstr "如果信息有错误,则返回 True,否则返回 False。" +msgid ":py:obj:`isdecimal `\\ \\(\\)" +msgstr ":py:obj:`PING `\\" -#: flwr.common.Message.content:1::1 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isdecimal:1 of +msgid "Return True if the string is a decimal string, False otherwise." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid ":py:obj:`content `\\" -msgstr ":py:obj:`content `\\" +msgid ":py:obj:`isdigit `\\ \\(\\)" +msgstr ":py:obj:`PING `\\" -#: flwr.common.Message.content:1 flwr.common.Message.content:1::1 -#: of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isdigit:1 of +msgid "Return True if the string is a digit string, False otherwise." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "The content of this message." -msgstr "评估客户端的反应。" +msgid ":py:obj:`isnumeric `\\ \\(\\)" +msgstr ":py:obj:`PING `\\" -#: flwr.common.Message.content:1::1 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isnumeric:1 of +msgid "Return True if the string is a numeric string, False otherwise." 
+msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid ":py:obj:`error `\\" -msgstr ":py:obj:`error `\\" +msgid ":py:obj:`isalpha `\\ \\(\\)" +msgstr ":py:obj:`PING `\\" -#: flwr.common.Message.content:1::1 flwr.common.Message.error:1 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isalpha:1 of +msgid "Return True if the string is an alphabetic string, False otherwise." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "Error captured by this message." -msgstr "该信息捕捉到的错误。" +msgid ":py:obj:`isalnum `\\ \\(\\)" +msgstr ":py:obj:`PING `\\" -#: flwr.common.Message.content:1::1 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isalnum:1 of +msgid "Return True if the string is an alpha-numeric string, False otherwise." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid ":py:obj:`metadata `\\" -msgstr ":py:obj:`metadata `\\" +msgid ":py:obj:`isidentifier `\\ \\(\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" -#: flwr.common.message.Message.create_error_reply:3 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isidentifier:1 of +msgid "Return True if the string is a valid Python identifier, False otherwise." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "The error that was encountered." -msgstr "遇到的错误。" +msgid ":py:obj:`isprintable `\\ \\(\\)" +msgstr ":py:obj:`PING `\\" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.ttl:1 flwr.common.message.Message.create_error_reply:5 -#: flwr.common.message.Message.create_reply:9 flwr.common.message.Metadata:16 -#: of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.isprintable:1 of +msgid "Return True if the string is printable, False otherwise." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "Time-to-live for this message." -msgstr "该信息的有效时间。" +msgid ":py:obj:`zfill `\\ \\(width\\, \\/\\)" +msgstr ":py:obj:`PING `\\" -#: flwr.common.message.Message.create_reply:3 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.zfill:1 of +msgid "" +"Pad a numeric string with zeros on the left, to fill a field of the given" +" width." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy msgid "" -"The method generates a new `Message` as a reply to this message. It inherits " -"'run_id', 'src_node_id', 'dst_node_id', and 'message_type' from this message " -"and sets 'reply_to_message' to the ID of this message." +":py:obj:`format `\\ \\(\\*args\\, " +"\\*\\*kwargs\\)" msgstr "" -"该方法会生成一条新的 \"信息\",作为对该信息的回复。该方法继承了该消息的 " -"\"run_id\"、\"src_node_id\"、\"dst_node_id \"和 \"message_type\",并将 " -"\"reply_to_message \"设置为该消息的 ID。" +":py:obj:`log `\\ \\(level\\, msg\\, \\*args\\, " +"\\*\\*kwargs\\)" -#: flwr.common.message.Message.create_reply:7 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "Return a formatted version of S, using substitutions from args and kwargs." +msgstr "" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "The content for the reply message." -msgstr "回复信息的内容。" +msgid ":py:obj:`format_map `\\ \\(mapping\\)" +msgstr ":py:obj:`EventType `\\ \\(value\\)" + +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +msgid "Return a formatted version of S, using substitutions from mapping." 
+msgstr "" -#: flwr.common.message.Message.create_reply:12 of +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 #, fuzzy -msgid "A new `Message` instance representing the reply." -msgstr "代表回复的新的 `Message` 实例。" +msgid ":py:obj:`maketrans `\\" +msgstr ":py:obj:`TRAIN `\\" -#: ../../source/ref-api/flwr.common.MessageType.rst:2 +#: ../../source/ref-api/flwr.common.EventType.rst:163::1 +#: flwr.common.EventType.maketrans:1 of +msgid "Return a translation table usable for str.translate()." +msgstr "" + +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid "MessageType" -msgstr "返回类型" +msgid ":py:obj:`PING `\\" +msgstr ":py:obj:`PING `\\" -#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid ":py:obj:`EVALUATE `\\" -msgstr ":py:obj:`EVALUATE `\\" +msgid ":py:obj:`START_CLIENT_ENTER `\\" +msgstr ":py:obj:`START_CLIENT_ENTER `\\" -#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid ":py:obj:`QUERY `\\" -msgstr ":py:obj:`QUERY `\\" +msgid ":py:obj:`START_CLIENT_LEAVE `\\" +msgstr ":py:obj:`START_CLIENT_LEAVE `\\" -#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid ":py:obj:`TRAIN `\\" -msgstr ":py:obj:`TRAIN `\\" +msgid ":py:obj:`START_SERVER_ENTER `\\" +msgstr ":py:obj:`START_SERVER_ENTER `\\" -#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:2 +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid "MessageTypeLegacy" -msgstr "MessageTypeLegacy" +msgid ":py:obj:`START_SERVER_LEAVE `\\" +msgstr ":py:obj:`START_SERVER_LEAVE `\\" -#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy msgid "" -":py:obj:`GET_PARAMETERS `\\" +":py:obj:`RUN_DRIVER_API_ENTER " +"`\\" msgstr "" -":py:obj:`GET_PARAMETERS `\\" +":py:obj:`RUN_DRIVER_API_ENTER " +"`\\" -#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy msgid "" -":py:obj:`GET_PROPERTIES `\\" +":py:obj:`RUN_DRIVER_API_LEAVE " +"`\\" msgstr "" -":py:obj:`GET_PROPERTIES `\\" - -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.run_id:1 flwr.common.message.Metadata:3 of -#, fuzzy -msgid "An identifier for the current run." -msgstr "当前运行的标识符。" - -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.message_id:1 flwr.common.message.Metadata:5 of -#, fuzzy -msgid "An identifier for the current message." -msgstr "当前信息的标识符。" +":py:obj:`RUN_DRIVER_API_LEAVE " +"`\\" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.src_node_id:1 flwr.common.message.Metadata:7 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid "An identifier for the node sending this message." -msgstr "发送此信息的节点的标识符。" +msgid "" +":py:obj:`RUN_FLEET_API_ENTER " +"`\\" +msgstr "" +":py:obj:`RUN_FLEET_API_ENTER " +"`\\" -#: flwr.common.Metadata.dst_node_id:1 -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.message.Metadata:9 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid "An identifier for the node receiving this message." 
-msgstr "接收此信息的节点的标识符。" +msgid "" +":py:obj:`RUN_FLEET_API_LEAVE " +"`\\" +msgstr "" +":py:obj:`RUN_FLEET_API_LEAVE " +"`\\" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.reply_to_message:1 flwr.common.message.Metadata:11 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid "An identifier for the message this message replies to." -msgstr "该信息回复的信息的标识符。" +msgid "" +":py:obj:`RUN_SUPERLINK_ENTER " +"`\\" +msgstr "" +":py:obj:`RUN_SUPERLINK_ENTER " +"`\\" -#: flwr.common.message.Metadata:13 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy msgid "" -"An identifier for grouping messages. In some settings, this is used as the " -"FL round." -msgstr "用于分组报文的标识符。在某些设置中,它被用作 FL 轮。" +":py:obj:`RUN_SUPERLINK_LEAVE " +"`\\" +msgstr "" +":py:obj:`RUN_SUPERLINK_LEAVE " +"`\\" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.message_type:1 flwr.common.message.Metadata:18 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid "A string that encodes the action to be executed on the receiving end." -msgstr "编码接收端要执行的操作的字符串。" +msgid "" +":py:obj:`START_SIMULATION_ENTER " +"`\\" +msgstr "" +":py:obj:`START_SIMULATION_ENTER " +"`\\" -#: flwr.common.message.Metadata:21 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy msgid "" -"An identifier that can be used when loading a particular data partition for " -"a ClientApp. Making use of this identifier is more relevant when conducting " -"simulations." +":py:obj:`START_SIMULATION_LEAVE " +"`\\" msgstr "" -"为 ClientApp 加载特定数据分区时可使用的标识符。在进行模拟时,使用该标识符更有" -"意义。" +":py:obj:`START_SIMULATION_LEAVE " +"`\\" -#: flwr.common.Metadata.dst_node_id:1::1 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid ":py:obj:`dst_node_id `\\" -msgstr ":py:obj:`dst_node_id `\\" +msgid ":py:obj:`DRIVER_CONNECT `\\" +msgstr ":py:obj:`DRIVER_CONNECT `\\" -#: flwr.common.Metadata.dst_node_id:1::1 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid ":py:obj:`group_id `\\" -msgstr ":py:obj:`group_id `\\" +msgid ":py:obj:`DRIVER_DISCONNECT `\\" +msgstr ":py:obj:`DRIVER_DISCONNECT `\\" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.group_id:1 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid "An identifier for grouping messages." -msgstr "用于分组信息的标识符。" +msgid ":py:obj:`START_DRIVER_ENTER `\\" +msgstr ":py:obj:`START_DRIVER_ENTER `\\" -#: flwr.common.Metadata.dst_node_id:1::1 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid ":py:obj:`message_id `\\" -msgstr ":py:obj:`message_id `\\" +msgid ":py:obj:`START_DRIVER_LEAVE `\\" +msgstr ":py:obj:`START_DRIVER_LEAVE `\\" -#: flwr.common.Metadata.dst_node_id:1::1 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid ":py:obj:`message_type `\\" -msgstr ":py:obj:`message_type `\\" +msgid "" +":py:obj:`RUN_CLIENT_APP_ENTER " +"`\\" +msgstr "" +":py:obj:`RUN_CLIENT_APP_ENTER " +"`\\" -#: flwr.common.Metadata.dst_node_id:1::1 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid ":py:obj:`partition_id `\\" -msgstr ":py:obj:`partition_id `\\" +msgid "" +":py:obj:`RUN_CLIENT_APP_LEAVE " +"`\\" +msgstr "" +":py:obj:`RUN_CLIENT_APP_LEAVE " +"`\\" -#: flwr.common.Metadata.dst_node_id:1::1 -#: flwr.common.Metadata.partition_id:1 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid "An identifier telling which data partition a ClientApp should use." 
-msgstr "告诉 ClientApp 应使用哪个数据分区的标识符。" +msgid "" +":py:obj:`RUN_SERVER_APP_ENTER " +"`\\" +msgstr "" +":py:obj:`RUN_SERVER_APP_ENTER " +"`\\" -#: flwr.common.Metadata.dst_node_id:1::1 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid ":py:obj:`reply_to_message `\\" -msgstr ":py:obj:`reply_to_message `\\" +msgid "" +":py:obj:`RUN_SERVER_APP_LEAVE " +"`\\" +msgstr "" +":py:obj:`RUN_SERVER_APP_LEAVE " +"`\\" -#: flwr.common.Metadata.dst_node_id:1::1 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid ":py:obj:`run_id `\\" -msgstr ":py:obj:`run_id `\\" +msgid "" +":py:obj:`RUN_SUPERNODE_ENTER " +"`\\" +msgstr "" +":py:obj:`RUN_SUPERLINK_ENTER " +"`\\" -#: flwr.common.Metadata.dst_node_id:1::1 of +#: flwr.common.EventType.capitalize:1::1 of #, fuzzy -msgid ":py:obj:`src_node_id `\\" -msgstr ":py:obj:`src_node_id `\\" +msgid "" +":py:obj:`RUN_SUPERNODE_LEAVE " +"`\\" +msgstr "" +":py:obj:`RUN_SUPERLINK_LEAVE " +"`\\" -#: flwr.common.Metadata.dst_node_id:1::1 of -#, fuzzy -msgid ":py:obj:`ttl `\\" -msgstr ":py:obj:`ttl `\\" +#: flwr.common.EventType.capitalize:3 of +msgid "" +"More specifically, make the first character have upper case and the rest " +"lower case." +msgstr "" -#: ../../source/ref-api/flwr.common.MetricsRecord.rst:2 -#, fuzzy -msgid "MetricsRecord" -msgstr "MetricsRecord" +#: flwr.common.EventType.center:3 flwr.common.EventType.ljust:3 +#: flwr.common.EventType.rjust:3 of +msgid "Padding is done using the specified fill character (default is a space)." +msgstr "" -#: flwr.common.record.metricsrecord.MetricsRecord:1 of -#, fuzzy +#: flwr.common.EventType.count:1 of msgid "" -"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ [:py:class:" -"`str`, :py:obj:`~typing.Union`\\ [:py:class:`int`, :py:class:`float`, :py:" -"class:`~typing.List`\\ [:py:class:`int`], :py:class:`~typing.List`\\ [:py:" -"class:`float`]]]" +"Return the number of non-overlapping occurrences of substring sub in " +"string S[start:end]. Optional arguments start and end are interpreted as" +" in slice notation." msgstr "" -"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ [:py:class:" -"`str`, :py:obj:`~typing.Union`\\ [:py:class:`int`, :py:class:`float`, :py:" -"class:`~typing.List`\\ [:py:class:`int`], :py:class:`~typing.List`\\ [:py:" -"class:`float`]]]" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -#, fuzzy -msgid ":py:obj:`clear `\\ \\(\\)" -msgstr ":py:obj:`clear `\\ \\(\\)" +#: flwr.common.EventType.encode:3 of +msgid "encoding" +msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -#, fuzzy -msgid ":py:obj:`count_bytes `\\ \\(\\)" -msgstr ":py:obj:`count_bytes `\\ \\(\\)" +#: flwr.common.EventType.encode:4 of +msgid "The encoding in which to encode the string." +msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +#: flwr.common.EventType.encode:9 of #, fuzzy -msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" -msgstr ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" +msgid "errors" +msgstr "错误" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -#, fuzzy -msgid ":py:obj:`items `\\ \\(\\)" -msgstr ":py:obj:`items `\\ \\(\\)" +#: flwr.common.EventType.encode:6 of +msgid "" +"The error handling scheme to use for encoding errors. The default is " +"'strict' meaning that encoding errors raise a UnicodeEncodeError. Other " +"possible values are 'ignore', 'replace' and 'xmlcharrefreplace' as well " +"as any other name registered with codecs.register_error that can handle " +"UnicodeEncodeErrors." 
+msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -#, fuzzy -msgid ":py:obj:`keys `\\ \\(\\)" -msgstr ":py:obj:`keys `\\ \\(\\)" +#: flwr.common.EventType.endswith:1 of +msgid "" +"Return True if S ends with the specified suffix, False otherwise. With " +"optional start, test S beginning at that position. With optional end, " +"stop comparing S at that position. suffix can also be a tuple of strings " +"to try." +msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -#, fuzzy -msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" -msgstr ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" +#: flwr.common.EventType.expandtabs:3 of +msgid "If tabsize is not given, a tab size of 8 characters is assumed." +msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -#, fuzzy +#: flwr.common.EventType.find:1 flwr.common.EventType.index:1 of msgid "" -":py:obj:`update `\\ \\(\\[E\\, " -"\\]\\*\\*F\\)" +"Return the lowest index in S where substring sub is found, such that sub " +"is contained within S[start:end]. Optional arguments start and end are " +"interpreted as in slice notation." msgstr "" -":py:obj:`update `\\ \\(\\[E\\, " -"\\]\\*\\*F\\)" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -#, fuzzy -msgid ":py:obj:`values `\\ \\(\\)" -msgstr ":py:obj:`values `\\ \\(\\)" +#: flwr.common.EventType.find:5 flwr.common.EventType.rfind:5 of +msgid "Return -1 on failure." +msgstr "" -#: ../../source/ref-api/flwr.common.NDArray.rst:2 -#, fuzzy -msgid "NDArray" -msgstr "NDArray" +#: flwr.common.EventType.format:1 of +msgid "" +"Return a formatted version of S, using substitutions from args and " +"kwargs. The substitutions are identified by braces ('{' and '}')." +msgstr "" -#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 -#, fuzzy -msgid ":py:obj:`tensors `\\" -msgstr ":py:obj:`tensors `\\" +#: flwr.common.EventType.format_map:1 of +msgid "" +"Return a formatted version of S, using substitutions from mapping. The " +"substitutions are identified by braces ('{' and '}')." +msgstr "" -#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 -#, fuzzy -msgid ":py:obj:`tensor_type `\\" -msgstr ":py:obj:`tensor_type `\\" +#: flwr.common.EventType.index:5 flwr.common.EventType.rindex:5 of +msgid "Raises ValueError when the substring is not found." +msgstr "" -#: ../../source/ref-api/flwr.common.ParametersRecord.rst:2 -#, fuzzy -msgid "ParametersRecord" -msgstr "参数" +#: flwr.common.EventType.isalnum:3 of +msgid "" +"A string is alpha-numeric if all characters in the string are alpha-" +"numeric and there is at least one character in the string." +msgstr "" -#: flwr.common.record.parametersrecord.ParametersRecord:1 of -#, fuzzy +#: flwr.common.EventType.isalpha:3 of msgid "" -"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ [:py:class:" -"`str`, :py:class:`~flwr.common.record.parametersrecord.Array`]" +"A string is alphabetic if all characters in the string are alphabetic and" +" there is at least one character in the string." msgstr "" -"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ [:py:class:" -"`str`, :py:class:`~flwr.common.record.parametersrecord.Array`]" -#: flwr.common.record.parametersrecord.ParametersRecord:3 of -#, fuzzy +#: flwr.common.EventType.isascii:3 of msgid "" -"A dataclass storing named Arrays in order. This means that it holds entries " -"as an OrderedDict[str, Array]. ParametersRecord objects can be viewed as an " -"equivalent to PyTorch's state_dict, but holding serialised tensors instead." 
+"ASCII characters have code points in the range U+0000-U+007F. Empty " +"string is ASCII too." msgstr "" -"按顺序存储命名数组的数据类。这意味着它以 OrderedDict[str, Array] 的形式保存条" -"目。ParametersRecord 对象相当于 PyTorch 的 state_dict,但它保存的是序列化的张" -"量。" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -#, fuzzy -msgid ":py:obj:`clear `\\ \\(\\)" -msgstr ":py:obj:`clear `\\ \\(\\)" +#: flwr.common.EventType.isdecimal:3 of +msgid "" +"A string is a decimal string if all characters in the string are decimal " +"and there is at least one character in the string." +msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -#, fuzzy +#: flwr.common.EventType.isdigit:3 of msgid "" -":py:obj:`count_bytes `\\ \\(\\)" +"A string is a digit string if all characters in the string are digits and" +" there is at least one character in the string." msgstr "" -":py:obj:`count_bytes `\\ \\(\\)" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -#, fuzzy -msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" -msgstr ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" +#: flwr.common.EventType.isidentifier:3 of +msgid "" +"Call keyword.iskeyword(s) to test whether string s is a reserved " +"identifier, such as \"def\" or \"class\"." +msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -#, fuzzy -msgid ":py:obj:`items `\\ \\(\\)" -msgstr ":py:obj:`items `\\ \\(\\)" +#: flwr.common.EventType.islower:3 of +msgid "" +"A string is lowercase if all cased characters in the string are lowercase" +" and there is at least one cased character in the string." +msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -#, fuzzy -msgid ":py:obj:`keys `\\ \\(\\)" -msgstr ":py:obj:`keys `\\ \\(\\)" +#: flwr.common.EventType.isnumeric:3 of +msgid "" +"A string is numeric if all characters in the string are numeric and there" +" is at least one character in the string." +msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -#, fuzzy -msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" -msgstr ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" +#: flwr.common.EventType.isprintable:3 of +msgid "" +"A string is printable if all of its characters are considered printable " +"in repr() or if it is empty." +msgstr "" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -#, fuzzy +#: flwr.common.EventType.isspace:3 of msgid "" -":py:obj:`update `\\ \\(\\[E\\, " -"\\]\\*\\*F\\)" +"A string is whitespace if all characters in the string are whitespace and" +" there is at least one character in the string." msgstr "" -":py:obj:`update `\\ \\(\\[E\\, " -"\\]\\*\\*F\\)" -#: flwr.common.record.typeddict.TypedDict.clear:1::1 of -#, fuzzy -msgid ":py:obj:`values `\\ \\(\\)" -msgstr ":py:obj:`values `\\ \\(\\)" +#: flwr.common.EventType.istitle:3 of +msgid "" +"In a title-cased string, upper- and title-case characters may only follow" +" uncased characters and lowercase characters only cased ones." +msgstr "" -#: flwr.common.record.parametersrecord.ParametersRecord.count_bytes:3 of -#, fuzzy +#: flwr.common.EventType.isupper:3 of msgid "" -"Note that a small amount of Bytes might also be included in this counting " -"that correspond to metadata of the serialized object (e.g. of NumPy array) " -"needed for deseralization." +"A string is uppercase if all cased characters in the string are uppercase" +" and there is at least one cased character in the string." 
msgstr "" -"请注意,该计数中还可能包含少量字节,这些字节与序列化对象(如 NumPy 数组)的元" -"数据相对应,需要进行去eralization。" -#: ../../source/ref-api/flwr.common.ReconnectIns.rst:2 -#, fuzzy -msgid "ReconnectIns" -msgstr "启用 SSL 连接" +#: flwr.common.EventType.join:3 of +msgid "" +"The string whose method is called is inserted in between each given " +"string. The result is returned as a new string." +msgstr "" -#: ../../source/ref-api/flwr.common.ReconnectIns.rst:28::1 -#, fuzzy -msgid ":py:obj:`seconds `\\" -msgstr ":py:obj:`seconds `\\" +#: flwr.common.EventType.join:6 of +msgid "Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs'" +msgstr "" -#: ../../source/ref-api/flwr.common.RecordSet.rst:2 -#, fuzzy -msgid "RecordSet" -msgstr "RecordSet" +#: flwr.common.EventType.lstrip:3 flwr.common.EventType.rstrip:3 +#: flwr.common.EventType.strip:3 of +msgid "If chars is given and not None, remove characters in chars instead." +msgstr "" -#: flwr.common.RecordSet.configs_records:1::1 of -#, fuzzy -msgid ":py:obj:`configs_records `\\" -msgstr ":py:obj:`configs_records `\\" +#: flwr.common.EventType.maketrans:3 of +msgid "" +"If there is only one argument, it must be a dictionary mapping Unicode " +"ordinals (integers) or characters to Unicode ordinals, strings or None. " +"Character keys will be then converted to ordinals. If there are two " +"arguments, they must be strings of equal length, and in the resulting " +"dictionary, each character in x will be mapped to the character at the " +"same position in y. If there is a third argument, it must be a string, " +"whose characters will be mapped to None in the result." +msgstr "" -#: flwr.common.RecordSet.configs_records:1 -#: flwr.common.RecordSet.configs_records:1::1 of -#, fuzzy -msgid "Dictionary holding ConfigsRecord instances." -msgstr "包含 ConfigsRecord 实例的字典。" +#: flwr.common.EventType.partition:3 of +msgid "" +"This will search for the separator in the string. If the separator is " +"found, returns a 3-tuple containing the part before the separator, the " +"separator itself, and the part after it." +msgstr "" -#: flwr.common.RecordSet.configs_records:1::1 of -#, fuzzy -msgid ":py:obj:`metrics_records `\\" -msgstr ":py:obj:`metrics_records `\\" +#: flwr.common.EventType.partition:7 of +msgid "" +"If the separator is not found, returns a 3-tuple containing the original " +"string and two empty strings." +msgstr "" -#: flwr.common.RecordSet.configs_records:1::1 -#: flwr.common.RecordSet.metrics_records:1 of -#, fuzzy -msgid "Dictionary holding MetricsRecord instances." -msgstr "保存 MetricsRecord 实例的字典。" +#: flwr.common.EventType.removeprefix:3 of +msgid "" +"If the string starts with the prefix string, return string[len(prefix):]." +" Otherwise, return a copy of the original string." +msgstr "" -#: flwr.common.RecordSet.configs_records:1::1 of -#, fuzzy +#: flwr.common.EventType.removesuffix:3 of msgid "" -":py:obj:`parameters_records `\\" +"If the string ends with the suffix string and that suffix is not empty, " +"return string[:-len(suffix)]. Otherwise, return a copy of the original " +"string." msgstr "" -":py:obj:`parameters_records `\\" -#: flwr.common.RecordSet.configs_records:1::1 -#: flwr.common.RecordSet.parameters_records:1 of +#: flwr.common.EventType.replace:5 of #, fuzzy -msgid "Dictionary holding ParametersRecord instances." 
-msgstr "存放 ParametersRecord 实例的字典。" +msgid "count" +msgstr "背景" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:2 -#, fuzzy -msgid "ServerMessage" -msgstr "服务器端" +#: flwr.common.EventType.replace:4 of +msgid "" +"Maximum number of occurrences to replace. -1 (the default value) means " +"replace all occurrences." +msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 -#, fuzzy -msgid ":py:obj:`evaluate_ins `\\" -msgstr ":py:obj:`evaluate_ins `\\" +#: flwr.common.EventType.replace:7 of +msgid "" +"If the optional argument count is given, only the first count occurrences" +" are replaced." +msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 -#, fuzzy -msgid ":py:obj:`fit_ins `\\" -msgstr ":py:obj:`fit_ins `\\" +#: flwr.common.EventType.rfind:1 flwr.common.EventType.rindex:1 of +msgid "" +"Return the highest index in S where substring sub is found, such that sub" +" is contained within S[start:end]. Optional arguments start and end are " +"interpreted as in slice notation." +msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 -#, fuzzy +#: flwr.common.EventType.rpartition:3 of msgid "" -":py:obj:`get_parameters_ins `\\" +"This will search for the separator in the string, starting at the end. If" +" the separator is found, returns a 3-tuple containing the part before the" +" separator, the separator itself, and the part after it." msgstr "" -":py:obj:`get_parameters_ins `\\" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 -#, fuzzy +#: flwr.common.EventType.rpartition:7 of msgid "" -":py:obj:`get_properties_ins `\\" +"If the separator is not found, returns a 3-tuple containing two empty " +"strings and the original string." msgstr "" -":py:obj:`get_properties_ins `\\" -#: ../../source/ref-api/flwr.common.Status.rst:2 -#, fuzzy -msgid "Status" -msgstr "客户端状态。" +#: flwr.common.EventType.rsplit:7 flwr.common.EventType.split:7 of +msgid "sep" +msgstr "" -#: ../../source/ref-api/flwr.common.Status.rst:29::1 -#, fuzzy -msgid ":py:obj:`code `\\" -msgstr ":py:obj:`code `\\" +#: flwr.common.EventType.rsplit:4 flwr.common.EventType.split:4 of +msgid "The separator used to split the string." +msgstr "" -#: ../../source/ref-api/flwr.common.Status.rst:29::1 -#, fuzzy -msgid ":py:obj:`message `\\" -msgstr ":py:obj:`message `\\" +#: flwr.common.EventType.rsplit:6 flwr.common.EventType.split:6 of +msgid "" +"When set to None (the default value), will split on any whitespace " +"character (including \\\\n \\\\r \\\\t \\\\f and spaces) and will discard" +" empty strings from the result." +msgstr "" -#: ../../source/ref-api/flwr.common.array_from_numpy.rst:2 -#, fuzzy -msgid "array\\_from\\_numpy" -msgstr "array\\_from\\_numpy" +#: flwr.common.EventType.rsplit:11 flwr.common.EventType.split:11 of +msgid "maxsplit" +msgstr "" -#: ../../source/ref-api/flwr.common.bytes_to_ndarray.rst:2 -#, fuzzy -msgid "bytes\\_to\\_ndarray" -msgstr "bytes\\_to\\_ndarray" - -#: ../../source/ref-api/flwr.common.configure.rst:2 -#, fuzzy -msgid "configure" -msgstr "配置日志记录" +#: flwr.common.EventType.rsplit:10 flwr.common.EventType.split:10 of +msgid "" +"Maximum number of splits (starting from the left). -1 (the default value)" +" means no limit." +msgstr "" -#: ../../source/ref-api/flwr.common.event.rst:2 -#, fuzzy -msgid "event" -msgstr "事件" +#: flwr.common.EventType.rsplit:13 of +msgid "Splitting starts at the end of the string and works to the front." 
+msgstr "" -#: ../../source/ref-api/flwr.common.log.rst:2 -#, fuzzy -msgid "log" -msgstr "登录" +#: flwr.common.EventType.split:13 of +msgid "" +"Note, str.split() is mainly useful for data that has been intentionally " +"delimited. With natural text that includes punctuation, consider using " +"the regular expression module." +msgstr "" -#: logging.Logger.log:3 of +#: flwr.common.EventType.splitlines:3 of msgid "" -"To pass exception information, use the keyword argument exc_info with a true " -"value, e.g." -msgstr "要传递异常信息,请使用带 true 值的关键字参数 exc_info,例如。" +"Line breaks are not included in the resulting list unless keepends is " +"given and true." +msgstr "" -#: logging.Logger.log:6 of -#, python-format -msgid "logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)" +#: flwr.common.EventType.startswith:1 of +msgid "" +"Return True if S starts with the specified prefix, False otherwise. With " +"optional start, test S beginning at that position. With optional end, " +"stop comparing S at that position. prefix can also be a tuple of strings " +"to try." msgstr "" -"logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)" -#: ../../source/ref-api/flwr.common.ndarray_to_bytes.rst:2 -#, fuzzy -msgid "ndarray\\_to\\_bytes" -msgstr "ndarray\\_to\\_bytes" +#: flwr.common.EventType.title:3 of +msgid "" +"More specifically, words start with uppercased characters and all " +"remaining cased characters have lower case." +msgstr "" -#: ../../source/ref-api/flwr.common.ndarrays_to_parameters.rst:2 +#: flwr.common.EventType.translate:5 of #, fuzzy -msgid "ndarrays\\_to\\_parameters" -msgstr "ndarrays\\_to\\_parameters" +msgid "table" +msgstr "数据库" -#: ../../source/ref-api/flwr.common.now.rst:2 -#, fuzzy -msgid "now" -msgstr "现在" +#: flwr.common.EventType.translate:4 of +msgid "" +"Translation table, which must be a mapping of Unicode ordinals to Unicode" +" ordinals, strings, or None." +msgstr "" -#: ../../source/ref-api/flwr.common.parameters_to_ndarrays.rst:2 -#, fuzzy -msgid "parameters\\_to\\_ndarrays" -msgstr "parameters\\_to\\_ndarrays" +#: flwr.common.EventType.translate:7 of +msgid "" +"The table must implement lookup/indexing via __getitem__, for instance a " +"dictionary or list. If this operation raises LookupError, the character " +"is left untouched. Characters mapped to None are deleted." +msgstr "" -#: ../../source/ref-api/flwr.server.rst:2 -msgid "server" -msgstr "服务器" +#: flwr.common.EventType.zfill:3 of +msgid "The string is never truncated." +msgstr "" -#: ../../source/ref-api/flwr.server.rst:27::1 +#: ../../source/ref-api/flwr.common.FitIns.rst:2 #, fuzzy -msgid ":py:obj:`run_driver_api `\\ \\(\\)" -msgstr ":py:obj:`run_driver_api `\\ \\(\\)" +msgid "FitIns" +msgstr "FitIns" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.app.run_driver_api:1 of +#: ../../source/ref-api/flwr.common.FitIns.rst:29::1 #, fuzzy -msgid "Run Flower server (Driver API)." -msgstr "flower-driver-api" +msgid ":py:obj:`parameters `\\" +msgstr ":py:obj:`parameters `\\" -#: ../../source/ref-api/flwr.server.rst:27::1 +#: ../../source/ref-api/flwr.common.FitIns.rst:29::1 #, fuzzy -msgid ":py:obj:`run_fleet_api `\\ \\(\\)" -msgstr ":py:obj:`run_fleet_api `\\ \\(\\)" +msgid ":py:obj:`config `\\" +msgstr ":py:obj:`config `\\" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.app.run_fleet_api:1 of +#: ../../source/ref-api/flwr.common.FitRes.rst:2 #, fuzzy -msgid "Run Flower server (Fleet API)." 
-msgstr "Flower 服务器。" +msgid "FitRes" +msgstr "FitRes" -#: ../../source/ref-api/flwr.server.rst:27::1 +#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 #, fuzzy -msgid ":py:obj:`run_server_app `\\ \\(\\)" -msgstr ":py:obj:`run_server_app `\\ \\(\\)" +msgid ":py:obj:`status `\\" +msgstr ":py:obj:`status `\\" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.run_serverapp.run_server_app:1 of +#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 #, fuzzy -msgid "Run Flower server app." -msgstr "Flower 服务器。" +msgid ":py:obj:`parameters `\\" +msgstr ":py:obj:`parameters `\\" -#: ../../source/ref-api/flwr.server.rst:27::1 +#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 #, fuzzy -msgid ":py:obj:`run_superlink `\\ \\(\\)" -msgstr ":py:obj:`run_superlink `\\ \\(\\)" +msgid ":py:obj:`num_examples `\\" +msgstr ":py:obj:`num_examples `\\" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.app.run_superlink:1 of +#: ../../source/ref-api/flwr.common.FitRes.rst:31::1 #, fuzzy -msgid "Run Flower server (Driver API and Fleet API)." -msgstr "运行 Flower 服务器(Driver API 和 Fleet API)。" +msgid ":py:obj:`metrics `\\" +msgstr ":py:obj:`metrics `\\" -#: ../../source/ref-api/flwr.server.rst:27::1 +#: ../../source/ref-api/flwr.common.GetParametersIns.rst:2 #, fuzzy -msgid "" -":py:obj:`start_driver `\\ \\(\\*\\[\\, " -"server\\_address\\, server\\, ...\\]\\)" -msgstr "" -":py:obj:`start_driver `\\ \\(\\*\\[\\, " -"server\\_address\\, server\\, ...\\]\\)" +msgid "GetParametersIns" +msgstr "参数" -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.compat.app.start_driver:1 of +#: ../../source/ref-api/flwr.common.GetParametersIns.rst:28::1 #, fuzzy -msgid "Start a Flower Driver API server." -msgstr "启动基于 Ray 的Flower模拟服务器。" +msgid ":py:obj:`config `\\" +msgstr ":py:obj:`config `\\" -#: ../../source/ref-api/flwr.server.rst:27::1 +#: ../../source/ref-api/flwr.common.GetParametersRes.rst:2 #, fuzzy -msgid "" -":py:obj:`start_server `\\ \\(\\*\\[\\, " -"server\\_address\\, server\\, ...\\]\\)" -msgstr "" -":py:obj:`start_server `\\ \\(\\*\\[\\, " -"server\\_address\\, server\\, ...\\]\\)" - -#: ../../source/ref-api/flwr.server.rst:27::1 -#: flwr.server.app.start_server:1 of -msgid "Start a Flower server using the gRPC transport layer." -msgstr "使用 gRPC 传输层启动 Flower 服务器。" +msgid "GetParametersRes" +msgstr "参数" -#: ../../source/ref-api/flwr.server.rst:41::1 +#: ../../source/ref-api/flwr.common.GetParametersRes.rst:29::1 #, fuzzy -msgid ":py:obj:`ClientManager `\\ \\(\\)" -msgstr ":py:obj:`ClientManager `\\ \\(\\)" +msgid ":py:obj:`status `\\" +msgstr ":py:obj:`status `\\" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.client_manager.ClientManager:1 of +#: ../../source/ref-api/flwr.common.GetParametersRes.rst:29::1 #, fuzzy -msgid "Abstract base class for managing Flower clients." -msgstr "Flower 客户端的抽象基类。" +msgid ":py:obj:`parameters `\\" +msgstr ":py:obj:`parameters `\\" -#: ../../source/ref-api/flwr.server.rst:41::1 +#: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:2 #, fuzzy -msgid "" -":py:obj:`Driver `\\ " -"\\(\\[driver\\_service\\_address\\, ...\\]\\)" -msgstr "" -"Flower 1.0: ``start_server(..., config=flwr.server." -"ServerConfig(num_rounds=3, round_timeout=600.0), ...)``" +msgid "GetPropertiesIns" +msgstr "GetPropertiesIns" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.driver.driver.Driver:1 of +#: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:28::1 #, fuzzy -msgid "`Driver` class provides an interface to the Driver API." 
-msgstr "`Driver` 类为驱动程序 API 提供了一个接口。" +msgid ":py:obj:`config `\\" +msgstr ":py:obj:`config `\\" -#: ../../source/ref-api/flwr.server.rst:41::1 +#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:2 #, fuzzy -msgid ":py:obj:`History `\\ \\(\\)" -msgstr ":py:obj:`History `\\ \\(\\)" +msgid "GetPropertiesRes" +msgstr "GetPropertiesRes" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.history.History:1 of +#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:29::1 #, fuzzy -msgid "History class for training and/or evaluation metrics collection." -msgstr "**hist** -- 包含训练和评估指标的对象。" +msgid ":py:obj:`status `\\" +msgstr ":py:obj:`status `\\" -#: ../../source/ref-api/flwr.server.rst:41::1 +#: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:29::1 #, fuzzy -msgid "" -":py:obj:`LegacyContext `\\ \\(state\\[\\, " -"config\\, strategy\\, ...\\]\\)" -msgstr "" -":py:obj:`LegacyContext `\\ \\(state\\[\\, " -"config\\, strategy\\, ...\\]\\)" +msgid ":py:obj:`properties `\\" +msgstr ":py:obj:`properties `\\" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.compat.legacy_context.LegacyContext:1 of +#: ../../source/ref-api/flwr.common.Message.rst:2 #, fuzzy -msgid "Legacy Context." -msgstr "传承背景。" +msgid "Message" +msgstr "服务器端" -#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.common.Message.content:1::1 flwr.common.Message.metadata:1 +#: flwr.common.message.Message:3 of #, fuzzy -msgid "" -":py:obj:`Server `\\ \\(\\*\\, client\\_manager\\[\\, " -"strategy\\]\\)" -msgstr "" -":py:obj:`Server `\\ \\(\\*\\, client\\_manager\\[\\, " -"strategy\\]\\)" +msgid "A dataclass including information about the message to be executed." +msgstr "数据类型,包括要执行的信息的相关信息。" -#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.common.message.Message:5 of #, fuzzy msgid "" -":py:obj:`ServerApp `\\ \\(\\[server\\, config\\, " -"strategy\\, ...\\]\\)" -msgstr "server.strategy.Strategy" +"Holds records either sent by another entity (e.g. sent by the server-side" +" logic to a client, or vice-versa) or that will be sent to it." +msgstr "保存由其他实体发送的记录(如由服务器端逻辑发送到客户端,反之亦然)或将发送到该实体的记录。" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.server_app.ServerApp:1 of +#: flwr.common.message.Message:8 of #, fuzzy -msgid "Flower ServerApp." -msgstr "Flower 服务器。" +msgid "" +"A dataclass that captures information about an error that took place when" +" processing another message." +msgstr "数据类,用于捕捉处理其他报文时发生的错误信息。" -#: ../../source/ref-api/flwr.server.rst:41::1 +#: ../../source/ref-api/flwr.common.Message.rst:35::1 #, fuzzy msgid "" -":py:obj:`ServerConfig `\\ \\(\\[num\\_rounds\\, " -"round\\_timeout\\]\\)" +":py:obj:`create_error_reply `\\ " +"\\(error\\[\\, ttl\\]\\)" msgstr "" -"Flower 1.0: ``start_server(..., config=flwr.server." -"ServerConfig(num_rounds=3, round_timeout=600.0), ...)``" +":py:obj:`create_error_reply `\\ " +"\\(error\\, ttl\\)" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.server_config.ServerConfig:1 of +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.create_error_reply:1 of #, fuzzy -msgid "Flower server config." -msgstr "Flower 服务器。" +msgid "Construct a reply message indicating an error happened." 
+msgstr "构建一条回复信息,说明发生了错误。" -#: ../../source/ref-api/flwr.server.rst:41::1 +#: ../../source/ref-api/flwr.common.Message.rst:35::1 #, fuzzy msgid "" -":py:obj:`SimpleClientManager `\\ \\(\\)" +":py:obj:`create_reply `\\ " +"\\(content\\[\\, ttl\\]\\)" msgstr "" -":py:obj:`SimpleClientManager `\\ \\(\\)" +":py:obj:`create_reply `\\ \\(content\\," +" ttl\\)" -#: ../../source/ref-api/flwr.server.rst:41::1 -#: flwr.server.client_manager.SimpleClientManager:1 of +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.create_reply:1 of #, fuzzy -msgid "Provides a pool of available clients." -msgstr "使用部分可用客户进行评估。" +msgid "Create a reply to this message with specified content and TTL." +msgstr "以指定的内容和 TTL 创建对该信息的回复。" -#: ../../source/ref-api/flwr.server.rst:60::1 +#: ../../source/ref-api/flwr.common.Message.rst:35::1 #, fuzzy -msgid ":py:obj:`flwr.server.strategy `\\" -msgstr "server.strategy.Strategy" - -#: ../../source/ref-api/flwr.server.rst:60::1 -#: flwr.server.strategy:1 of -msgid "Contains the strategy abstraction and different implementations." -msgstr "包含策略抽象和不同的实现方法。" +msgid ":py:obj:`has_content `\\ \\(\\)" +msgstr ":py:obj:`has_content `\\ \\(\\)" -#: ../../source/ref-api/flwr.server.rst:60::1 +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.has_content:1 of #, fuzzy -msgid ":py:obj:`flwr.server.workflow `\\" -msgstr "server.strategy.Strategy" +msgid "Return True if message has content, else False." +msgstr "如果信息有内容,则返回 True,否则返回 False。" -#: ../../source/ref-api/flwr.server.rst:60::1 -#: flwr.server.workflow:1 of +#: ../../source/ref-api/flwr.common.Message.rst:35::1 #, fuzzy -msgid "Workflows." -msgstr "工作流程" +msgid ":py:obj:`has_error `\\ \\(\\)" +msgstr ":py:obj:`has_error `\\ \\(\\)" -#: ../../source/ref-api/flwr.server.ClientManager.rst:2 +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.has_error:1 of #, fuzzy -msgid "ClientManager" -msgstr "客户端" +msgid "Return True if message has an error, else False." +msgstr "如果信息有错误,则返回 True,否则返回 False。" -#: flwr.server.client_manager.ClientManager.all:1::1 of +#: flwr.common.Message.content:1::1 of #, fuzzy -msgid ":py:obj:`all `\\ \\(\\)" -msgstr ":py:obj:`all `\\ \\(\\)" +msgid ":py:obj:`content `\\" +msgstr ":py:obj:`content `\\" -#: flwr.server.client_manager.ClientManager.all:1 -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.all:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +#: flwr.common.Message.content:1 flwr.common.Message.content:1::1 +#: of #, fuzzy -msgid "Return all available clients." -msgstr "返回所有可用客户。" +msgid "The content of this message." +msgstr "评估客户端的反应。" -#: flwr.server.client_manager.ClientManager.all:1::1 of +#: flwr.common.Message.content:1::1 of #, fuzzy -msgid "" -":py:obj:`num_available `\\ \\(\\)" -msgstr "" -":py:obj:`num_available `\\ \\(\\)" +msgid ":py:obj:`error `\\" +msgstr ":py:obj:`error `\\" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.num_available:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.num_available:1 of +#: flwr.common.Message.content:1::1 flwr.common.Message.error:1 of #, fuzzy -msgid "Return the number of available clients." -msgstr "返回样本大小和所需的可用客户数量。" +msgid "Error captured by this message." 
+msgstr "该信息捕捉到的错误。" -#: flwr.server.client_manager.ClientManager.all:1::1 of +#: flwr.common.Message.content:1::1 of #, fuzzy -msgid ":py:obj:`register `\\ \\(client\\)" -msgstr ":py:obj:`register `\\ \\(client\\)" +msgid ":py:obj:`metadata `\\" +msgstr ":py:obj:`metadata `\\" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.register:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.register:1 of +#: flwr.common.message.Message.create_error_reply:3 of #, fuzzy -msgid "Register Flower ClientProxy instance." -msgstr "注册 Flower ClientProxy 实例。" +msgid "The error that was encountered." +msgstr "遇到的错误。" -#: flwr.server.client_manager.ClientManager.all:1::1 of -#, fuzzy +#: flwr.common.message.Message.create_error_reply:5 +#: flwr.common.message.Message.create_reply:9 of msgid "" -":py:obj:`sample `\\ \\(num\\_clients\\[\\, " -"min\\_num\\_clients\\, criterion\\]\\)" +"Time-to-live for this message in seconds. If unset, it will be set based " +"on the remaining time for the received message before it expires. This " +"follows the equation: ttl = msg.meta.ttl - (reply.meta.created_at - " +"msg.meta.created_at)" msgstr "" -":py:obj:`sample `\\ \\(num\\_clients\\[\\, " -"min\\_num\\_clients\\, criterion\\]\\)" - -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.sample:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.sample:1 of -#, fuzzy -msgid "Sample a number of Flower ClientProxy instances." -msgstr "取样若干 Flower ClientProxy 实例。" -#: flwr.server.client_manager.ClientManager.all:1::1 of -#, fuzzy +#: flwr.common.message.Message.create_error_reply:5 +#: flwr.common.message.Message.create_reply:9 of msgid "" -":py:obj:`unregister `\\ \\(client\\)" +"Time-to-live for this message in seconds. If unset, it will be set based " +"on the remaining time for the received message before it expires. This " +"follows the equation:" msgstr "" -":py:obj:`unregister `\\ \\(client\\)" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.unregister:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.unregister:1 of -#, fuzzy -msgid "Unregister Flower ClientProxy instance." -msgstr "取消注册 Flower ClientProxy 实例。" +#: flwr.common.message.Message.create_error_reply:9 +#: flwr.common.message.Message.create_reply:13 of +msgid "ttl = msg.meta.ttl - (reply.meta.created_at - msg.meta.created_at)" +msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of +#: flwr.common.message.Message.create_reply:3 of #, fuzzy msgid "" -":py:obj:`wait_for `\\ " -"\\(num\\_clients\\, timeout\\)" +"The method generates a new `Message` as a reply to this message. It " +"inherits 'run_id', 'src_node_id', 'dst_node_id', and 'message_type' from " +"this message and sets 'reply_to_message' to the ID of this message." 
msgstr "" -":py:obj:`wait_for `\\ " -"\\(num\\_clients\\, timeout\\)" +"该方法会生成一条新的 \"信息\",作为对该信息的回复。该方法继承了该消息的 " +"\"run_id\"、\"src_node_id\"、\"dst_node_id \"和 \"message_type\",并将 " +"\"reply_to_message \"设置为该消息的 ID。" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.wait_for:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.wait_for:1 of +#: flwr.common.message.Message.create_reply:7 of #, fuzzy -msgid "Wait until at least `num_clients` are available." -msgstr "等待至少 `num_clients` 可用。" +msgid "The content for the reply message." +msgstr "回复信息的内容。" -#: flwr.server.client_manager.ClientManager.num_available:3 -#: flwr.server.client_manager.SimpleClientManager.num_available:3 of +#: flwr.common.message.Message.create_reply:16 of #, fuzzy -msgid "**num_available** -- The number of currently available clients." -msgstr "**num_available** -- 当前可用客户端的数量。" +msgid "A new `Message` instance representing the reply." +msgstr "代表回复的新的 `Message` 实例。" -#: flwr.server.client_manager.ClientManager.register:6 -#: flwr.server.client_manager.SimpleClientManager.register:6 of +#: ../../source/ref-api/flwr.common.MessageType.rst:2 #, fuzzy -msgid "" -"**success** -- Indicating if registration was successful. False if " -"ClientProxy is already registered or can not be registered for any reason." -msgstr "" -"**success** -- 表示注册是否成功。如果 ClientProxy 已注册或因故无法注册,则为 " -"False。" +msgid "MessageType" +msgstr "返回类型" -#: flwr.server.client_manager.ClientManager.unregister:3 -#: flwr.server.client_manager.SimpleClientManager.unregister:3 of +#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 #, fuzzy -msgid "This method is idempotent." -msgstr "这种方法是幂等的。" +msgid ":py:obj:`EVALUATE `\\" +msgstr ":py:obj:`EVALUATE `\\" -#: ../../source/ref-api/flwr.server.Driver.rst:2 +#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 #, fuzzy -msgid "Driver" -msgstr "服务器" +msgid ":py:obj:`QUERY `\\" +msgstr ":py:obj:`QUERY `\\" -#: flwr.server.driver.driver.Driver:3 of +#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 #, fuzzy -msgid "" -"The IPv4 or IPv6 address of the Driver API server. Defaults to " -"`\"[::]:9091\"`." -msgstr "服务器的 IPv4 或 IPv6 地址。默认为 `\"[::]:8080\"。" +msgid ":py:obj:`TRAIN `\\" +msgstr ":py:obj:`TRAIN `\\" -#: flwr.server.app.start_server:28 flwr.server.driver.driver.Driver:6 of -msgid "" -"Tuple containing root certificate, server certificate, and private key to " -"start a secure SSL-enabled server. The tuple is expected to have three bytes " -"elements in the following order: * CA certificate. * server " -"certificate. * server private key." -msgstr "" -"包含根证书、服务器证书和私钥的元组,用于启动启用 SSL 的安全服务器。元组应按以" -"下顺序包含三个字节元素: * CA 证书,* 服务器证书, * 服务器私钥。" +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:2 +#, fuzzy +msgid "MessageTypeLegacy" +msgstr "MessageTypeLegacy" -#: flwr.server.app.start_server:28 flwr.server.driver.driver.Driver:6 of -msgid "" -"Tuple containing root certificate, server certificate, and private key to " -"start a secure SSL-enabled server. The tuple is expected to have three bytes " -"elements in the following order:" -msgstr "" -"包含根证书、服务器证书和私钥的元组,用于启动启用 SSL 的安全服务器。元组应按以" -"下顺序包含三个字节元素:" +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 +#, fuzzy +msgid ":py:obj:`GET_PARAMETERS `\\" +msgstr ":py:obj:`GET_PARAMETERS `\\" -#: flwr.server.app.start_server:32 flwr.server.driver.driver.Driver:10 of -msgid "CA certificate." 
-msgstr "CA 证书。" +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 +#, fuzzy +msgid ":py:obj:`GET_PROPERTIES `\\" +msgstr ":py:obj:`GET_PROPERTIES `\\" -#: flwr.server.app.start_server:33 flwr.server.driver.driver.Driver:11 of -msgid "server certificate." -msgstr "服务器证书。" +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.run_id:1 flwr.common.message.Metadata:3 of +#, fuzzy +msgid "An identifier for the current run." +msgstr "当前运行的标识符。" -#: flwr.server.app.start_server:34 flwr.server.driver.driver.Driver:12 of -msgid "server private key." -msgstr "服务器私人密钥。" +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.message_id:1 flwr.common.message.Metadata:5 of +#, fuzzy +msgid "An identifier for the current message." +msgstr "当前信息的标识符。" -#: flwr.server.driver.driver.Driver.close:1::1 of +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.src_node_id:1 flwr.common.message.Metadata:7 of #, fuzzy -msgid ":py:obj:`close `\\ \\(\\)" -msgstr "server.strategy.Strategy" +msgid "An identifier for the node sending this message." +msgstr "发送此信息的节点的标识符。" -#: flwr.server.driver.driver.Driver.close:1 -#: flwr.server.driver.driver.Driver.close:1::1 of +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.dst_node_id:1 flwr.common.message.Metadata:9 of #, fuzzy -msgid "Disconnect from the SuperLink if connected." -msgstr "如果已连接,请断开与超级链接的连接。" +msgid "An identifier for the node receiving this message." +msgstr "接收此信息的节点的标识符。" -#: flwr.server.driver.driver.Driver.close:1::1 of +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.reply_to_message:1 flwr.common.message.Metadata:11 of #, fuzzy -msgid "" -":py:obj:`create_message `\\ \\(content\\, " -"message\\_type\\, ...\\)" -msgstr "" -":py:obj:`create_message `\\ \\(content\\, " -"message\\_type\\, ...\\)" +msgid "An identifier for the message this message replies to." +msgstr "该信息回复的信息的标识符。" -#: flwr.server.driver.driver.Driver.close:1::1 -#: flwr.server.driver.driver.Driver.create_message:1 of +#: flwr.common.message.Metadata:13 of #, fuzzy -msgid "Create a new message with specified parameters." -msgstr "使用指定参数创建新信息。" +msgid "" +"An identifier for grouping messages. In some settings, this is used as " +"the FL round." +msgstr "用于分组报文的标识符。在某些设置中,它被用作 FL 轮。" -#: flwr.server.driver.driver.Driver.close:1::1 of +#: flwr.common.message.Metadata:16 of #, fuzzy -msgid ":py:obj:`get_node_ids `\\ \\(\\)" -msgstr ":py:obj:`get_node_ids `\\ \\(\\)" +msgid "Time-to-live for this message in seconds." +msgstr "该信息的有效时间。" -#: flwr.server.driver.driver.Driver.close:1::1 -#: flwr.server.driver.driver.Driver.get_node_ids:1 of +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.message_type:1 flwr.common.message.Metadata:18 of #, fuzzy -msgid "Get node IDs." -msgstr "获取节点 ID。" +msgid "A string that encodes the action to be executed on the receiving end." +msgstr "编码接收端要执行的操作的字符串。" -#: flwr.server.driver.driver.Driver.close:1::1 of +#: flwr.common.message.Metadata:21 of #, fuzzy msgid "" -":py:obj:`pull_messages `\\ " -"\\(message\\_ids\\)" -msgstr "" -":py:obj:`pull_messages `\\ " -"\\(message\\_ids\\)" +"An identifier that can be used when loading a particular data partition " +"for a ClientApp. Making use of this identifier is more relevant when " +"conducting simulations." 
+msgstr "为 ClientApp 加载特定数据分区时可使用的标识符。在进行模拟时,使用该标识符更有意义。" -#: flwr.server.driver.driver.Driver.close:1::1 -#: flwr.server.driver.driver.Driver.pull_messages:1 of +#: flwr.common.Metadata.created_at:1::1 of #, fuzzy -msgid "Pull messages based on message IDs." -msgstr "根据信息 ID 提取信息。" +msgid ":py:obj:`created_at `\\" +msgstr ":py:obj:`ttl `\\" -#: flwr.server.driver.driver.Driver.close:1::1 of -#, fuzzy -msgid "" -":py:obj:`push_messages `\\ \\(messages\\)" +#: flwr.common.Metadata.created_at:1 +#: flwr.common.Metadata.created_at:1::1 of +msgid "Unix timestamp when the message was created." msgstr "" -":py:obj:`push_messages `\\ \\(messages\\)" -#: flwr.server.driver.driver.Driver.close:1::1 -#: flwr.server.driver.driver.Driver.push_messages:1 of +#: flwr.common.Metadata.created_at:1::1 of #, fuzzy -msgid "Push messages to specified node IDs." -msgstr "向指定的节点 ID 推送信息。" +msgid ":py:obj:`dst_node_id `\\" +msgstr ":py:obj:`dst_node_id `\\" -#: flwr.server.driver.driver.Driver.close:1::1 of +#: flwr.common.Metadata.created_at:1::1 of #, fuzzy -msgid "" -":py:obj:`send_and_receive `\\ " -"\\(messages\\, \\*\\[\\, timeout\\]\\)" -msgstr "" -"Flower 1.0: ``start_server(..., config=flwr.server." -"ServerConfig(num_rounds=3, round_timeout=600.0), ...)``" +msgid ":py:obj:`group_id `\\" +msgstr ":py:obj:`group_id `\\" -#: flwr.server.driver.driver.Driver.close:1::1 -#: flwr.server.driver.driver.Driver.send_and_receive:1 of +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.group_id:1 of #, fuzzy -msgid "Push messages to specified node IDs and pull the reply messages." -msgstr "向指定的节点 ID 推送信息并提取回复信息。" +msgid "An identifier for grouping messages." +msgstr "用于分组信息的标识符。" -#: flwr.server.driver.driver.Driver.create_message:3 of +#: flwr.common.Metadata.created_at:1::1 of #, fuzzy -msgid "" -"This method constructs a new `Message` with given content and metadata. The " -"`run_id` and `src_node_id` will be set automatically." -msgstr "" -"本方法使用给定的内容和元数据构建新的 `Message` 。run_id \"和 \"src_node_id " -"\"将自动设置。" +msgid ":py:obj:`message_id `\\" +msgstr ":py:obj:`message_id `\\" -#: flwr.server.driver.driver.Driver.create_message:6 of +#: flwr.common.Metadata.created_at:1::1 of #, fuzzy -msgid "" -"The content for the new message. This holds records that are to be sent to " -"the destination node." -msgstr "新信息的内容。其中包含要发送到目的节点的记录。" +msgid ":py:obj:`message_type `\\" +msgstr ":py:obj:`message_type `\\" -#: flwr.server.driver.driver.Driver.create_message:9 of +#: flwr.common.Metadata.created_at:1::1 of #, fuzzy -msgid "" -"The type of the message, defining the action to be executed on the receiving " -"end." -msgstr "信息类型,定义接收端要执行的操作。" +msgid ":py:obj:`partition_id `\\" +msgstr ":py:obj:`partition_id `\\" -#: flwr.server.driver.driver.Driver.create_message:12 of +#: flwr.common.Metadata.created_at:1::1 +#: flwr.common.Metadata.partition_id:1 of #, fuzzy -msgid "The ID of the destination node to which the message is being sent." -msgstr "信息发送目的地节点的 ID。" +msgid "An identifier telling which data partition a ClientApp should use." +msgstr "告诉 ClientApp 应使用哪个数据分区的标识符。" -#: flwr.server.driver.driver.Driver.create_message:14 of +#: flwr.common.Metadata.created_at:1::1 of #, fuzzy -msgid "" -"The ID of the group to which this message is associated. In some settings, " -"this is used as the FL round." 
-msgstr "与该信息相关联的组的 ID。在某些设置中,它被用作 FL 轮。" +msgid ":py:obj:`reply_to_message `\\" +msgstr ":py:obj:`reply_to_message `\\" -#: flwr.server.driver.driver.Driver.create_message:17 of +#: flwr.common.Metadata.created_at:1::1 of #, fuzzy -msgid "" -"Time-to-live for the round trip of this message, i.e., the time from sending " -"this message to receiving a reply. It specifies the duration for which the " -"message and its potential reply are considered valid." -msgstr "" -"此报文往返的有效时间,即从发送此报文到收到回复的时间。它规定了信息及其潜在回" -"复被视为有效的持续时间。" +msgid ":py:obj:`run_id `\\" +msgstr ":py:obj:`run_id `\\" -#: flwr.server.driver.driver.Driver.create_message:22 of +#: flwr.common.Metadata.created_at:1::1 of #, fuzzy -msgid "" -"**message** -- A new `Message` instance with the specified content and " -"metadata." -msgstr "**message** -- 具有指定内容和元数据的新 \"信息 \"实例。" +msgid ":py:obj:`src_node_id `\\" +msgstr ":py:obj:`src_node_id `\\" -#: flwr.server.driver.driver.Driver.pull_messages:3 of +#: flwr.common.Metadata.created_at:1::1 of #, fuzzy -msgid "" -"This method is used to collect messages from the SuperLink that correspond " -"to a set of given message IDs." -msgstr "该方法用于从超级链接中收集与一组给定消息 ID 相对应的消息。" +msgid ":py:obj:`ttl `\\" +msgstr ":py:obj:`ttl `\\" -#: flwr.server.driver.driver.Driver.pull_messages:6 of +#: flwr.common.Metadata.created_at:1::1 flwr.common.Metadata.ttl:1 +#: of #, fuzzy -msgid "" -"An iterable of message IDs for which reply messages are to be retrieved." -msgstr "要检索回复信息的信息 ID 的可迭代项。" +msgid "Time-to-live for this message." +msgstr "该信息的有效时间。" -#: flwr.server.driver.driver.Driver.pull_messages:9 of +#: ../../source/ref-api/flwr.common.MetricsRecord.rst:2 #, fuzzy -msgid "**messages** -- An iterable of messages received." -msgstr "**messages** -- 收到的信息迭代。" +msgid "MetricsRecord" +msgstr "MetricsRecord" -#: flwr.server.driver.driver.Driver.push_messages:3 of +#: flwr.common.record.metricsrecord.MetricsRecord:1 of #, fuzzy msgid "" -"This method takes an iterable of messages and sends each message to the node " -"specified in `dst_node_id`." +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:class:`int` | :py:class:`float` | " +":py:class:`~typing.List`\\ [:py:class:`int`] | :py:class:`~typing.List`\\" +" [:py:class:`float`]]" msgstr "" -"该方法接收一个可迭代的消息,并将每条消息发送到 `dst_node_id` 中指定的节点。" +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:obj:`~typing.Union`\\ [:py:class:`int`, " +":py:class:`float`, :py:class:`~typing.List`\\ [:py:class:`int`], " +":py:class:`~typing.List`\\ [:py:class:`float`]]]" -#: flwr.server.driver.driver.Driver.push_messages:6 -#: flwr.server.driver.driver.Driver.send_and_receive:7 of +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy -msgid "An iterable of messages to be sent." -msgstr "要发送的信息迭代。" +msgid ":py:obj:`clear `\\ \\(\\)" +msgstr ":py:obj:`clear `\\ \\(\\)" -#: flwr.server.driver.driver.Driver.push_messages:9 of +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy -msgid "" -"**message_ids** -- An iterable of IDs for the messages that were sent, which " -"can be used to pull replies." -msgstr "**message_ids** -- 已发送信息的可迭代 ID,可用于提取回复信息。" +msgid ":py:obj:`count_bytes `\\ \\(\\)" +msgstr ":py:obj:`count_bytes `\\ \\(\\)" -#: flwr.server.driver.driver.Driver.send_and_receive:3 of +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy -msgid "" -"This method sends a list of messages to their destination node IDs and then " -"waits for the replies. 
It continues to pull replies until either all replies " -"are received or the specified timeout duration is exceeded." -msgstr "" -"该方法会向目标节点 ID 发送信息列表,然后等待回复。它会继续提取回复,直到收到" -"所有回复或超过指定的超时时间。" +msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" +msgstr ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" -#: flwr.server.driver.driver.Driver.send_and_receive:9 of +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy -msgid "" -"The timeout duration in seconds. If specified, the method will wait for " -"replies for this duration. If `None`, there is no time limit and the method " -"will wait until replies for all messages are received." -msgstr "" -"超时时间(秒)。如果指定,该方法将在此期限内等待回复。如果指定为 \"无\",则没" -"有时间限制,该方法将等待直到收到所有信息的回复。" +msgid ":py:obj:`items `\\ \\(\\)" +msgstr ":py:obj:`items `\\ \\(\\)" -#: flwr.server.driver.driver.Driver.send_and_receive:14 of +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy -msgid "" -"**replies** -- An iterable of reply messages received from the SuperLink." -msgstr "**replies** -- 从超级链接收到的回复信息的迭代。" +msgid ":py:obj:`keys `\\ \\(\\)" +msgstr ":py:obj:`keys `\\ \\(\\)" -#: flwr.server.driver.driver.Driver.send_and_receive:18 -#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:53 -#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:60 -#: of +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy -msgid "Notes" -msgstr "无" +msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" +msgstr ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" -#: flwr.server.driver.driver.Driver.send_and_receive:19 of +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy msgid "" -"This method uses `push_messages` to send the messages and `pull_messages` to " -"collect the replies. If `timeout` is set, the method may not return replies " -"for all sent messages. A message remains valid until its TTL, which is not " -"affected by `timeout`." +":py:obj:`update `\\ \\(\\[E\\, " +"\\]\\*\\*F\\)" msgstr "" -"该方法使用 `push_messages` 发送信息,并使用 `pull_messages` 收集回复。如果设" -"置了 `timeout`,该方法可能不会返回所有已发送消息的回复。消息在其 TTL 之前一直" -"有效,不受 `timeout` 影响。" +":py:obj:`update `\\ \\(\\[E\\, " +"\\]\\*\\*F\\)" -#: ../../source/ref-api/flwr.server.History.rst:2 +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy -msgid "History" -msgstr "历史" +msgid ":py:obj:`values `\\ \\(\\)" +msgstr ":py:obj:`values `\\ \\(\\)" -#: flwr.server.history.History.add_loss_centralized:1::1 of +#: ../../source/ref-api/flwr.common.NDArray.rst:2 #, fuzzy -msgid "" -":py:obj:`add_loss_centralized `\\ " -"\\(server\\_round\\, loss\\)" -msgstr "" -":py:obj:`add_loss_centralized `\\ " -"\\(server\\_round\\, loss\\)" +msgid "NDArray" +msgstr "NDArray" -#: flwr.server.history.History.add_loss_centralized:1 -#: flwr.server.history.History.add_loss_centralized:1::1 of +#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 #, fuzzy -msgid "Add one loss entry (from centralized evaluation)." 
-msgstr "集中评估" +msgid ":py:obj:`tensors `\\" +msgstr ":py:obj:`tensors `\\" -#: flwr.server.history.History.add_loss_centralized:1::1 of +#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 #, fuzzy -msgid "" -":py:obj:`add_loss_distributed `\\ " -"\\(server\\_round\\, loss\\)" -msgstr "" -":py:obj:`add_loss_distributed `\\ " -"\\(server\\_round\\, loss\\)" +msgid ":py:obj:`tensor_type `\\" +msgstr ":py:obj:`tensor_type `\\" -#: flwr.server.history.History.add_loss_centralized:1::1 -#: flwr.server.history.History.add_loss_distributed:1 of +#: ../../source/ref-api/flwr.common.ParametersRecord.rst:2 #, fuzzy -msgid "Add one loss entry (from distributed evaluation)." -msgstr "增加一个损失条目(来自分布式评估)。" +msgid "ParametersRecord" +msgstr "参数" -#: flwr.server.history.History.add_loss_centralized:1::1 of +#: flwr.common.record.parametersrecord.ParametersRecord:1 of #, fuzzy msgid "" -":py:obj:`add_metrics_centralized `\\ \\(server\\_round\\, metrics\\)" +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:class:`~flwr.common.record.parametersrecord.Array`]" msgstr "" -":py:obj:`add_metrics_centralized `\\ \\(server\\_round\\, metrics\\)" - -#: flwr.server.history.History.add_loss_centralized:1::1 -#: flwr.server.history.History.add_metrics_centralized:1 of -#, fuzzy -msgid "Add metrics entries (from centralized evaluation)." -msgstr "集中评估" +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:class:`~flwr.common.record.parametersrecord.Array`]" -#: flwr.server.history.History.add_loss_centralized:1::1 of +#: flwr.common.record.parametersrecord.ParametersRecord:3 of #, fuzzy msgid "" -":py:obj:`add_metrics_distributed `\\ \\(server\\_round\\, metrics\\)" +"A dataclass storing named Arrays in order. This means that it holds " +"entries as an OrderedDict[str, Array]. ParametersRecord objects can be " +"viewed as an equivalent to PyTorch's state_dict, but holding serialised " +"tensors instead." msgstr "" -":py:obj:`add_metrics_distributed `\\ \\(server\\_round\\, metrics\\)" +"按顺序存储命名数组的数据类。这意味着它以 OrderedDict[str, Array] 的形式保存条目。ParametersRecord " +"对象相当于 PyTorch 的 state_dict,但它保存的是序列化的张量。" -#: flwr.server.history.History.add_loss_centralized:1::1 -#: flwr.server.history.History.add_metrics_distributed:1 of +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy -msgid "Add metrics entries (from distributed evaluation)." -msgstr "定制的集中/分布式评估" +msgid ":py:obj:`clear `\\ \\(\\)" +msgstr ":py:obj:`clear `\\ \\(\\)" -#: flwr.server.history.History.add_loss_centralized:1::1 of +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy -msgid "" -":py:obj:`add_metrics_distributed_fit `\\ \\(server\\_round\\, ...\\)" -msgstr "" -":py:obj:`add_metrics_distributed_fit `\\ \\(server\\_round\\, ...\\)" +msgid ":py:obj:`count_bytes `\\ \\(\\)" +msgstr ":py:obj:`count_bytes `\\ \\(\\)" -#: flwr.server.history.History.add_loss_centralized:1::1 -#: flwr.server.history.History.add_metrics_distributed_fit:1 of +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy -msgid "Add metrics entries (from distributed fit)." 
-msgstr "添加度量条目(来自分布式拟合)。" +msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" +msgstr ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" -#: ../../source/ref-api/flwr.server.LegacyContext.rst:2 +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy -msgid "LegacyContext" -msgstr "遗留上下文" +msgid ":py:obj:`items `\\ \\(\\)" +msgstr ":py:obj:`items `\\ \\(\\)" -#: flwr.server.compat.legacy_context.LegacyContext:1 of +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy -msgid "Bases: :py:class:`~flwr.common.context.Context`" -msgstr "Bases: :py:class:`~flwr.common.context.Context`" +msgid ":py:obj:`keys `\\ \\(\\)" +msgstr ":py:obj:`keys `\\ \\(\\)" -#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy -msgid ":py:obj:`config `\\" -msgstr "server.strategy.Strategy" +msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" +msgstr ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" -#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy -msgid ":py:obj:`strategy `\\" -msgstr "server.strategy.Strategy" +msgid "" +":py:obj:`update `\\ \\(\\[E\\, " +"\\]\\*\\*F\\)" +msgstr "" +":py:obj:`update `\\ \\(\\[E\\, " +"\\]\\*\\*F\\)" -#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of #, fuzzy -msgid ":py:obj:`client_manager `\\" -msgstr ":py:obj:`client_manager `\\" +msgid ":py:obj:`values `\\ \\(\\)" +msgstr ":py:obj:`values `\\ \\(\\)" -#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#: flwr.common.record.parametersrecord.ParametersRecord.count_bytes:3 of #, fuzzy -msgid ":py:obj:`history `\\" -msgstr "server.strategy.Strategy" +msgid "" +"Note that a small amount of Bytes might also be included in this counting" +" that correspond to metadata of the serialized object (e.g. of NumPy " +"array) needed for deseralization." +msgstr "请注意,该计数中还可能包含少量字节,这些字节与序列化对象(如 NumPy 数组)的元数据相对应,需要进行去eralization。" -#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#: ../../source/ref-api/flwr.common.ReconnectIns.rst:2 #, fuzzy -msgid ":py:obj:`state `\\" -msgstr "server.strategy.Strategy" +msgid "ReconnectIns" +msgstr "启用 SSL 连接" -#: flwr.server.server.Server.client_manager:1::1 of +#: ../../source/ref-api/flwr.common.ReconnectIns.rst:28::1 #, fuzzy -msgid ":py:obj:`client_manager `\\ \\(\\)" -msgstr ":py:obj:`client_manager `\\ \\(\\)" +msgid ":py:obj:`seconds `\\" +msgstr ":py:obj:`seconds `\\" -#: flwr.server.server.Server.client_manager:1 -#: flwr.server.server.Server.client_manager:1::1 of +#: ../../source/ref-api/flwr.common.RecordSet.rst:2 #, fuzzy -msgid "Return ClientManager." -msgstr "返回客户端(本身)。" +msgid "RecordSet" +msgstr "RecordSet" -#: flwr.server.server.Server.client_manager:1::1 of +#: flwr.common.RecordSet.configs_records:1::1 of #, fuzzy -msgid "" -":py:obj:`disconnect_all_clients `\\ \\(timeout\\)" -msgstr "" -":py:obj:`disconnect_all_clients `\\ \\(timeout\\)" +msgid ":py:obj:`configs_records `\\" +msgstr ":py:obj:`configs_records `\\" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.disconnect_all_clients:1 of +#: flwr.common.RecordSet.configs_records:1 +#: flwr.common.RecordSet.configs_records:1::1 of #, fuzzy -msgid "Send shutdown signal to all clients." -msgstr "向所有客户端发送关闭信号。" +msgid "Dictionary holding ConfigsRecord instances." 
+msgstr "包含 ConfigsRecord 实例的字典。" -#: flwr.server.server.Server.client_manager:1::1 of +#: flwr.common.RecordSet.configs_records:1::1 of #, fuzzy -msgid "" -":py:obj:`evaluate_round `\\ " -"\\(server\\_round\\, timeout\\)" -msgstr "" -":py:obj:`evaluate_round `\\ " -"\\(server\\_round\\, timeout\\)" +msgid ":py:obj:`metrics_records `\\" +msgstr ":py:obj:`metrics_records `\\" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.evaluate_round:1 of +#: flwr.common.RecordSet.configs_records:1::1 +#: flwr.common.RecordSet.metrics_records:1 of #, fuzzy -msgid "Validate current global model on a number of clients." -msgstr "当前(全局)模型参数。" +msgid "Dictionary holding MetricsRecord instances." +msgstr "保存 MetricsRecord 实例的字典。" -#: flwr.server.server.Server.client_manager:1::1 of +#: flwr.common.RecordSet.configs_records:1::1 of #, fuzzy -msgid ":py:obj:`fit `\\ \\(num\\_rounds\\, timeout\\)" -msgstr ":py:obj:`fit `\\ \\(num\\_rounds\\, timeout\\)" +msgid ":py:obj:`parameters_records `\\" +msgstr ":py:obj:`parameters_records `\\" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.fit:1 of +#: flwr.common.RecordSet.configs_records:1::1 +#: flwr.common.RecordSet.parameters_records:1 of #, fuzzy -msgid "Run federated averaging for a number of rounds." -msgstr "联邦平均动量策略。" +msgid "Dictionary holding ParametersRecord instances." +msgstr "存放 ParametersRecord 实例的字典。" -#: flwr.server.server.Server.client_manager:1::1 of +#: ../../source/ref-api/flwr.common.ServerMessage.rst:2 #, fuzzy -msgid "" -":py:obj:`fit_round `\\ \\(server\\_round\\, " -"timeout\\)" -msgstr "" -":py:obj:`fit_round `\\ \\(server\\_round\\, " -"timeout\\)" +msgid "ServerMessage" +msgstr "服务器端" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.fit_round:1 of +#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 #, fuzzy -msgid "Perform a single round of federated averaging." -msgstr "本轮联邦学习。" +msgid ":py:obj:`evaluate_ins `\\" +msgstr ":py:obj:`evaluate_ins `\\" -#: flwr.server.server.Server.client_manager:1::1 of +#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 #, fuzzy -msgid "" -":py:obj:`set_max_workers `\\ " -"\\(max\\_workers\\)" -msgstr "" -":py:obj:`set_max_workers `\\ " -"\\(max\\_workers\\)" +msgid ":py:obj:`fit_ins `\\" +msgstr ":py:obj:`fit_ins `\\" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.set_max_workers:1 of +#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 #, fuzzy -msgid "Set the max_workers used by ThreadPoolExecutor." -msgstr "设置 ThreadPoolExecutor 使用的最大工作器数。" +msgid "" +":py:obj:`get_parameters_ins " +"`\\" +msgstr "" +":py:obj:`get_parameters_ins " +"`\\" -#: flwr.server.server.Server.client_manager:1::1 of +#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 #, fuzzy msgid "" -":py:obj:`set_strategy `\\ \\(strategy\\)" +":py:obj:`get_properties_ins " +"`\\" msgstr "" -":py:obj:`set_strategy `\\ \\(strategy\\)" +":py:obj:`get_properties_ins " +"`\\" -#: flwr.server.server.Server.client_manager:1::1 -#: flwr.server.server.Server.set_strategy:1 of +#: ../../source/ref-api/flwr.common.Status.rst:2 #, fuzzy -msgid "Replace server strategy." 
-msgstr "server.strategy" +msgid "Status" +msgstr "客户端状态。" -#: ../../source/ref-api/flwr.server.ServerApp.rst:2 +#: ../../source/ref-api/flwr.common.Status.rst:29::1 #, fuzzy -msgid "ServerApp" -msgstr "服务器" +msgid ":py:obj:`code `\\" +msgstr ":py:obj:`code `\\" -#: flwr.server.server_app.ServerApp:5 of +#: ../../source/ref-api/flwr.common.Status.rst:29::1 #, fuzzy -msgid "Use the `ServerApp` with an existing `Strategy`:" -msgstr "使用现有策略" +msgid ":py:obj:`message `\\" +msgstr ":py:obj:`message `\\" -#: flwr.server.server_app.ServerApp:15 of +#: ../../source/ref-api/flwr.common.array_from_numpy.rst:2 #, fuzzy -msgid "Use the `ServerApp` with a custom main function:" -msgstr "使用带有自定义主函数的 `ServerApp`:" +msgid "array\\_from\\_numpy" +msgstr "array\\_from\\_numpy" -#: flwr.server.server_app.ServerApp.main:1::1 of +#: ../../source/ref-api/flwr.common.bytes_to_ndarray.rst:2 #, fuzzy -msgid ":py:obj:`main `\\ \\(\\)" -msgstr "server.strategy.Strategy" +msgid "bytes\\_to\\_ndarray" +msgstr "bytes\\_to\\_ndarray" -#: flwr.server.server_app.ServerApp.main:1 -#: flwr.server.server_app.ServerApp.main:1::1 of +#: ../../source/ref-api/flwr.common.configure.rst:2 #, fuzzy -msgid "Return a decorator that registers the main fn with the server app." -msgstr "返回向服务器应用程序注册 main fn 的装饰器。" +msgid "configure" +msgstr "配置日志记录" -#: ../../source/ref-api/flwr.server.ServerConfig.rst:2 +#: ../../source/ref-api/flwr.common.event.rst:2 #, fuzzy -msgid "ServerConfig" -msgstr "服务器" +msgid "event" +msgstr "事件" -#: flwr.server.server_config.ServerConfig:3 of +#: ../../source/ref-api/flwr.common.log.rst:2 #, fuzzy +msgid "log" +msgstr "登录" + +#: logging.Logger.log:3 of msgid "" -"All attributes have default values which allows users to configure just the " -"ones they care about." -msgstr "所有属性都有默认值,用户只需配置自己关心的属性即可。" +"To pass exception information, use the keyword argument exc_info with a " +"true value, e.g." 
+msgstr "要传递异常信息,请使用带 true 值的关键字参数 exc_info,例如。" -#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 -#, fuzzy -msgid ":py:obj:`num_rounds `\\" -msgstr ":py:obj:`num_rounds `\\" +#: logging.Logger.log:6 of +#, python-format +msgid "logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)" +msgstr "logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)" -#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 +#: ../../source/ref-api/flwr.common.ndarray_to_bytes.rst:2 #, fuzzy -msgid ":py:obj:`round_timeout `\\" -msgstr ":py:obj:`round_timeout `\\" +msgid "ndarray\\_to\\_bytes" +msgstr "ndarray\\_to\\_bytes" -#: ../../source/ref-api/flwr.server.SimpleClientManager.rst:2 +#: ../../source/ref-api/flwr.common.ndarrays_to_parameters.rst:2 #, fuzzy -msgid "SimpleClientManager" -msgstr "SimpleClientManager" +msgid "ndarrays\\_to\\_parameters" +msgstr "ndarrays\\_to\\_parameters" -#: flwr.server.client_manager.SimpleClientManager:1 of +#: ../../source/ref-api/flwr.common.now.rst:2 #, fuzzy -msgid "Bases: :py:class:`~flwr.server.client_manager.ClientManager`" -msgstr "Bases: :py:class:`~flwr.server.client_manager.ClientManager`" +msgid "now" +msgstr "现在" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +#: ../../source/ref-api/flwr.common.parameters_to_ndarrays.rst:2 #, fuzzy -msgid ":py:obj:`all `\\ \\(\\)" -msgstr ":py:obj:`all `\\ \\(\\)" +msgid "parameters\\_to\\_ndarrays" +msgstr "parameters\\_to\\_ndarrays" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +#: ../../source/ref-api/flwr.server.rst:2 +msgid "server" +msgstr "服务器" + +#: ../../source/ref-api/flwr.server.rst:26::1 #, fuzzy -msgid "" -":py:obj:`num_available `\\ " -"\\(\\)" -msgstr "" -":py:obj:`num_available `\\ " -"\\(\\)" +msgid ":py:obj:`run_driver_api `\\ \\(\\)" +msgstr ":py:obj:`run_driver_api `\\ \\(\\)" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.server.app.run_driver_api:1 of #, fuzzy -msgid "" -":py:obj:`register `\\ \\(client\\)" -msgstr "" -":py:obj:`register `\\ \\(client\\)" +msgid "Run Flower server (Driver API)." +msgstr "flower-driver-api" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +#: ../../source/ref-api/flwr.server.rst:26::1 #, fuzzy -msgid "" -":py:obj:`sample `\\ " -"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" -msgstr "" -":py:obj:`sample `\\ " -"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" +msgid ":py:obj:`run_fleet_api `\\ \\(\\)" +msgstr ":py:obj:`run_fleet_api `\\ \\(\\)" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.server.app.run_fleet_api:1 of #, fuzzy -msgid "" -":py:obj:`unregister `\\ " -"\\(client\\)" -msgstr "" -":py:obj:`unregister `\\ " -"\\(client\\)" +msgid "Run Flower server (Fleet API)." 
+msgstr "Flower 服务器。" -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +#: ../../source/ref-api/flwr.server.rst:26::1 #, fuzzy -msgid "" -":py:obj:`wait_for `\\ " -"\\(num\\_clients\\[\\, timeout\\]\\)" -msgstr "" -":py:obj:`wait_for `\\ " -"\\(num\\_clients\\[\\, timeout\\]\\)" +msgid ":py:obj:`run_server_app `\\ \\(\\)" +msgstr ":py:obj:`run_server_app `\\ \\(\\)" -#: flwr.server.client_manager.SimpleClientManager.wait_for:3 of +#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.server.run_serverapp.run_server_app:1 of #, fuzzy -msgid "" -"Blocks until the requested number of clients is available or until a timeout " -"is reached. Current timeout default: 1 day." -msgstr "阻塞,直到请求的客户端数量可用或达到超时为止。当前超时默认值:1 天。" +msgid "Run Flower server app." +msgstr "Flower 服务器。" -#: flwr.server.client_manager.SimpleClientManager.wait_for:6 of +#: ../../source/ref-api/flwr.server.rst:26::1 #, fuzzy -msgid "The number of clients to wait for." -msgstr "需要等待的客户数量。" +msgid ":py:obj:`run_superlink `\\ \\(\\)" +msgstr ":py:obj:`run_superlink `\\ \\(\\)" -#: flwr.server.client_manager.SimpleClientManager.wait_for:8 of +#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.server.app.run_superlink:1 of #, fuzzy -msgid "The time in seconds to wait for, defaults to 86400 (24h)." -msgstr "以秒为单位的等待时间,默认为 86400(24 小时)。" +msgid "Run Flower SuperLink (Driver API and Fleet API)." +msgstr "运行 Flower 服务器(Driver API 和 Fleet API)。" -#: flwr.server.client_manager.SimpleClientManager.wait_for:11 of +#: ../../source/ref-api/flwr.server.rst:26::1 #, fuzzy -msgid "**success**" -msgstr "**success**" +msgid "" +":py:obj:`start_server `\\ \\(\\*\\[\\, " +"server\\_address\\, server\\, ...\\]\\)" +msgstr "" +":py:obj:`start_server `\\ \\(\\*\\[\\, " +"server\\_address\\, server\\, ...\\]\\)" -#: ../../source/ref-api/flwr.server.run_driver_api.rst:2 +#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.server.app.start_server:1 of +msgid "Start a Flower server using the gRPC transport layer." +msgstr "使用 gRPC 传输层启动 Flower 服务器。" + +#: ../../source/ref-api/flwr.server.rst:40::1 #, fuzzy -msgid "run\\_driver\\_api" -msgstr "flower-driver-api" +msgid ":py:obj:`ClientManager `\\ \\(\\)" +msgstr ":py:obj:`ClientManager `\\ \\(\\)" -#: ../../source/ref-api/flwr.server.run_fleet_api.rst:2 +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.client_manager.ClientManager:1 of #, fuzzy -msgid "run\\_fleet\\_api" -msgstr "run\\_fleet\\_api" +msgid "Abstract base class for managing Flower clients." +msgstr "Flower 客户端的抽象基类。" -#: ../../source/ref-api/flwr.server.run_server_app.rst:2 +#: ../../source/ref-api/flwr.server.rst:40::1 #, fuzzy -msgid "run\\_server\\_app" -msgstr "run\\_server\\_app" +msgid ":py:obj:`Driver `\\ \\(\\)" +msgstr ":py:obj:`run_driver_api `\\ \\(\\)" -#: ../../source/ref-api/flwr.server.run_superlink.rst:2 +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.driver.driver.Driver:1 of #, fuzzy -msgid "run\\_superlink" -msgstr "flower-superlink" +msgid "Abstract base Driver class for the Driver API." +msgstr "Flower 客户端的抽象基类。" -#: ../../source/ref-api/flwr.server.start_driver.rst:2 +#: ../../source/ref-api/flwr.server.rst:40::1 #, fuzzy -msgid "start\\_driver" -msgstr "启动客户端" +msgid ":py:obj:`History `\\ \\(\\)" +msgstr ":py:obj:`History `\\ \\(\\)" -#: flwr.server.compat.app.start_driver:3 of +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.history.History:1 of #, fuzzy -msgid "" -"The IPv4 or IPv6 address of the Driver API server. Defaults to " -"`\"[::]:8080\"`." 
-msgstr "服务器的 IPv4 或 IPv6 地址。默认为 `\"[::]:8080\"。" +msgid "History class for training and/or evaluation metrics collection." +msgstr "**hist** -- 包含训练和评估指标的对象。" -#: flwr.server.compat.app.start_driver:6 of +#: ../../source/ref-api/flwr.server.rst:40::1 #, fuzzy msgid "" -"A server implementation, either `flwr.server.Server` or a subclass thereof. " -"If no instance is provided, then `start_driver` will create one." +":py:obj:`LegacyContext `\\ \\(state\\[\\, " +"config\\, strategy\\, ...\\]\\)" msgstr "" -"服务器实现,可以是 `flwr.server.Server` 或其子类。如果没有提供实例," -"`start_server` 将创建一个。" +":py:obj:`LegacyContext `\\ \\(state\\[\\, " +"config\\, strategy\\, ...\\]\\)" + +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.compat.legacy_context.LegacyContext:1 of +#, fuzzy +msgid "Legacy Context." +msgstr "传承背景。" -#: flwr.server.app.start_server:9 flwr.server.compat.app.start_driver:10 -#: flwr.simulation.app.start_simulation:28 of +#: ../../source/ref-api/flwr.server.rst:40::1 +#, fuzzy msgid "" -"Currently supported values are `num_rounds` (int, default: 1) and " -"`round_timeout` in seconds (float, default: None)." +":py:obj:`Server `\\ \\(\\*\\, client\\_manager\\[\\, " +"strategy\\]\\)" msgstr "" -"目前支持的值有:`num_rounds`(int,默认值:1)和以秒为单位的`round_timeout`" -"(float,默认值:无)。" +":py:obj:`Server `\\ \\(\\*\\, client\\_manager\\[\\, " +"strategy\\]\\)" -#: flwr.server.app.start_server:12 flwr.server.compat.app.start_driver:13 of +#: ../../source/ref-api/flwr.server.rst:40::1 +#, fuzzy msgid "" -"An implementation of the abstract base class `flwr.server.strategy." -"Strategy`. If no strategy is provided, then `start_server` will use `flwr." -"server.strategy.FedAvg`." -msgstr "" -"抽象基类 `flwr.server.strategy.Strategy` 的实现。如果没有提供策略," -"`start_server` 将使用 `flwr.server.strategy.FedAvg`。" +":py:obj:`ServerApp `\\ \\(\\[server\\, config\\, " +"strategy\\, ...\\]\\)" +msgstr "server.strategy.Strategy" + +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.server_app.ServerApp:1 of +#, fuzzy +msgid "Flower ServerApp." +msgstr "Flower 服务器。" -#: flwr.server.compat.app.start_driver:17 of +#: ../../source/ref-api/flwr.server.rst:40::1 #, fuzzy msgid "" -"An implementation of the class `flwr.server.ClientManager`. If no " -"implementation is provided, then `start_driver` will use `flwr.server." -"SimpleClientManager`." +":py:obj:`ServerConfig `\\ \\(\\[num\\_rounds\\," +" round\\_timeout\\]\\)" msgstr "" -"抽象基类 `flwr.server.ClientManager` 的实现。如果没有提供实现," -"`start_server` 将使用 `flwr.server.client_manager.SimpleClientManager`。" +"Flower 1.0: ``start_server(..., " +"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " +"...)``" -#: flwr.server.compat.app.start_driver:25 of +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.server_config.ServerConfig:1 of #, fuzzy -msgid "The Driver object to use." -msgstr "要使用的驱动程序对象。" +msgid "Flower server config." +msgstr "Flower 服务器。" -#: flwr.server.app.start_server:37 flwr.server.compat.app.start_driver:28 of -msgid "**hist** -- Object containing training and evaluation metrics." 
-msgstr "**hist** -- 包含训练和评估指标的对象。" +#: ../../source/ref-api/flwr.server.rst:40::1 +#, fuzzy +msgid ":py:obj:`SimpleClientManager `\\ \\(\\)" +msgstr ":py:obj:`SimpleClientManager `\\ \\(\\)" -#: flwr.server.compat.app.start_driver:33 of +#: ../../source/ref-api/flwr.server.rst:40::1 +#: flwr.server.client_manager.SimpleClientManager:1 of #, fuzzy -msgid "Starting a driver that connects to an insecure server:" -msgstr "启动不安全的服务器:" +msgid "Provides a pool of available clients." +msgstr "使用部分可用客户进行评估。" -#: flwr.server.compat.app.start_driver:37 of +#: ../../source/ref-api/flwr.server.rst:59::1 #, fuzzy -msgid "Starting a driver that connects to an SSL-enabled server:" -msgstr "启动支持 SSL 的服务器:" +msgid ":py:obj:`flwr.server.strategy `\\" +msgstr "server.strategy.Strategy" -#: ../../source/ref-api/flwr.server.start_server.rst:2 +#: ../../source/ref-api/flwr.server.rst:59::1 +#: flwr.server.strategy:1 of +msgid "Contains the strategy abstraction and different implementations." +msgstr "包含策略抽象和不同的实现方法。" + +#: ../../source/ref-api/flwr.server.rst:59::1 #, fuzzy -msgid "start\\_server" -msgstr "server.start_server" +msgid ":py:obj:`flwr.server.workflow `\\" +msgstr "server.strategy.Strategy" -#: flwr.server.app.start_server:3 of -msgid "The IPv4 or IPv6 address of the server. Defaults to `\"[::]:8080\"`." -msgstr "服务器的 IPv4 或 IPv6 地址。默认为 `\"[::]:8080\"。" +#: ../../source/ref-api/flwr.server.rst:59::1 +#: flwr.server.workflow:1 of +#, fuzzy +msgid "Workflows." +msgstr "工作流程" -#: flwr.server.app.start_server:5 of -msgid "" -"A server implementation, either `flwr.server.Server` or a subclass thereof. " -"If no instance is provided, then `start_server` will create one." -msgstr "" -"服务器实现,可以是 `flwr.server.Server` 或其子类。如果没有提供实例," -"`start_server` 将创建一个。" +#: ../../source/ref-api/flwr.server.ClientManager.rst:2 +#, fuzzy +msgid "ClientManager" +msgstr "客户端" -#: flwr.server.app.start_server:16 of -msgid "" -"An implementation of the abstract base class `flwr.server.ClientManager`. If " -"no implementation is provided, then `start_server` will use `flwr.server." -"client_manager.SimpleClientManager`." -msgstr "" -"抽象基类 `flwr.server.ClientManager` 的实现。如果没有提供实现," -"`start_server` 将使用 `flwr.server.client_manager.SimpleClientManager`。" +#: flwr.server.client_manager.ClientManager.all:1::1 of +#, fuzzy +msgid ":py:obj:`all `\\ \\(\\)" +msgstr ":py:obj:`all `\\ \\(\\)" -#: flwr.server.app.start_server:21 of -msgid "" -"The maximum length of gRPC messages that can be exchanged with the Flower " -"clients. The default should be sufficient for most models. Users who train " -"very large models might need to increase this value. Note that the Flower " -"clients need to be started with the same value (see `flwr.client." -"start_client`), otherwise clients will not know about the increased limit " -"and block larger messages." -msgstr "" -"可与 Flower 客户端交换的 gRPC 消息的最大长度:默认值对大多数模型都足够了。训" -"练超大模型的用户可能需要增加该值。请注意,Flower 客户端需要以相同的值启动(请" -"参阅 `flwr.client.start_client`),否则客户端将不知道已增加的限制并阻止更大的" -"消息。" +#: flwr.server.client_manager.ClientManager.all:1 +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.all:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +#, fuzzy +msgid "Return all available clients." 
+msgstr "返回所有可用客户。" -#: flwr.server.app.start_server:42 of -msgid "Starting an insecure server:" -msgstr "启动不安全的服务器:" +#: flwr.server.client_manager.ClientManager.all:1::1 of +#, fuzzy +msgid ":py:obj:`num_available `\\ \\(\\)" +msgstr ":py:obj:`num_available `\\ \\(\\)" -#: flwr.server.app.start_server:46 of -msgid "Starting an SSL-enabled server:" -msgstr "启动支持 SSL 的服务器:" +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.num_available:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.num_available:1 of +#, fuzzy +msgid "Return the number of available clients." +msgstr "返回样本大小和所需的可用客户数量。" -#: ../../source/ref-api/flwr.server.strategy.rst:2 +#: flwr.server.client_manager.ClientManager.all:1::1 of #, fuzzy -msgid "strategy" -msgstr "Krum 策略。" +msgid ":py:obj:`register `\\ \\(client\\)" +msgstr ":py:obj:`register `\\ \\(client\\)" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.register:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.register:1 of +#, fuzzy +msgid "Register Flower ClientProxy instance." +msgstr "注册 Flower ClientProxy 实例。" + +#: flwr.server.client_manager.ClientManager.all:1::1 of #, fuzzy msgid "" -":py:obj:`Bulyan `\\ \\(\\*\\, fraction\\_fit\\, " -"fraction\\_evaluate\\, ...\\)" +":py:obj:`sample `\\ " +"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" msgstr "" -":py:obj:`Bulyan `\\ \\(\\*\\, fraction\\_fit\\, " -"fraction\\_evaluate\\, ...\\)" +":py:obj:`sample `\\ " +"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.bulyan.Bulyan:1 of -msgid "Bulyan strategy." -msgstr "Bulyan 策略。" +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.sample:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.sample:1 of +#, fuzzy +msgid "Sample a number of Flower ClientProxy instances." +msgstr "取样若干 Flower ClientProxy 实例。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.client_manager.ClientManager.all:1::1 of #, fuzzy -msgid "" -":py:obj:`DPFedAvgAdaptive `\\ " -"\\(strategy\\, num\\_sampled\\_clients\\)" -msgstr "" -":py:obj:`DPFedAvgAdaptive `\\ " -"\\(strategy\\, num\\_sampled\\_clients\\)" +msgid ":py:obj:`unregister `\\ \\(client\\)" +msgstr ":py:obj:`unregister `\\ \\(client\\)" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of -msgid "Wrapper for configuring a Strategy for DP with Adaptive Clipping." -msgstr "用于配置具有自适应剪切功能的 DP 策略的包装器。" +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.unregister:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.unregister:1 of +#, fuzzy +msgid "Unregister Flower ClientProxy instance." 
+msgstr "取消注册 Flower ClientProxy 实例。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.client_manager.ClientManager.all:1::1 of #, fuzzy msgid "" -":py:obj:`DPFedAvgFixed `\\ " -"\\(strategy\\, num\\_sampled\\_clients\\, ...\\)" +":py:obj:`wait_for `\\ " +"\\(num\\_clients\\, timeout\\)" msgstr "" -":py:obj:`DPFedAvgFixed `\\ " -"\\(strategy\\, num\\_sampled\\_clients\\, ...\\)" - -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 of -msgid "Wrapper for configuring a Strategy for DP with Fixed Clipping." -msgstr "封装器,用于为具有固定剪切功能的 DP 配置策略。" +":py:obj:`wait_for `\\ " +"\\(num\\_clients\\, timeout\\)" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.wait_for:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.wait_for:1 of #, fuzzy -msgid "" -":py:obj:`DifferentialPrivacyClientSideAdaptiveClipping `\\ \\(...\\)" -msgstr "" -":py:obj:`DifferentialPrivacyClientSideAdaptiveClipping `\\ \\(...\\)" +msgid "Wait until at least `num_clients` are available." +msgstr "等待至少 `num_clients` 可用。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:1 -#: of +#: flwr.server.client_manager.ClientManager.num_available:3 +#: flwr.server.client_manager.SimpleClientManager.num_available:3 of #, fuzzy -msgid "Strategy wrapper for central DP with client-side adaptive clipping." -msgstr "用于配置具有自适应剪切功能的 DP 策略的包装器。" +msgid "**num_available** -- The number of currently available clients." +msgstr "**num_available** -- 当前可用客户端的数量。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.client_manager.ClientManager.register:6 +#: flwr.server.client_manager.SimpleClientManager.register:6 of #, fuzzy msgid "" -":py:obj:`DifferentialPrivacyServerSideAdaptiveClipping `\\ \\(...\\)" -msgstr "" -":py:obj:`DifferentialPrivacyServerSideAdaptiveClipping `\\ \\(...\\)" +"**success** -- Indicating if registration was successful. False if " +"ClientProxy is already registered or can not be registered for any " +"reason." +msgstr "**success** -- 表示注册是否成功。如果 ClientProxy 已注册或因故无法注册,则为 False。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:1 -#: of +#: flwr.server.client_manager.ClientManager.unregister:3 +#: flwr.server.client_manager.SimpleClientManager.unregister:3 of #, fuzzy -msgid "Strategy wrapper for central DP with server-side adaptive clipping." -msgstr "用于配置具有自适应剪切功能的 DP 策略的包装器。" +msgid "This method is idempotent." 
+msgstr "这种方法是幂等的。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: ../../source/ref-api/flwr.server.Driver.rst:2 +#, fuzzy +msgid "Driver" +msgstr "服务器" + +#: flwr.server.driver.driver.Driver.create_message:1::1 of #, fuzzy msgid "" -":py:obj:`DifferentialPrivacyClientSideFixedClipping `\\ \\(...\\)" +":py:obj:`create_message `\\ " +"\\(content\\, message\\_type\\, ...\\[\\, ttl\\]\\)" msgstr "" -":py:obj:`DifferentialPrivacyClientSideFixedClipping `\\ \\(...\\)" +":py:obj:`create_message `\\ " +"\\(content\\, message\\_type\\, ...\\)" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:1 -#: of +#: flwr.server.driver.driver.Driver.create_message:1 +#: flwr.server.driver.driver.Driver.create_message:1::1 of #, fuzzy -msgid "Strategy wrapper for central DP with client-side fixed clipping." -msgstr "封装器,用于为具有固定剪切功能的 DP 配置策略。" +msgid "Create a new message with specified parameters." +msgstr "使用指定参数创建新信息。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:1::1 of #, fuzzy -msgid "" -":py:obj:`DifferentialPrivacyServerSideFixedClipping `\\ \\(...\\)" -msgstr "" -":py:obj:`DifferentialPrivacyServerSideFixedClipping `\\ \\(...\\)" +msgid ":py:obj:`get_node_ids `\\ \\(\\)" +msgstr ":py:obj:`get_node_ids `\\ \\(\\)" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:1 -#: of +#: flwr.server.driver.driver.Driver.create_message:1::1 +#: flwr.server.driver.driver.Driver.get_node_ids:1 of #, fuzzy -msgid "Strategy wrapper for central DP with server-side fixed clipping." -msgstr "封装器,用于为具有固定剪切功能的 DP 配置策略。" +msgid "Get node IDs." +msgstr "获取节点 ID。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:1::1 of #, fuzzy msgid "" -":py:obj:`FedAdagrad `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +":py:obj:`pull_messages `\\ " +"\\(message\\_ids\\)" msgstr "" -":py:obj:`FedAdagrad `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +":py:obj:`pull_messages `\\ " +"\\(message\\_ids\\)" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedadagrad.FedAdagrad:1 of -msgid "FedAdagrad strategy - Adaptive Federated Optimization using Adagrad." -msgstr "FedAdagrad 策略 - 使用 Adagrad 进行自适应联合优化。" +#: flwr.server.driver.driver.Driver.create_message:1::1 +#: flwr.server.driver.driver.Driver.pull_messages:1 of +#, fuzzy +msgid "Pull messages based on message IDs." +msgstr "根据信息 ID 提取信息。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:1::1 of #, fuzzy msgid "" -":py:obj:`FedAdam `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +":py:obj:`push_messages `\\ " +"\\(messages\\)" msgstr "" -":py:obj:`FedAdam `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +":py:obj:`push_messages `\\ " +"\\(messages\\)" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedadam.FedAdam:1 of -msgid "FedAdam - Adaptive Federated Optimization using Adam." -msgstr "FedAdam - 使用 Adam 进行自适应联合优化。" +#: flwr.server.driver.driver.Driver.create_message:1::1 +#: flwr.server.driver.driver.Driver.push_messages:1 of +#, fuzzy +msgid "Push messages to specified node IDs." 
+msgstr "向指定的节点 ID 推送信息。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:1::1 of #, fuzzy msgid "" -":py:obj:`FedAvg `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" +":py:obj:`send_and_receive `\\ " +"\\(messages\\, \\*\\[\\, timeout\\]\\)" msgstr "" -":py:obj:`FedAvg `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" +"Flower 1.0: ``start_server(..., " +"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " +"...)``" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedavg.FedAvg:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:1 of -msgid "Federated Averaging strategy." -msgstr "联邦平均策略。" +#: flwr.server.driver.driver.Driver.create_message:1::1 +#: flwr.server.driver.driver.Driver.send_and_receive:1 of +#, fuzzy +msgid "Push messages to specified node IDs and pull the reply messages." +msgstr "向指定的节点 ID 推送信息并提取回复信息。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:3 of #, fuzzy msgid "" -":py:obj:`FedAvgAndroid `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" -msgstr "" -":py:obj:`FedAvgAndroid `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +"This method constructs a new `Message` with given content and metadata. " +"The `run_id` and `src_node_id` will be set automatically." +msgstr "本方法使用给定的内容和元数据构建新的 `Message` 。run_id \"和 \"src_node_id \"将自动设置。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:6 of #, fuzzy msgid "" -":py:obj:`FedAvgM `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" -msgstr "" -":py:obj:`FedAvgM `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" - -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedavgm.FedAvgM:1 of -msgid "Federated Averaging with Momentum strategy." -msgstr "联邦平均动量策略。" +"The content for the new message. This holds records that are to be sent " +"to the destination node." +msgstr "新信息的内容。其中包含要发送到目的节点的记录。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:9 of #, fuzzy msgid "" -":py:obj:`FedMedian `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" -msgstr "" -":py:obj:`FedMedian `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +"The type of the message, defining the action to be executed on the " +"receiving end." +msgstr "信息类型,定义接收端要执行的操作。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedmedian.FedMedian:1 of +#: flwr.server.driver.driver.Driver.create_message:12 of #, fuzzy -msgid "Configurable FedMedian strategy implementation." -msgstr "可配置的 FedAvg 策略实施。" +msgid "The ID of the destination node to which the message is being sent." +msgstr "信息发送目的地节点的 ID。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:14 of #, fuzzy msgid "" -":py:obj:`FedOpt `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" -msgstr "" -":py:obj:`FedOpt `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" +"The ID of the group to which this message is associated. In some " +"settings, this is used as the FL round." +msgstr "与该信息相关联的组的 ID。在某些设置中,它被用作 FL 轮。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedopt.FedOpt:1 of +#: flwr.server.driver.driver.Driver.create_message:17 of #, fuzzy -msgid "Federated Optim strategy." 
-msgstr "联邦优化策略。" +msgid "" +"Time-to-live for the round trip of this message, i.e., the time from " +"sending this message to receiving a reply. It specifies in seconds the " +"duration for which the message and its potential reply are considered " +"valid. If unset, the default TTL (i.e., `common.DEFAULT_TTL`) will be " +"used." +msgstr "此报文往返的有效时间,即从发送此报文到收到回复的时间。它规定了信息及其潜在回复被视为有效的持续时间。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.create_message:23 of #, fuzzy msgid "" -":py:obj:`FedProx `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" -msgstr "" -":py:obj:`FedProx `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" - -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedprox.FedProx:1 of -msgid "Federated Optimization strategy." -msgstr "联邦优化策略。" +"**message** -- A new `Message` instance with the specified content and " +"metadata." +msgstr "**message** -- 具有指定内容和元数据的新 \"信息 \"实例。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.pull_messages:3 of #, fuzzy msgid "" -":py:obj:`FedTrimmedAvg `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" -msgstr "" -":py:obj:`FedTrimmedAvg `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" - -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:1 of -msgid "Federated Averaging with Trimmed Mean [Dong Yin, et al., 2021]." -msgstr "带修剪均值的联邦平均法[Dong Yin 等,2021]。" +"This method is used to collect messages from the SuperLink that " +"correspond to a set of given message IDs." +msgstr "该方法用于从超级链接中收集与一组给定消息 ID 相对应的消息。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.pull_messages:6 of #, fuzzy -msgid "" -":py:obj:`FedXgbBagging `\\ " -"\\(\\[evaluate\\_function\\]\\)" -msgstr "" -":py:obj:`FedXgbBagging `\\ " -"\\(\\[evaluate\\_function\\]\\)" +msgid "An iterable of message IDs for which reply messages are to be retrieved." +msgstr "要检索回复信息的信息 ID 的可迭代项。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging:1 of +#: flwr.server.driver.driver.Driver.pull_messages:9 of #, fuzzy -msgid "Configurable FedXgbBagging strategy implementation." -msgstr "可配置的 FedXgbNAvg 策略实施。" +msgid "**messages** -- An iterable of messages received." +msgstr "**messages** -- 收到的信息迭代。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.push_messages:3 of #, fuzzy msgid "" -":py:obj:`FedXgbCyclic `\\ " -"\\(\\*\\*kwargs\\)" -msgstr "" -":py:obj:`FedXgbCyclic `\\ " -"\\(\\*\\*kwargs\\)" +"This method takes an iterable of messages and sends each message to the " +"node specified in `dst_node_id`." +msgstr "该方法接收一个可迭代的消息,并将每条消息发送到 `dst_node_id` 中指定的节点。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic:1 of +#: flwr.server.driver.driver.Driver.push_messages:6 +#: flwr.server.driver.driver.Driver.send_and_receive:7 of #, fuzzy -msgid "Configurable FedXgbCyclic strategy implementation." -msgstr "可配置的 FedAvg 策略实施。" +msgid "An iterable of messages to be sent." +msgstr "要发送的信息迭代。" -#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.driver.driver.Driver.push_messages:9 of #, fuzzy msgid "" -":py:obj:`FedXgbNnAvg `\\ \\(\\*args\\, " -"\\*\\*kwargs\\)" -msgstr "" +"**message_ids** -- An iterable of IDs for the messages that were sent, " +"which can be used to pull replies." 
+msgstr "**message_ids** -- 已发送信息的可迭代 ID,可用于提取回复信息。" + +#: flwr.server.driver.driver.Driver.send_and_receive:3 of +#, fuzzy +msgid "" +"This method sends a list of messages to their destination node IDs and " +"then waits for the replies. It continues to pull replies until either all" +" replies are received or the specified timeout duration is exceeded." +msgstr "该方法会向目标节点 ID 发送信息列表,然后等待回复。它会继续提取回复,直到收到所有回复或超过指定的超时时间。" + +#: flwr.server.driver.driver.Driver.send_and_receive:9 of +#, fuzzy +msgid "" +"The timeout duration in seconds. If specified, the method will wait for " +"replies for this duration. If `None`, there is no time limit and the " +"method will wait until replies for all messages are received." +msgstr "超时时间(秒)。如果指定,该方法将在此期限内等待回复。如果指定为 \"无\",则没有时间限制,该方法将等待直到收到所有信息的回复。" + +#: flwr.server.driver.driver.Driver.send_and_receive:14 of +#, fuzzy +msgid "**replies** -- An iterable of reply messages received from the SuperLink." +msgstr "**replies** -- 从超级链接收到的回复信息的迭代。" + +#: flwr.server.driver.driver.Driver.send_and_receive:18 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:53 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:60 +#: of +#, fuzzy +msgid "Notes" +msgstr "无" + +#: flwr.server.driver.driver.Driver.send_and_receive:19 of +#, fuzzy +msgid "" +"This method uses `push_messages` to send the messages and `pull_messages`" +" to collect the replies. If `timeout` is set, the method may not return " +"replies for all sent messages. A message remains valid until its TTL, " +"which is not affected by `timeout`." +msgstr "" +"该方法使用 `push_messages` 发送信息,并使用 `pull_messages` 收集回复。如果设置了 " +"`timeout`,该方法可能不会返回所有已发送消息的回复。消息在其 TTL 之前一直有效,不受 `timeout` 影响。" + +#: ../../source/ref-api/flwr.server.History.rst:2 +#, fuzzy +msgid "History" +msgstr "历史" + +#: flwr.server.history.History.add_loss_centralized:1::1 of +#, fuzzy +msgid "" +":py:obj:`add_loss_centralized " +"`\\ \\(server\\_round\\, " +"loss\\)" +msgstr "" +":py:obj:`add_loss_centralized " +"`\\ \\(server\\_round\\, " +"loss\\)" + +#: flwr.server.history.History.add_loss_centralized:1 +#: flwr.server.history.History.add_loss_centralized:1::1 of +#, fuzzy +msgid "Add one loss entry (from centralized evaluation)." +msgstr "集中评估" + +#: flwr.server.history.History.add_loss_centralized:1::1 of +#, fuzzy +msgid "" +":py:obj:`add_loss_distributed " +"`\\ \\(server\\_round\\, " +"loss\\)" +msgstr "" +":py:obj:`add_loss_distributed " +"`\\ \\(server\\_round\\, " +"loss\\)" + +#: flwr.server.history.History.add_loss_centralized:1::1 +#: flwr.server.history.History.add_loss_distributed:1 of +#, fuzzy +msgid "Add one loss entry (from distributed evaluation)." +msgstr "增加一个损失条目(来自分布式评估)。" + +#: flwr.server.history.History.add_loss_centralized:1::1 of +#, fuzzy +msgid "" +":py:obj:`add_metrics_centralized " +"`\\ \\(server\\_round\\, " +"metrics\\)" +msgstr "" +":py:obj:`add_metrics_centralized " +"`\\ \\(server\\_round\\, " +"metrics\\)" + +#: flwr.server.history.History.add_loss_centralized:1::1 +#: flwr.server.history.History.add_metrics_centralized:1 of +#, fuzzy +msgid "Add metrics entries (from centralized evaluation)." 
+msgstr "集中评估" + +#: flwr.server.history.History.add_loss_centralized:1::1 of +#, fuzzy +msgid "" +":py:obj:`add_metrics_distributed " +"`\\ \\(server\\_round\\, " +"metrics\\)" +msgstr "" +":py:obj:`add_metrics_distributed " +"`\\ \\(server\\_round\\, " +"metrics\\)" + +#: flwr.server.history.History.add_loss_centralized:1::1 +#: flwr.server.history.History.add_metrics_distributed:1 of +#, fuzzy +msgid "Add metrics entries (from distributed evaluation)." +msgstr "定制的集中/分布式评估" + +#: flwr.server.history.History.add_loss_centralized:1::1 of +#, fuzzy +msgid "" +":py:obj:`add_metrics_distributed_fit " +"`\\ \\(server\\_round\\," +" ...\\)" +msgstr "" +":py:obj:`add_metrics_distributed_fit " +"`\\ \\(server\\_round\\," +" ...\\)" + +#: flwr.server.history.History.add_loss_centralized:1::1 +#: flwr.server.history.History.add_metrics_distributed_fit:1 of +#, fuzzy +msgid "Add metrics entries (from distributed fit)." +msgstr "添加度量条目(来自分布式拟合)。" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:2 +#, fuzzy +msgid "LegacyContext" +msgstr "遗留上下文" + +#: flwr.server.compat.legacy_context.LegacyContext:1 of +#, fuzzy +msgid "Bases: :py:class:`~flwr.common.context.Context`" +msgstr "Bases: :py:class:`~flwr.common.context.Context`" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#, fuzzy +msgid ":py:obj:`config `\\" +msgstr "server.strategy.Strategy" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#, fuzzy +msgid ":py:obj:`strategy `\\" +msgstr "server.strategy.Strategy" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#, fuzzy +msgid ":py:obj:`client_manager `\\" +msgstr ":py:obj:`client_manager `\\" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#, fuzzy +msgid ":py:obj:`history `\\" +msgstr "server.strategy.Strategy" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#, fuzzy +msgid ":py:obj:`state `\\" +msgstr "server.strategy.Strategy" + +#: ../../source/ref-api/flwr.server.Server.rst:2 +msgid "Server" +msgstr "服务器" + +#: flwr.server.server.Server.client_manager:1::1 of +#, fuzzy +msgid ":py:obj:`client_manager `\\ \\(\\)" +msgstr ":py:obj:`client_manager `\\ \\(\\)" + +#: flwr.server.server.Server.client_manager:1 +#: flwr.server.server.Server.client_manager:1::1 of +#, fuzzy +msgid "Return ClientManager." +msgstr "返回客户端(本身)。" + +#: flwr.server.server.Server.client_manager:1::1 of +#, fuzzy +msgid "" +":py:obj:`disconnect_all_clients " +"`\\ \\(timeout\\)" +msgstr "" +":py:obj:`disconnect_all_clients " +"`\\ \\(timeout\\)" + +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.disconnect_all_clients:1 of +#, fuzzy +msgid "Send shutdown signal to all clients." +msgstr "向所有客户端发送关闭信号。" + +#: flwr.server.server.Server.client_manager:1::1 of +#, fuzzy +msgid "" +":py:obj:`evaluate_round `\\ " +"\\(server\\_round\\, timeout\\)" +msgstr "" +":py:obj:`evaluate_round `\\ " +"\\(server\\_round\\, timeout\\)" + +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.evaluate_round:1 of +#, fuzzy +msgid "Validate current global model on a number of clients." +msgstr "当前(全局)模型参数。" + +#: flwr.server.server.Server.client_manager:1::1 of +#, fuzzy +msgid ":py:obj:`fit `\\ \\(num\\_rounds\\, timeout\\)" +msgstr ":py:obj:`fit `\\ \\(num\\_rounds\\, timeout\\)" + +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.fit:1 of +#, fuzzy +msgid "Run federated averaging for a number of rounds." 
+msgstr "联邦平均动量策略。" + +#: flwr.server.server.Server.client_manager:1::1 of +#, fuzzy +msgid "" +":py:obj:`fit_round `\\ \\(server\\_round\\," +" timeout\\)" +msgstr "" +":py:obj:`fit_round `\\ \\(server\\_round\\," +" timeout\\)" + +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.fit_round:1 of +#, fuzzy +msgid "Perform a single round of federated averaging." +msgstr "本轮联邦学习。" + +#: flwr.server.server.Server.client_manager:1::1 of +#, fuzzy +msgid "" +":py:obj:`set_max_workers `\\ " +"\\(max\\_workers\\)" +msgstr "" +":py:obj:`set_max_workers `\\ " +"\\(max\\_workers\\)" + +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.set_max_workers:1 of +#, fuzzy +msgid "Set the max_workers used by ThreadPoolExecutor." +msgstr "设置 ThreadPoolExecutor 使用的最大工作器数。" + +#: flwr.server.server.Server.client_manager:1::1 of +#, fuzzy +msgid ":py:obj:`set_strategy `\\ \\(strategy\\)" +msgstr ":py:obj:`set_strategy `\\ \\(strategy\\)" + +#: flwr.server.server.Server.client_manager:1::1 +#: flwr.server.server.Server.set_strategy:1 of +#, fuzzy +msgid "Replace server strategy." +msgstr "server.strategy" + +#: ../../source/ref-api/flwr.server.ServerApp.rst:2 +#, fuzzy +msgid "ServerApp" +msgstr "服务器" + +#: flwr.server.server_app.ServerApp:5 of +#, fuzzy +msgid "Use the `ServerApp` with an existing `Strategy`:" +msgstr "使用现有策略" + +#: flwr.server.server_app.ServerApp:15 of +#, fuzzy +msgid "Use the `ServerApp` with a custom main function:" +msgstr "使用带有自定义主函数的 `ServerApp`:" + +#: flwr.server.server_app.ServerApp.main:1::1 of +#, fuzzy +msgid ":py:obj:`main `\\ \\(\\)" +msgstr "server.strategy.Strategy" + +#: flwr.server.server_app.ServerApp.main:1 +#: flwr.server.server_app.ServerApp.main:1::1 of +#, fuzzy +msgid "Return a decorator that registers the main fn with the server app." +msgstr "返回向服务器应用程序注册 main fn 的装饰器。" + +#: ../../source/ref-api/flwr.server.ServerConfig.rst:2 +#, fuzzy +msgid "ServerConfig" +msgstr "服务器" + +#: flwr.server.server_config.ServerConfig:3 of +#, fuzzy +msgid "" +"All attributes have default values which allows users to configure just " +"the ones they care about." 
+msgstr "所有属性都有默认值,用户只需配置自己关心的属性即可。" + +#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 +#, fuzzy +msgid ":py:obj:`num_rounds `\\" +msgstr ":py:obj:`num_rounds `\\" + +#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 +#, fuzzy +msgid ":py:obj:`round_timeout `\\" +msgstr ":py:obj:`round_timeout `\\" + +#: ../../source/ref-api/flwr.server.SimpleClientManager.rst:2 +#, fuzzy +msgid "SimpleClientManager" +msgstr "SimpleClientManager" + +#: flwr.server.client_manager.SimpleClientManager:1 of +#, fuzzy +msgid "Bases: :py:class:`~flwr.server.client_manager.ClientManager`" +msgstr "Bases: :py:class:`~flwr.server.client_manager.ClientManager`" + +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +#, fuzzy +msgid ":py:obj:`all `\\ \\(\\)" +msgstr ":py:obj:`all `\\ \\(\\)" + +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +#, fuzzy +msgid "" +":py:obj:`num_available `\\" +" \\(\\)" +msgstr "" +":py:obj:`num_available `\\" +" \\(\\)" + +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +#, fuzzy +msgid "" +":py:obj:`register `\\ " +"\\(client\\)" +msgstr "" +":py:obj:`register `\\ " +"\\(client\\)" + +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +#, fuzzy +msgid "" +":py:obj:`sample `\\ " +"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" +msgstr "" +":py:obj:`sample `\\ " +"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" + +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +#, fuzzy +msgid "" +":py:obj:`unregister `\\ " +"\\(client\\)" +msgstr "" +":py:obj:`unregister `\\ " +"\\(client\\)" + +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +#, fuzzy +msgid "" +":py:obj:`wait_for `\\ " +"\\(num\\_clients\\[\\, timeout\\]\\)" +msgstr "" +":py:obj:`wait_for `\\ " +"\\(num\\_clients\\[\\, timeout\\]\\)" + +#: flwr.server.client_manager.SimpleClientManager.wait_for:3 of +#, fuzzy +msgid "" +"Blocks until the requested number of clients is available or until a " +"timeout is reached. Current timeout default: 1 day." +msgstr "阻塞,直到请求的客户端数量可用或达到超时为止。当前超时默认值:1 天。" + +#: flwr.server.client_manager.SimpleClientManager.wait_for:6 of +#, fuzzy +msgid "The number of clients to wait for." +msgstr "需要等待的客户数量。" + +#: flwr.server.client_manager.SimpleClientManager.wait_for:8 of +#, fuzzy +msgid "The time in seconds to wait for, defaults to 86400 (24h)." +msgstr "以秒为单位的等待时间,默认为 86400(24 小时)。" + +#: flwr.server.client_manager.SimpleClientManager.wait_for:11 of +#, fuzzy +msgid "**success**" +msgstr "**success**" + +#: ../../source/ref-api/flwr.server.run_driver_api.rst:2 +#, fuzzy +msgid "run\\_driver\\_api" +msgstr "flower-driver-api" + +#: ../../source/ref-api/flwr.server.run_fleet_api.rst:2 +#, fuzzy +msgid "run\\_fleet\\_api" +msgstr "run\\_fleet\\_api" + +#: ../../source/ref-api/flwr.server.run_server_app.rst:2 +#, fuzzy +msgid "run\\_server\\_app" +msgstr "run\\_server\\_app" + +#: ../../source/ref-api/flwr.server.run_superlink.rst:2 +#, fuzzy +msgid "run\\_superlink" +msgstr "flower-superlink" + +#: ../../source/ref-api/flwr.server.start_server.rst:2 +#, fuzzy +msgid "start\\_server" +msgstr "server.start_server" + +#: flwr.server.app.start_server:3 of +msgid "The IPv4 or IPv6 address of the server. Defaults to `\"[::]:8080\"`." +msgstr "服务器的 IPv4 或 IPv6 地址。默认为 `\"[::]:8080\"。" + +#: flwr.server.app.start_server:5 of +msgid "" +"A server implementation, either `flwr.server.Server` or a subclass " +"thereof. If no instance is provided, then `start_server` will create one." 
+msgstr "服务器实现,可以是 `flwr.server.Server` 或其子类。如果没有提供实例,`start_server` 将创建一个。" + +#: flwr.server.app.start_server:9 flwr.simulation.app.start_simulation:28 of +msgid "" +"Currently supported values are `num_rounds` (int, default: 1) and " +"`round_timeout` in seconds (float, default: None)." +msgstr "目前支持的值有:`num_rounds`(int,默认值:1)和以秒为单位的`round_timeout`(float,默认值:无)。" + +#: flwr.server.app.start_server:12 of +msgid "" +"An implementation of the abstract base class " +"`flwr.server.strategy.Strategy`. If no strategy is provided, then " +"`start_server` will use `flwr.server.strategy.FedAvg`." +msgstr "" +"抽象基类 `flwr.server.strategy.Strategy` 的实现。如果没有提供策略,`start_server` 将使用 " +"`flwr.server.strategy.FedAvg`。" + +#: flwr.server.app.start_server:16 of +msgid "" +"An implementation of the abstract base class `flwr.server.ClientManager`." +" If no implementation is provided, then `start_server` will use " +"`flwr.server.client_manager.SimpleClientManager`." +msgstr "" +"抽象基类 `flwr.server.ClientManager` 的实现。如果没有提供实现,`start_server` 将使用 " +"`flwr.server.client_manager.SimpleClientManager`。" + +#: flwr.server.app.start_server:21 of +msgid "" +"The maximum length of gRPC messages that can be exchanged with the Flower" +" clients. The default should be sufficient for most models. Users who " +"train very large models might need to increase this value. Note that the " +"Flower clients need to be started with the same value (see " +"`flwr.client.start_client`), otherwise clients will not know about the " +"increased limit and block larger messages." +msgstr "" +"可与 Flower 客户端交换的 gRPC 消息的最大长度:默认值对大多数模型都足够了。训练超大模型的用户可能需要增加该值。请注意,Flower " +"客户端需要以相同的值启动(请参阅 `flwr.client.start_client`),否则客户端将不知道已增加的限制并阻止更大的消息。" + +#: flwr.server.app.start_server:28 of +msgid "" +"Tuple containing root certificate, server certificate, and private key to" +" start a secure SSL-enabled server. The tuple is expected to have three " +"bytes elements in the following order: * CA certificate. * " +"server certificate. * server private key." +msgstr "" +"包含根证书、服务器证书和私钥的元组,用于启动启用 SSL 的安全服务器。元组应按以下顺序包含三个字节元素: * CA 证书,* 服务器证书, * " +"服务器私钥。" + +#: flwr.server.app.start_server:28 of +msgid "" +"Tuple containing root certificate, server certificate, and private key to" +" start a secure SSL-enabled server. The tuple is expected to have three " +"bytes elements in the following order:" +msgstr "包含根证书、服务器证书和私钥的元组,用于启动启用 SSL 的安全服务器。元组应按以下顺序包含三个字节元素:" + +#: flwr.server.app.start_server:32 of +msgid "CA certificate." +msgstr "CA 证书。" + +#: flwr.server.app.start_server:33 of +msgid "server certificate." +msgstr "服务器证书。" + +#: flwr.server.app.start_server:34 of +msgid "server private key." +msgstr "服务器私人密钥。" + +#: flwr.server.app.start_server:37 of +msgid "**hist** -- Object containing training and evaluation metrics." +msgstr "**hist** -- 包含训练和评估指标的对象。" + +#: flwr.server.app.start_server:42 of +msgid "Starting an insecure server:" +msgstr "启动不安全的服务器:" + +#: flwr.server.app.start_server:46 of +msgid "Starting an SSL-enabled server:" +msgstr "启动支持 SSL 的服务器:" + +#: ../../source/ref-api/flwr.server.strategy.rst:2 +#, fuzzy +msgid "strategy" +msgstr "Krum 策略。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`Bulyan `\\ \\(\\*\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\)" +msgstr "" +":py:obj:`Bulyan `\\ \\(\\*\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.bulyan.Bulyan:1 of +msgid "Bulyan strategy." 
+msgstr "Bulyan 策略。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`DPFedAvgAdaptive `\\ " +"\\(strategy\\, num\\_sampled\\_clients\\)" +msgstr "" +":py:obj:`DPFedAvgAdaptive `\\ " +"\\(strategy\\, num\\_sampled\\_clients\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of +msgid "Wrapper for configuring a Strategy for DP with Adaptive Clipping." +msgstr "用于配置具有自适应剪切功能的 DP 策略的包装器。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`DPFedAvgFixed `\\ " +"\\(strategy\\, num\\_sampled\\_clients\\, ...\\)" +msgstr "" +":py:obj:`DPFedAvgFixed `\\ " +"\\(strategy\\, num\\_sampled\\_clients\\, ...\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 of +msgid "Wrapper for configuring a Strategy for DP with Fixed Clipping." +msgstr "封装器,用于为具有固定剪切功能的 DP 配置策略。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`DifferentialPrivacyClientSideAdaptiveClipping " +"`\\ " +"\\(...\\)" +msgstr "" +":py:obj:`DifferentialPrivacyClientSideAdaptiveClipping " +"`\\ " +"\\(...\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:1 +#: of +#, fuzzy +msgid "Strategy wrapper for central DP with client-side adaptive clipping." +msgstr "用于配置具有自适应剪切功能的 DP 策略的包装器。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`DifferentialPrivacyServerSideAdaptiveClipping " +"`\\ " +"\\(...\\)" +msgstr "" +":py:obj:`DifferentialPrivacyServerSideAdaptiveClipping " +"`\\ " +"\\(...\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:1 +#: of +#, fuzzy +msgid "Strategy wrapper for central DP with server-side adaptive clipping." +msgstr "用于配置具有自适应剪切功能的 DP 策略的包装器。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`DifferentialPrivacyClientSideFixedClipping " +"`\\ " +"\\(...\\)" +msgstr "" +":py:obj:`DifferentialPrivacyClientSideFixedClipping " +"`\\ " +"\\(...\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:1 +#: of +#, fuzzy +msgid "Strategy wrapper for central DP with client-side fixed clipping." +msgstr "封装器,用于为具有固定剪切功能的 DP 配置策略。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`DifferentialPrivacyServerSideFixedClipping " +"`\\ " +"\\(...\\)" +msgstr "" +":py:obj:`DifferentialPrivacyServerSideFixedClipping " +"`\\ " +"\\(...\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:1 +#: of +#, fuzzy +msgid "Strategy wrapper for central DP with server-side fixed clipping." +msgstr "封装器,用于为具有固定剪切功能的 DP 配置策略。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`FedAdagrad `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" +msgstr "" +":py:obj:`FedAdagrad `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedadagrad.FedAdagrad:1 of +msgid "FedAdagrad strategy - Adaptive Federated Optimization using Adagrad." 
+msgstr "FedAdagrad 策略 - 使用 Adagrad 进行自适应联合优化。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`FedAdam `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" +msgstr "" +":py:obj:`FedAdam `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedadam.FedAdam:1 of +msgid "FedAdam - Adaptive Federated Optimization using Adam." +msgstr "FedAdam - 使用 Adam 进行自适应联合优化。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`FedAvg `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" +msgstr "" +":py:obj:`FedAvg `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedavg.FedAvg:1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:1 of +msgid "Federated Averaging strategy." +msgstr "联邦平均策略。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`FedAvgAndroid `\\ " +"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" +msgstr "" +":py:obj:`FedAvgAndroid `\\ " +"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`FedAvgM `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" +msgstr "" +":py:obj:`FedAvgM `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedavgm.FedAvgM:1 of +msgid "Federated Averaging with Momentum strategy." +msgstr "联邦平均动量策略。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`FedMedian `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" +msgstr "" +":py:obj:`FedMedian `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedmedian.FedMedian:1 of +#, fuzzy +msgid "Configurable FedMedian strategy implementation." +msgstr "可配置的 FedAvg 策略实施。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`FedOpt `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" +msgstr "" +":py:obj:`FedOpt `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedopt.FedOpt:1 of +#, fuzzy +msgid "Federated Optim strategy." +msgstr "联邦优化策略。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`FedProx `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" +msgstr "" +":py:obj:`FedProx `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedprox.FedProx:1 of +msgid "Federated Optimization strategy." +msgstr "联邦优化策略。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`FedTrimmedAvg `\\ " +"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" +msgstr "" +":py:obj:`FedTrimmedAvg `\\ " +"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:1 of +msgid "Federated Averaging with Trimmed Mean [Dong Yin, et al., 2021]." 
+msgstr "带修剪均值的联邦平均法[Dong Yin 等,2021]。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`FedXgbBagging `\\ " +"\\(\\[evaluate\\_function\\]\\)" +msgstr "" +":py:obj:`FedXgbBagging `\\ " +"\\(\\[evaluate\\_function\\]\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging:1 of +#, fuzzy +msgid "Configurable FedXgbBagging strategy implementation." +msgstr "可配置的 FedXgbNAvg 策略实施。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`FedXgbCyclic `\\ " +"\\(\\*\\*kwargs\\)" +msgstr "" +":py:obj:`FedXgbCyclic `\\ " +"\\(\\*\\*kwargs\\)" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic:1 of +#, fuzzy +msgid "Configurable FedXgbCyclic strategy implementation." +msgstr "可配置的 FedAvg 策略实施。" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#, fuzzy +msgid "" +":py:obj:`FedXgbNnAvg `\\ \\(\\*args\\, " +"\\*\\*kwargs\\)" +msgstr "" ":py:obj:`FedXgbNnAvg `\\ \\(\\*args\\, " "\\*\\*kwargs\\)" @@ -11553,11 +12267,13 @@ msgstr "FedYogi [Reddi 等人,2020] 策略。" #: ../../source/ref-api/flwr.server.strategy.rst:45::1 #, fuzzy msgid "" -":py:obj:`FaultTolerantFedAvg `\\ " -"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" +":py:obj:`FaultTolerantFedAvg " +"`\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -":py:obj:`FaultTolerantFedAvg `\\ " -"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" +":py:obj:`FaultTolerantFedAvg " +"`\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" #: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg:1 of @@ -11567,11 +12283,11 @@ msgstr "可配置的容错 FedAvg 策略实施。" #: ../../source/ref-api/flwr.server.strategy.rst:45::1 #, fuzzy msgid "" -":py:obj:`Krum `\\ \\(\\*\\[\\, fraction\\_fit\\, " -"fraction\\_evaluate\\, ...\\]\\)" +":py:obj:`Krum `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" msgstr "" -":py:obj:`Krum `\\ \\(\\*\\[\\, fraction\\_fit\\, " -"fraction\\_evaluate\\, ...\\]\\)" +":py:obj:`Krum `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" #: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.krum.Krum:1 of @@ -11582,11 +12298,11 @@ msgstr "FedYogi [Reddi 等人,2020] 策略。" #: ../../source/ref-api/flwr.server.strategy.rst:45::1 #, fuzzy msgid "" -":py:obj:`QFedAvg `\\ \\(\\*\\[\\, q\\_param\\, " -"qffl\\_learning\\_rate\\, ...\\]\\)" +":py:obj:`QFedAvg `\\ \\(\\*\\[\\, " +"q\\_param\\, qffl\\_learning\\_rate\\, ...\\]\\)" msgstr "" -":py:obj:`QFedAvg `\\ \\(\\*\\[\\, q\\_param\\, " -"qffl\\_learning\\_rate\\, ...\\]\\)" +":py:obj:`QFedAvg `\\ \\(\\*\\[\\, " +"q\\_param\\, qffl\\_learning\\_rate\\, ...\\]\\)" #: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.qfedavg.QFedAvg:1 of @@ -11756,8 +12472,8 @@ msgstr "初始全局模型参数。" #: flwr.server.strategy.bulyan.Bulyan:27 of msgid "" -"Byzantine resilient aggregation rule that is used as the first step of the " -"Bulyan (e.g., Krum)" +"Byzantine resilient aggregation rule that is used as the first step of " +"the Bulyan (e.g., Krum)" msgstr "Byzantine弹性聚合规则,用作 Bulyan 的第一步(如 Krum)" #: flwr.server.strategy.bulyan.Bulyan:29 of @@ -11767,11 +12483,13 @@ msgstr "第一聚类规则的参数" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\, " 
+"results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 @@ -11801,11 +12519,13 @@ msgstr "使用 Bulyan 技术汇总拟合结果。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.configure_evaluate:1 @@ -11890,11 +12610,13 @@ msgstr "使用评估函数评估模型参数。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 @@ -11911,11 +12633,13 @@ msgstr "初始化全局模型参数。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 @@ -11932,11 +12656,11 @@ msgstr "使用部分可用客户进行评估。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients `\\" +" \\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients `\\" +" \\(num\\_available\\_clients\\)" #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 @@ -11969,11 +12693,13 @@ msgstr "该类已被弃用,将在以后的版本中删除。" #: of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1 #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 @@ -11993,11 +12719,13 @@ msgstr "使用给定的策略汇总评估损失。" #: of #, fuzzy msgid "" -":py:obj:`aggregate_fit `\\ \\(server\\_round\\, results\\, failures\\)" 
+":py:obj:`aggregate_fit " +"`\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -":py:obj:`aggregate_fit `\\ \\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\ " +"\\(server\\_round\\, results\\, failures\\)" #: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive.aggregate_fit:1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 @@ -12009,11 +12737,13 @@ msgstr "汇总 DPFedAvgFixed 中的训练结果并更新片段标准。" #: of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:1 of @@ -12024,11 +12754,13 @@ msgstr "使用指定策略配置下一轮评估。" #: of #, fuzzy msgid "" -":py:obj:`configure_fit `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_fit `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 #: of @@ -12050,19 +12782,20 @@ msgstr "" #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.evaluate:1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.evaluate:1 of -msgid "" -"Evaluate model parameters using an evaluation function from the strategy." +msgid "Evaluate model parameters using an evaluation function from the strategy." msgstr "使用策略中的评估函数评估模型参数。" #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.initialize_parameters:1 @@ -12100,14 +12833,13 @@ msgstr "客户端管理器,用于管理当前连接的所有客户端。" msgid "" "**evaluate_configuration** -- A list of tuples. Each tuple in the list " "identifies a `ClientProxy` and the `EvaluateIns` for this particular " -"`ClientProxy`. If a particular `ClientProxy` is not included in this list, " -"it means that this `ClientProxy` will not participate in the next round of " -"federated evaluation." +"`ClientProxy`. If a particular `ClientProxy` is not included in this " +"list, it means that this `ClientProxy` will not participate in the next " +"round of federated evaluation." 
msgstr "" -"**evaluate_configuration** -- 一个元组列表。列表中的每个元组都标识了一个" -"`ClientProxy`和该特定`ClientProxy`的`EvaluateIns`。如果某个特定的 " -"`ClientProxy` 未包含在此列表中,则表示该 `ClientProxy` 将不参与下一轮联合评" -"估。" +"**evaluate_configuration** -- " +"一个元组列表。列表中的每个元组都标识了一个`ClientProxy`和该特定`ClientProxy`的`EvaluateIns`。如果某个特定的" +" `ClientProxy` 未包含在此列表中,则表示该 `ClientProxy` 将不参与下一轮联合评估。" #: ../../source/ref-api/flwr.server.strategy.DPFedAvgFixed.rst:2 msgid "DPFedAvgFixed" @@ -12128,20 +12860,24 @@ msgstr "Bases: :py:class:`~flwr.server.strategy.strategy.Strategy`" #: of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " "\\(server\\_round\\, results\\, failures\\)" #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 @@ -12153,26 +12889,31 @@ msgstr "使用非加权汇总法汇总训练结果。" #: of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:1 of msgid "" -"Configure the next round of training incorporating Differential Privacy (DP)." +"Configure the next round of training incorporating Differential Privacy " +"(DP)." msgstr "配置包含差分隐私 (DP) 的下一轮训练。" #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 @@ -12189,29 +12930,31 @@ msgstr "" #: of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:3 of msgid "" -"Configuration of the next training round includes information related to DP, " -"such as clip norm and noise stddev." +"Configuration of the next training round includes information related to " +"DP, such as clip norm and noise stddev." msgstr "下一轮训练的配置包括与 DP 相关的信息,如片段规范和噪声 stddev。" #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:13 #: flwr.server.strategy.strategy.Strategy.configure_fit:10 of msgid "" -"**fit_configuration** -- A list of tuples. Each tuple in the list identifies " -"a `ClientProxy` and the `FitIns` for this particular `ClientProxy`. 
If a " -"particular `ClientProxy` is not included in this list, it means that this " -"`ClientProxy` will not participate in the next round of federated learning." +"**fit_configuration** -- A list of tuples. Each tuple in the list " +"identifies a `ClientProxy` and the `FitIns` for this particular " +"`ClientProxy`. If a particular `ClientProxy` is not included in this " +"list, it means that this `ClientProxy` will not participate in the next " +"round of federated learning." msgstr "" -"**fit_configuration** -- 一个元组列表。列表中的每个元组都标识了一个" -"`ClientProxy`和该特定`ClientProxy`的`FitIns'。如果某个特定的`ClientProxy`不在" -"此列表中,则表示该`ClientProxy`将不参加下一轮联合学习。" +"**fit_configuration** -- " +"一个元组列表。列表中的每个元组都标识了一个`ClientProxy`和该特定`ClientProxy`的`FitIns'。如果某个特定的`ClientProxy`不在此列表中,则表示该`ClientProxy`将不参加下一轮联合学习。" #: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideAdaptiveClipping.rst:2 #, fuzzy @@ -12230,12 +12973,13 @@ msgstr "在客户端使用 \"adaptiveclipping_mod \"修改器。" msgid "" "In comparison to `DifferentialPrivacyServerSideAdaptiveClipping`, which " "performs clipping on the server-side, " -"`DifferentialPrivacyClientSideAdaptiveClipping` expects clipping to happen " -"on the client-side, usually by using the built-in `adaptiveclipping_mod`." +"`DifferentialPrivacyClientSideAdaptiveClipping` expects clipping to " +"happen on the client-side, usually by using the built-in " +"`adaptiveclipping_mod`." msgstr "" -"与在服务器端执行剪切的 `DifferentialPrivacyServerSideAdaptiveClipping` 相比," -"`DifferentialPrivacyClientSideAdaptiveClipping` 希望在客户端进行剪切,通常使" -"用内置的 `adaptiveclipping_mod`。" +"与在服务器端执行剪切的 `DifferentialPrivacyServerSideAdaptiveClipping` " +"相比,`DifferentialPrivacyClientSideAdaptiveClipping` 希望在客户端进行剪切,通常使用内置的 " +"`adaptiveclipping_mod`。" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:10 #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:3 @@ -12275,8 +13019,7 @@ msgstr "剪切规范的初始值。默认为 0.1。安德鲁等人建议设置 #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:12 #: of #, fuzzy -msgid "" -"The desired quantile of updates which should be clipped. Defaults to 0.5." +msgid "The desired quantile of updates which should be clipped. Defaults to 0.5." msgstr "需要剪切的更新量化值。默认为 0.5。" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:21 @@ -12284,19 +13027,17 @@ msgstr "需要剪切的更新量化值。默认为 0.5。" #: of #, fuzzy msgid "" -"The learning rate for the clipping norm adaptation. Defaults to 0.2. Andrew " -"et al. recommends to set to 0.2." +"The learning rate for the clipping norm adaptation. Defaults to 0.2. " +"Andrew et al. recommends to set to 0.2." msgstr "剪切规范适应的学习率。默认为 0.2。安德鲁等人建议设置为 0.2。" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:24 #: of #, fuzzy msgid "" -"The stddev of the noise added to the count of updates currently below the " -"estimate. Andrew et al. recommends to set to `expected_num_records/20`" -msgstr "" -"添加到当前低于估计值的更新计数中的噪声的 stddev。安德鲁等人建议设置为 " -"\"expected_num_records/20" +"The stddev of the noise added to the count of updates currently below the" +" estimate. Andrew et al. 
recommends to set to `expected_num_records/20`" +msgstr "添加到当前低于估计值的更新计数中的噪声的 stddev。安德鲁等人建议设置为 \"expected_num_records/20" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:30 #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:23 @@ -12311,10 +13052,9 @@ msgstr "server.strategy" #: of #, fuzzy msgid "" -"Wrap the strategy with the `DifferentialPrivacyClientSideAdaptiveClipping` " -"wrapper:" -msgstr "" -"用 \"DifferentialPrivacyClientSideAdaptiveClipping \"包装器对策略进行包装:" +"Wrap the strategy with the " +"`DifferentialPrivacyClientSideAdaptiveClipping` wrapper:" +msgstr "用 \"DifferentialPrivacyClientSideAdaptiveClipping \"包装器对策略进行包装:" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:40 #: of @@ -12326,25 +13066,25 @@ msgstr "在客户端,将 \"adaptiveclipping_mod \"添加到客户端模块中 #: of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ " -"\\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ " -"\\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_fit:1 @@ -12359,49 +13099,49 @@ msgstr "汇总 DPFedAvgFixed 中的训练结果并更新片段标准。" #: of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ " -"\\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" msgstr "" 
-":py:obj:`initialize_parameters `\\ " -"\\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" #: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideFixedClipping.rst:2 #, fuzzy @@ -12420,22 +13160,21 @@ msgstr "在客户端使用 `fixedclipping_mod` 修改器。" msgid "" "In comparison to `DifferentialPrivacyServerSideFixedClipping`, which " "performs clipping on the server-side, " -"`DifferentialPrivacyClientSideFixedClipping` expects clipping to happen on " -"the client-side, usually by using the built-in `fixedclipping_mod`." +"`DifferentialPrivacyClientSideFixedClipping` expects clipping to happen " +"on the client-side, usually by using the built-in `fixedclipping_mod`." msgstr "" -"与在服务器端执行剪切的 \"DifferentialPrivacyServerSideFixedClipping \"相" -"比,\"DifferentialPrivacyClientSideFixedClipping \"希望在客户端进行剪切,通常" -"是使用内置的 \"fixedclipping_mod\"。" +"与在服务器端执行剪切的 \"DifferentialPrivacyServerSideFixedClipping " +"\"相比,\"DifferentialPrivacyClientSideFixedClipping \"希望在客户端进行剪切,通常是使用内置的 " +"\"fixedclipping_mod\"。" #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:12 #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:5 #: of #, fuzzy msgid "" -"The noise multiplier for the Gaussian mechanism for model updates. A value " -"of 1.0 or higher is recommended for strong privacy." -msgstr "" -"模型更新高斯机制的噪声乘数。建议使用 1.0 或更高的值,以获得较强的隐私性。" +"The noise multiplier for the Gaussian mechanism for model updates. A " +"value of 1.0 or higher is recommended for strong privacy." +msgstr "模型更新高斯机制的噪声乘数。建议使用 1.0 或更高的值,以获得较强的隐私性。" #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:15 #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:8 @@ -12462,25 +13201,25 @@ msgstr "在客户端,将 \"fixedclipping_mod \"添加到客户端模块中:" #: of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ " -"\\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ " -"\\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_fit:1 @@ -12493,49 +13232,49 @@ msgstr "然后将汇总结果序列化:" #: of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" 
+":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`evaluate `\\ \\(server\\_round\\, " -"parameters\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -":py:obj:`evaluate `\\ \\(server\\_round\\, " -"parameters\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ " -"\\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ " -"\\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" #: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideAdaptiveClipping.rst:2 #, fuzzy @@ -12546,11 +13285,10 @@ msgstr "DifferentialPrivacyServerSideAdaptiveClipping" #: of #, fuzzy msgid "" -"The standard deviation of the noise added to the count of updates below the " -"estimate. Andrew et al. recommends to set to `expected_num_records/20`" -msgstr "" -"添加到低于估计值的更新计数中的噪声标准偏差。安德鲁等人建议设置为 " -"\"expected_num_records/20" +"The standard deviation of the noise added to the count of updates below " +"the estimate. Andrew et al. recommends to set to " +"`expected_num_records/20`" +msgstr "添加到低于估计值的更新计数中的噪声标准偏差。安德鲁等人建议设置为 \"expected_num_records/20" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:27 #: of @@ -12564,73 +13302,73 @@ msgstr "用 DifferentialPrivacyServerSideAdaptiveClipping 封装器封装策略" #: of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ " -"\\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ " -"\\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" 
-":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" #: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ " -"\\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ " -"\\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" #: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideFixedClipping.rst:2 #, fuzzy @@ -12641,32 +13379,33 @@ msgstr "差分隐私" #: of #, fuzzy msgid "" -"Wrap the strategy with the DifferentialPrivacyServerSideFixedClipping wrapper" +"Wrap the strategy with the DifferentialPrivacyServerSideFixedClipping " +"wrapper" msgstr "用 DifferentialPrivacyServerSideFixedClipping 封装器封装策略" #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ " -"\\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ " -"\\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_fit:1 @@ -12679,49 +13418,49 @@ msgstr "计算更新、剪辑并将其传递给聚合。" #: of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`evaluate `\\ 
\\(server\\_round\\, " -"parameters\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -":py:obj:`evaluate `\\ \\(server\\_round\\, " -"parameters\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ " -"\\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ " -"\\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" #: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_fit:3 #: of @@ -12738,21 +13477,25 @@ msgstr "server.strategy.FaultTolerantFedAvg" #: of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`aggregate_fit `\\ \\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -":py:obj:`aggregate_fit `\\ \\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\ " +"\\(server\\_round\\, results\\, failures\\)" #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_fit:1 @@ -12774,21 +13517,25 @@ msgstr "使用加权平均法汇总拟合结果。" #: of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`configure_fit `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_fit `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: of @@ -12804,34 +13551,40 @@ msgstr "" #: of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: 
flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: ../../source/ref-api/flwr.server.strategy.FedAdagrad.rst:2 -#: ../../source/ref-changelog.md:839 +#: ../../source/ref-changelog.md:905 msgid "FedAdagrad" msgstr "FedAdagrad" @@ -12882,38 +13635,42 @@ msgstr "控制算法的适应度。默认为 1e-9。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit `\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit `\\" +" \\(server\\_round\\, results\\, failures\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit `\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit `\\" +" \\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -12927,29 +13684,35 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: ../../source/ref-api/flwr.server.strategy.FedAdam.rst:2 #, fuzzy @@ -12969,11 
+13732,13 @@ msgstr "第二动量参数。默认为 0.99。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -12987,11 +13752,13 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -13014,28 +13781,34 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ " +":py:obj:`num_fit_clients " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ " +":py:obj:`num_fit_clients " +"`\\ " "\\(num\\_available\\_clients\\)" #: ../../source/ref-api/flwr.server.strategy.FedAvg.rst:2 @@ -13052,8 +13825,8 @@ msgstr "实施基于 https://arxiv.org/abs/1602.05629" #: of msgid "" "Fraction of clients used during training. In case `min_fit_clients` is " -"larger than `fraction_fit * available_clients`, `min_fit_clients` will still " -"be sampled. Defaults to 1.0." +"larger than `fraction_fit * available_clients`, `min_fit_clients` will " +"still be sampled. Defaults to 1.0." msgstr "" "训练过程中使用的客户端比例。如果 `min_fit_clients` 大于 `fraction_fit * " "available_clients`,则仍会对 `min_fit_clients` 进行采样。默认为 1.0。" @@ -13061,13 +13834,13 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg:9 flwr.server.strategy.fedprox.FedProx:41 #: of msgid "" -"Fraction of clients used during validation. In case `min_evaluate_clients` " -"is larger than `fraction_evaluate * available_clients`, " -"`min_evaluate_clients` will still be sampled. Defaults to 1.0." +"Fraction of clients used during validation. In case " +"`min_evaluate_clients` is larger than `fraction_evaluate * " +"available_clients`, `min_evaluate_clients` will still be sampled. " +"Defaults to 1.0." 
msgstr "" -"验证过程中使用的客户端的比例。如果 `min_evaluate_clients` 大于 " -"`fraction_evaluate * available_clients`,则仍会对 `min_evaluate_clients` 进行" -"采样。默认为 1.0。" +"验证过程中使用的客户端的比例。如果 `min_evaluate_clients` 大于 `fraction_evaluate * " +"available_clients`,则仍会对 `min_evaluate_clients` 进行采样。默认为 1.0。" #: flwr.server.strategy.fedavg.FedAvg:33 of #, fuzzy @@ -13077,11 +13850,13 @@ msgstr "启用(真)或禁用(假)模型更新的就地聚合。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -13095,11 +13870,13 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -13122,29 +13899,33 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients `\\" +" \\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients `\\" +" \\(num\\_available\\_clients\\)" #: ../../source/ref-api/flwr.server.strategy.FedAvgAndroid.rst:2 #, fuzzy @@ -13155,31 +13936,35 @@ msgstr "DPFedAvgAdaptive" #: of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " "\\(server\\_round\\, results\\, failures\\)" #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`bytes_to_ndarray `\\ \\(tensor\\)" +":py:obj:`bytes_to_ndarray " +"`\\ \\(tensor\\)" msgstr 
"" -":py:obj:`bytes_to_ndarray `\\ \\(tensor\\)" +":py:obj:`bytes_to_ndarray " +"`\\ \\(tensor\\)" #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid.bytes_to_ndarray:1 of @@ -13191,20 +13976,24 @@ msgstr "从字节反序列化 NumPy ndarray。" #: of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 @@ -13221,21 +14010,23 @@ msgstr "" #: of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`ndarray_to_bytes `\\ \\(ndarray\\)" +":py:obj:`ndarray_to_bytes " +"`\\ \\(ndarray\\)" msgstr "" -":py:obj:`ndarray_to_bytes `\\ \\(ndarray\\)" +":py:obj:`ndarray_to_bytes " +"`\\ \\(ndarray\\)" #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid.ndarray_to_bytes:1 of @@ -13247,41 +14038,49 @@ msgstr "将 NumPy ndarray 序列化为字节。" #: of #, fuzzy msgid "" -":py:obj:`ndarrays_to_parameters `\\ \\(ndarrays\\)" +":py:obj:`ndarrays_to_parameters " +"`\\ " +"\\(ndarrays\\)" msgstr "" -":py:obj:`ndarrays_to_parameters `\\ \\(ndarrays\\)" +":py:obj:`ndarrays_to_parameters " +"`\\ " +"\\(ndarrays\\)" #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`parameters_to_ndarrays `\\ \\(parameters\\)" +":py:obj:`parameters_to_ndarrays " +"`\\ " +"\\(parameters\\)" msgstr "" -":py:obj:`parameters_to_ndarrays `\\ \\(parameters\\)" +":py:obj:`parameters_to_ndarrays " +"`\\ " +"\\(parameters\\)" #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid.parameters_to_ndarrays:1 @@ -13302,7 +14101,8 @@ msgstr "实施基于 
https://arxiv.org/pdf/1909.06335.pdf" #: flwr.server.strategy.fedavgm.FedAvgM:25 of msgid "" -"Server-side learning rate used in server-side optimization. Defaults to 1.0." +"Server-side learning rate used in server-side optimization. Defaults to " +"1.0." msgstr "服务器端优化中使用的服务器端学习率。默认为 1.0。" #: flwr.server.strategy.fedavgm.FedAvgM:28 of @@ -13312,11 +14112,13 @@ msgstr "用于 FedAvgM 的服务器端动量因子。默认为 0.0。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -13330,11 +14132,13 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -13357,28 +14161,34 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ " +":py:obj:`num_fit_clients " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ " +":py:obj:`num_fit_clients " +"`\\ " "\\(num\\_available\\_clients\\)" #: ../../source/ref-api/flwr.server.strategy.FedMedian.rst:2 @@ -13389,11 +14199,13 @@ msgstr "联邦医保" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -13412,11 +14224,13 @@ msgstr "使用中位数汇总拟合结果。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" #: 
flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -13439,28 +14253,34 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ " +":py:obj:`num_fit_clients " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ " +":py:obj:`num_fit_clients " +"`\\ " "\\(num\\_available\\_clients\\)" #: ../../source/ref-api/flwr.server.strategy.FedOpt.rst:2 @@ -13479,11 +14299,13 @@ msgstr "第二动量参数。默认为 0.0。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -13497,11 +14319,13 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -13524,29 +14348,33 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients `\\" +" \\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients `\\" +" \\(num\\_available\\_clients\\)" #: ../../source/ref-api/flwr.server.strategy.FedProx.rst:2 #, fuzzy @@ -13559,12 +14387,10 @@ msgstr "实施基于 
https://arxiv.org/abs/1812.06127" #: flwr.server.strategy.fedprox.FedProx:5 of msgid "" -"The strategy in itself will not be different than FedAvg, the client needs " -"to be adjusted. A proximal term needs to be added to the loss function " -"during the training:" -msgstr "" -"策略本身与 FedAvg 并无不同,客户端需要进行调整。在训练过程中,需要在损失函数" -"中添加一个近端项:" +"The strategy in itself will not be different than FedAvg, the client " +"needs to be adjusted. A proximal term needs to be added to the loss " +"function during the training:" +msgstr "策略本身与 FedAvg 并无不同,客户端需要进行调整。在训练过程中,需要在损失函数中添加一个近端项:" #: flwr.server.strategy.fedprox.FedProx:9 of msgid "" @@ -13598,20 +14424,22 @@ msgstr "其中,\"global_params \"是训练前的参数副本。" msgid "" "The weight of the proximal term used in the optimization. 0.0 makes this " "strategy equivalent to FedAvg, and the higher the coefficient, the more " -"regularization will be used (that is, the client parameters will need to be " -"closer to the server parameters during training)." +"regularization will be used (that is, the client parameters will need to " +"be closer to the server parameters during training)." msgstr "" -"优化中使用的近端项权重。0.0 使该策略等同于 FedAvg,系数越大,使用的正则化就越" -"多(也就是说,在训练过程中,客户端参数需要更接近服务器参数)。" +"优化中使用的近端项权重。0.0 使该策略等同于 " +"FedAvg,系数越大,使用的正则化就越多(也就是说,在训练过程中,客户端参数需要更接近服务器参数)。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -13625,11 +14453,13 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -13652,28 +14482,34 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ " +":py:obj:`num_fit_clients " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ " +":py:obj:`num_fit_clients " +"`\\ " "\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedprox.FedProx.configure_fit:3 of @@ -13697,19 +14533,23 @@ msgstr "截取分布两个尾部的分数。默认为 0.2。" #: 
flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " "\\(server\\_round\\, results\\, failures\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 @@ -13720,19 +14560,23 @@ msgstr "使用修剪平均值汇总拟合结果。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of @@ -13747,29 +14591,35 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: ../../source/ref-api/flwr.server.strategy.FedXgbBagging.rst:2 #, fuzzy @@ -13780,11 +14630,13 @@ msgstr "FedXgbBagging" #: of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" #: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1 #: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 @@ -13799,10 +14651,12 @@ msgstr "采用加权平均法计算评估损失总额。" #: of #, fuzzy msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " 
"\\(server\\_round\\, results\\, failures\\)" msgstr "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " "\\(server\\_round\\, results\\, failures\\)" #: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 @@ -13817,20 +14671,24 @@ msgstr "使用 Bulyan 技术汇总拟合结果。" #: of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 @@ -13847,31 +14705,37 @@ msgstr "" #: of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: ../../source/ref-api/flwr.server.strategy.FedXgbCyclic.rst:2 #, fuzzy @@ -13882,41 +14746,49 @@ msgstr "FedXgbCyclic" #: of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" #: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\ \\(server\\_round\\," +" results\\, failures\\)" msgstr "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\ \\(server\\_round\\," +" results\\, failures\\)" #: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" #: 
flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" #: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 #: of @@ -13932,31 +14804,37 @@ msgstr "" #: of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 #: of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: ../../source/ref-api/flwr.server.strategy.FedXgbNnAvg.rst:2 #, fuzzy @@ -13967,46 +14845,55 @@ msgstr "DP-FedAvg" #, fuzzy msgid "" "This strategy is deprecated, but a copy of it is available in Flower " -"Baselines: https://github.com/adap/flower/tree/main/baselines/hfedxgboost." +"Baselines: " +"https://github.com/adap/flower/tree/main/baselines/hfedxgboost." 
msgstr "" -"该策略已被弃用,但在 Flower Baselines: https://github.com/adap/flower/tree/" -"main/baselines/hfedxgboost 中有其副本。" +"该策略已被弃用,但在 Flower Baselines: " +"https://github.com/adap/flower/tree/main/baselines/hfedxgboost 中有其副本。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\ \\(server\\_round\\, " +"results\\, failures\\)" msgstr "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\ \\(server\\_round\\, " +"results\\, failures\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -14020,29 +14907,35 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: ../../source/ref-api/flwr.server.strategy.FedYogi.rst:2 #, fuzzy @@ -14067,11 +14960,13 @@ msgstr "控制算法的适应度。默认为 1e-9。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" 
+":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -14085,11 +14980,13 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -14112,28 +15009,34 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ " +":py:obj:`num_fit_clients " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ " +":py:obj:`num_fit_clients " +"`\\ " "\\(num\\_available\\_clients\\)" #: ../../source/ref-api/flwr.server.strategy.Krum.rst:2 @@ -14148,20 +15051,20 @@ msgstr "实施基于 https://arxiv.org/abs/2304.07537。" #: flwr.server.strategy.krum.Krum:17 of msgid "" -"Number of clients to keep before averaging (MultiKrum). Defaults to 0, in " -"that case classical Krum is applied." -msgstr "" -"求平均值前保留的客户端数量(MultiKrum)。默认值为 0,在这种情况下会应用经典 " -"Krum。" +"Number of clients to keep before averaging (MultiKrum). Defaults to 0, in" +" that case classical Krum is applied." 
+msgstr "求平均值前保留的客户端数量(MultiKrum)。默认值为 0,在这种情况下会应用经典 Krum。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -14180,11 +15083,13 @@ msgstr "使用 Krum 汇总拟合结果。" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -14207,20 +15112,24 @@ msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -14239,11 +15148,13 @@ msgstr "DP-FedAvg" #: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" #: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -14257,11 +15168,13 @@ msgstr "" #: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" #: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of #, fuzzy @@ -14284,28 +15197,34 @@ msgstr "" #: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" 
+":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_evaluation_clients `\\ \\(num\\_available\\_clients\\)" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" #: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of #, fuzzy msgid "" -":py:obj:`num_fit_clients `\\ " +":py:obj:`num_fit_clients " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -":py:obj:`num_fit_clients `\\ " +":py:obj:`num_fit_clients " +"`\\ " "\\(num\\_available\\_clients\\)" #: ../../source/ref-api/flwr.server.strategy.Strategy.rst:2 @@ -14317,11 +15236,13 @@ msgstr "Krum 策略。" #: of #, fuzzy msgid "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -":py:obj:`aggregate_evaluate `\\ \\(server\\_round\\, results\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" #: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1 #: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 @@ -14348,11 +15269,13 @@ msgstr "汇总训练结果。" #: of #, fuzzy msgid "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -":py:obj:`configure_evaluate `\\ \\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" #: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 #: of @@ -14383,11 +15306,13 @@ msgstr "评估当前的模型参数。" #: of #, fuzzy msgid "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -":py:obj:`initialize_parameters `\\ \\(client\\_manager\\)" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" #: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 #: flwr.server.strategy.strategy.Strategy.initialize_parameters:1 of @@ -14396,22 +15321,19 @@ msgstr "初始化(全局)模型参数。" #: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:5 of msgid "" -"Successful updates from the previously selected and configured clients. Each " -"pair of `(ClientProxy, FitRes` constitutes a successful update from one of " -"the previously selected clients. Not that not all previously selected " -"clients are necessarily included in this list: a client might drop out and " -"not submit a result. For each client that did not submit an update, there " -"should be an `Exception` in `failures`." +"Successful updates from the previously selected and configured clients. " +"Each pair of `(ClientProxy, FitRes` constitutes a successful update from " +"one of the previously selected clients. Not that not all previously " +"selected clients are necessarily included in this list: a client might " +"drop out and not submit a result. For each client that did not submit an " +"update, there should be an `Exception` in `failures`." 
msgstr "" -"从先前选定和配置的客户端进行的成功更新。每一对`(ClientProxy, FitRes)`都是来自" -"先前选定客户端的一次成功更新。但并非所有先前选定的客户机都一定包含在此列表" -"中:客户机可能会退出,不提交结果。对于每个没有提交更新的客户端,`failures`中" -"都应该有一个`Exception`。" +"从先前选定和配置的客户端进行的成功更新。每一对`(ClientProxy, " +"FitRes)`都是来自先前选定客户端的一次成功更新。但并非所有先前选定的客户机都一定包含在此列表中:客户机可能会退出,不提交结果。对于每个没有提交更新的客户端,`failures`中都应该有一个`Exception`。" #: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:13 #: flwr.server.strategy.strategy.Strategy.aggregate_fit:13 of -msgid "" -"Exceptions that occurred while the server was waiting for client updates." +msgid "Exceptions that occurred while the server was waiting for client updates." msgstr "服务器等待客户端更新时发生的异常。" #: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:16 of @@ -14422,31 +15344,29 @@ msgstr "**aggregation_result** -- 汇总的评估结果。聚合通常使用某 #: flwr.server.strategy.strategy.Strategy.aggregate_fit:5 of msgid "" -"Successful updates from the previously selected and configured clients. Each " -"pair of `(ClientProxy, FitRes)` constitutes a successful update from one of " -"the previously selected clients. Not that not all previously selected " -"clients are necessarily included in this list: a client might drop out and " -"not submit a result. For each client that did not submit an update, there " -"should be an `Exception` in `failures`." +"Successful updates from the previously selected and configured clients. " +"Each pair of `(ClientProxy, FitRes)` constitutes a successful update from" +" one of the previously selected clients. Not that not all previously " +"selected clients are necessarily included in this list: a client might " +"drop out and not submit a result. For each client that did not submit an " +"update, there should be an `Exception` in `failures`." msgstr "" -"来自先前选定和配置的客户端的成功更新。每一对`(ClientProxy, FitRes)`都构成先前" -"选定的客户端之一的一次成功更新。但并非所有先前选定的客户机都一定包含在此列表" -"中:客户机可能会退出,不提交结果。对于每个没有提交更新的客户端,\"失败 \"中都" -"应该有一个 \"异常\"。" +"来自先前选定和配置的客户端的成功更新。每一对`(ClientProxy, " +"FitRes)`都构成先前选定的客户端之一的一次成功更新。但并非所有先前选定的客户机都一定包含在此列表中:客户机可能会退出,不提交结果。对于每个没有提交更新的客户端,\"失败" +" \"中都应该有一个 \"异常\"。" #: flwr.server.strategy.strategy.Strategy.aggregate_fit:17 of msgid "" "**parameters** -- If parameters are returned, then the server will treat " -"these as the new global model parameters (i.e., it will replace the previous " -"parameters with the ones returned from this method). If `None` is returned " -"(e.g., because there were only failures and no viable results) then the " -"server will no update the previous model parameters, the updates received in " -"this round are discarded, and the global model parameters remain the same." +"these as the new global model parameters (i.e., it will replace the " +"previous parameters with the ones returned from this method). If `None` " +"is returned (e.g., because there were only failures and no viable " +"results) then the server will no update the previous model parameters, " +"the updates received in this round are discarded, and the global model " +"parameters remain the same." 
msgstr "" -"**parameters** -- 如果返回参数,那么服务器将把这些参数作为新的全局模型参数" -"(即用本方法返回的参数替换之前的参数)。如果返回 \"无\"(例如,因为只有失败而" -"没有可行的结果),那么服务器将不再更新之前的模型参数,本轮收到的更新将被丢" -"弃,全局模型参数保持不变。" +"**parameters** -- 如果返回参数,那么服务器将把这些参数作为新的全局模型参数(即用本方法返回的参数替换之前的参数)。如果返回 " +"\"无\"(例如,因为只有失败而没有可行的结果),那么服务器将不再更新之前的模型参数,本轮收到的更新将被丢弃,全局模型参数保持不变。" #: flwr.server.strategy.strategy.Strategy.evaluate:3 of msgid "" @@ -14456,18 +15376,16 @@ msgstr "该函数可用于对模型参数进行集中(即服务器端)评估 #: flwr.server.strategy.strategy.Strategy.evaluate:11 of msgid "" -"**evaluation_result** -- The evaluation result, usually a Tuple containing " -"loss and a dictionary containing task-specific metrics (e.g., accuracy)." -msgstr "" -"**evaluation_result** -- 评估结果,通常是一个元组,包含损失值和一个字典,字典" -"中包含特定任务的指标(如准确率)。" +"**evaluation_result** -- The evaluation result, usually a Tuple " +"containing loss and a dictionary containing task-specific metrics (e.g., " +"accuracy)." +msgstr "**evaluation_result** -- 评估结果,通常是一个元组,包含损失值和一个字典,字典中包含特定任务的指标(如准确率)。" #: flwr.server.strategy.strategy.Strategy.initialize_parameters:6 of msgid "" "**parameters** -- If parameters are returned, then the server will treat " "these as the initial global model parameters." -msgstr "" -"**parameters** -- 如果返回参数,服务器将把这些参数视为初始全局模型参数。" +msgstr "**parameters** -- 如果返回参数,服务器将把这些参数视为初始全局模型参数。" #: ../../source/ref-api/flwr.server.workflow.rst:2 #, fuzzy @@ -14534,25 +15452,22 @@ msgstr "工作流程" #: of #, fuzzy msgid "" -"The SecAgg+ protocol ensures the secure summation of integer vectors owned " -"by multiple parties, without accessing any individual integer vector. This " -"workflow allows the server to compute the weighted average of model " -"parameters across all clients, ensuring individual contributions remain " -"private. This is achieved by clients sending both, a weighting factor and a " -"weighted version of the locally updated parameters, both of which are masked " -"for privacy. Specifically, each client uploads \"[w, w * params]\" with " -"masks, where weighting factor 'w' is the number of examples ('num_examples') " -"and 'params' represents the model parameters ('parameters') from the " -"client's `FitRes`. The server then aggregates these contributions to compute " -"the weighted average of model parameters." -msgstr "" -"SecAgg+ 协议可确保对多方拥有的整数向量进行安全求和,而不会访问任何单个整数向" -"量。该工作流程允许服务器计算所有客户端模型参数的加权平均值,确保个人贡献保持" -"私密。这可以通过客户端同时发送加权因子和本地更新参数的加权版本来实现,为了保" -"护隐私,两者都会被屏蔽。具体来说,每个客户端都会上传带掩码的\"[w, w * " -"params]\",其中加权因子 \"w \"是示例数(\"num_examples\"),\"params \"代表客" -"户端 \"FitRes \"中的模型参数(\"parameters\")。然后,服务器会汇总这些贡献," -"计算模型参数的加权平均值。" +"The SecAgg+ protocol ensures the secure summation of integer vectors " +"owned by multiple parties, without accessing any individual integer " +"vector. This workflow allows the server to compute the weighted average " +"of model parameters across all clients, ensuring individual contributions" +" remain private. This is achieved by clients sending both, a weighting " +"factor and a weighted version of the locally updated parameters, both of " +"which are masked for privacy. Specifically, each client uploads \"[w, w *" +" params]\" with masks, where weighting factor 'w' is the number of " +"examples ('num_examples') and 'params' represents the model parameters " +"('parameters') from the client's `FitRes`. The server then aggregates " +"these contributions to compute the weighted average of model parameters." 
+msgstr "" +"SecAgg+ " +"协议可确保对多方拥有的整数向量进行安全求和,而不会访问任何单个整数向量。该工作流程允许服务器计算所有客户端模型参数的加权平均值,确保个人贡献保持私密。这可以通过客户端同时发送加权因子和本地更新参数的加权版本来实现,为了保护隐私,两者都会被屏蔽。具体来说,每个客户端都会上传带掩码的\"[w," +" w * params]\",其中加权因子 \"w \"是示例数(\"num_examples\"),\"params \"代表客户端 " +"\"FitRes \"中的模型参数(\"parameters\")。然后,服务器会汇总这些贡献,计算模型参数的加权平均值。" #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:14 #: of @@ -14561,9 +15476,7 @@ msgid "" "The protocol involves four main stages: - 'setup': Send SecAgg+ " "configuration to clients and collect their public keys. - 'share keys': " "Broadcast public keys among clients and collect encrypted secret" -msgstr "" -"协议包括四个主要阶段: - 设置\": 向客户端发送 SecAgg+ 配置并收集其公钥。- 共" -"享密钥\": 在客户端之间广播公钥,并收集加密密钥。" +msgstr "协议包括四个主要阶段: - 设置\": 向客户端发送 SecAgg+ 配置并收集其公钥。- 共享密钥\": 在客户端之间广播公钥,并收集加密密钥。" #: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:17 #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:17 @@ -14595,53 +15508,45 @@ msgstr "解密\": 收集密钥共享,解密并汇总模型参数。" #: of #, fuzzy msgid "" -"Only the aggregated model parameters are exposed and passed to `Strategy." -"aggregate_fit`, ensuring individual data privacy." -msgstr "" -"只有聚合模型参数才会公开并传递给 `Strategy.aggregate_fit`,从而确保个人数据隐" -"私。" +"Only the aggregated model parameters are exposed and passed to " +"`Strategy.aggregate_fit`, ensuring individual data privacy." +msgstr "只有聚合模型参数才会公开并传递给 `Strategy.aggregate_fit`,从而确保个人数据隐私。" #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:25 #: of #, fuzzy msgid "" -"The number of shares into which each client's private key is split under the " -"SecAgg+ protocol. If specified as a float, it represents the proportion of " -"all selected clients, and the number of shares will be set dynamically in " -"the run time. A private key can be reconstructed from these shares, allowing " -"for the secure aggregation of model updates. Each client sends one share to " -"each of its neighbors while retaining one." +"The number of shares into which each client's private key is split under " +"the SecAgg+ protocol. If specified as a float, it represents the " +"proportion of all selected clients, and the number of shares will be set " +"dynamically in the run time. A private key can be reconstructed from " +"these shares, allowing for the secure aggregation of model updates. Each " +"client sends one share to each of its neighbors while retaining one." msgstr "" -"在 SecAgg+ 协议下,每个客户的私钥被分成的份数。如果指定为浮点数,则代表所有选" -"定客户的比例,份额数将在运行时动态设置。私钥可以从这些份额中重建,从而实现模" -"型更新的安全聚合。每个客户端向其每个邻居发送一份,同时保留一份。" +"在 SecAgg+ " +"协议下,每个客户的私钥被分成的份数。如果指定为浮点数,则代表所有选定客户的比例,份额数将在运行时动态设置。私钥可以从这些份额中重建,从而实现模型更新的安全聚合。每个客户端向其每个邻居发送一份,同时保留一份。" #: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:25 #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:32 #: of #, fuzzy msgid "" -"The minimum number of shares required to reconstruct a client's private key, " -"or, if specified as a float, it represents the proportion of the total " -"number of shares needed for reconstruction. This threshold ensures privacy " -"by allowing for the recovery of contributions from dropped clients during " -"aggregation, without compromising individual client data." 
-msgstr "" -"重建客户私钥所需的最小份数,如果指定为浮动,则表示重建所需的份数占总份数的比" -"例。这个阈值允许在聚合过程中恢复掉线客户的贡献,从而确保隐私,而不会泄露单个" -"客户的数据。" +"The minimum number of shares required to reconstruct a client's private " +"key, or, if specified as a float, it represents the proportion of the " +"total number of shares needed for reconstruction. This threshold ensures " +"privacy by allowing for the recovery of contributions from dropped " +"clients during aggregation, without compromising individual client data." +msgstr "重建客户私钥所需的最小份数,如果指定为浮动,则表示重建所需的份数占总份数的比例。这个阈值允许在聚合过程中恢复掉线客户的贡献,从而确保隐私,而不会泄露单个客户的数据。" #: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:31 #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:38 #: of #, fuzzy msgid "" -"The maximum value of the weight that can be assigned to any single client's " -"update during the weighted average calculation on the server side, e.g., in " -"the FedAvg algorithm." -msgstr "" -"在服务器端进行加权平均计算(如 FedAvg 算法)时,可分配给任何单个客户端更新的" -"权重的最大值。" +"The maximum value of the weight that can be assigned to any single " +"client's update during the weighted average calculation on the server " +"side, e.g., in the FedAvg algorithm." +msgstr "在服务器端进行加权平均计算(如 FedAvg 算法)时,可分配给任何单个客户端更新的权重的最大值。" #: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:35 #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:42 @@ -14649,11 +15554,9 @@ msgstr "" #, fuzzy msgid "" "The range within which model parameters are clipped before quantization. " -"This parameter ensures each model parameter is bounded within [-" -"clipping_range, clipping_range], facilitating quantization." -msgstr "" -"量化前模型参数的裁剪范围。该参数可确保每个模型参数都在 [-clipping_range, " -"clipping_range] 范围内,便于量化。" +"This parameter ensures each model parameter is bounded within " +"[-clipping_range, clipping_range], facilitating quantization." +msgstr "量化前模型参数的裁剪范围。该参数可确保每个模型参数都在 [-clipping_range, clipping_range] 范围内,便于量化。" #: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:39 #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:46 @@ -14664,9 +15567,7 @@ msgid "" "quantized, mapping each parameter to an integer in [0, " "quantization_range-1]. This facilitates cryptographic operations on the " "model updates." -msgstr "" -"浮点模型参数量化范围的大小,将每个参数映射为 [0, quantization_range-1] 中的整" -"数。这有助于对模型更新进行加密操作。" +msgstr "浮点模型参数量化范围的大小,将每个参数映射为 [0, quantization_range-1] 中的整数。这有助于对模型更新进行加密操作。" #: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:43 #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:50 @@ -14674,42 +15575,36 @@ msgstr "" #, fuzzy msgid "" "The range of values from which random mask entries are uniformly sampled " -"([0, modulus_range-1]). `modulus_range` must be less than 4294967296. Please " -"use 2**n values for `modulus_range` to prevent overflow issues." +"([0, modulus_range-1]). `modulus_range` must be less than 4294967296. " +"Please use 2**n values for `modulus_range` to prevent overflow issues." 
msgstr "" -"对随机掩码条目进行均匀采样的数值范围([0, modulus_range-1])。modulus_range " -"\"必须小于 4294967296。为防止出现溢出问题,请为 `modulus_range` 使用 2**n 的" -"值。" +"对随机掩码条目进行均匀采样的数值范围([0, modulus_range-1])。modulus_range \"必须小于 " +"4294967296。为防止出现溢出问题,请为 `modulus_range` 使用 2**n 的值。" #: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:47 #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:54 #: of #, fuzzy msgid "" -"The timeout duration in seconds. If specified, the workflow will wait for " -"replies for this duration each time. If `None`, there is no time limit and " -"the workflow will wait until replies for all messages are received." -msgstr "" -"超时时间(秒)。如果指定,工作流将在每次等待回复的时间内等待回复。如果指定为 " -"\"无\",则没有时间限制,工作流程将一直等待到收到所有信息的回复。" +"The timeout duration in seconds. If specified, the workflow will wait for" +" replies for this duration each time. If `None`, there is no time limit " +"and the workflow will wait until replies for all messages are received." +msgstr "超时时间(秒)。如果指定,工作流将在每次等待回复的时间内等待回复。如果指定为 \"无\",则没有时间限制,工作流程将一直等待到收到所有信息的回复。" #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:61 #: of #, fuzzy msgid "" "Generally, higher `num_shares` means more robust to dropouts while " -"increasing the computational costs; higher `reconstruction_threshold` means " -"better privacy guarantees but less tolerance to dropouts." -msgstr "" -"一般来说,\"份额数 \"越高,意味着对丢弃的鲁棒性越强,同时计算成本也会增加;" -"\"重构阈值 \"越高,意味着隐私保证越好,但对丢弃的容忍度越低。" +"increasing the computational costs; higher `reconstruction_threshold` " +"means better privacy guarantees but less tolerance to dropouts." +msgstr "一般来说,\"份额数 \"越高,意味着对丢弃的鲁棒性越强,同时计算成本也会增加;\"重构阈值 \"越高,意味着隐私保证越好,但对丢弃的容忍度越低。" #: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:58 #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:64 #: of #, fuzzy -msgid "" -"Too large `max_weight` may compromise the precision of the quantization." +msgid "Too large `max_weight` may compromise the precision of the quantization." msgstr "过大的 `max_weight` 可能会影响量化的精度。" #: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:59 @@ -14724,46 +15619,46 @@ msgstr "modulus_range \"必须为 2**n,且大于 \"quantization_range\"。" #, fuzzy msgid "" "When `num_shares` is a float, it is interpreted as the proportion of all " -"selected clients, and hence the number of shares will be determined in the " -"runtime. This allows for dynamic adjustment based on the total number of " -"participating clients." -msgstr "" -"当 `num_shares` 为浮点数时,它被解释为所有选定客户端的比例,因此份额数将在运" -"行时确定。这样就可以根据参与客户端的总数进行动态调整。" +"selected clients, and hence the number of shares will be determined in " +"the runtime. This allows for dynamic adjustment based on the total number" +" of participating clients." +msgstr "当 `num_shares` 为浮点数时,它被解释为所有选定客户端的比例,因此份额数将在运行时确定。这样就可以根据参与客户端的总数进行动态调整。" #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:69 #: of #, fuzzy msgid "" -"Similarly, when `reconstruction_threshold` is a float, it is interpreted as " -"the proportion of the number of shares needed for the reconstruction of a " -"private key. This feature enables flexibility in setting the security " -"threshold relative to the number of distributed shares." +"Similarly, when `reconstruction_threshold` is a float, it is interpreted " +"as the proportion of the number of shares needed for the reconstruction " +"of a private key. 
This feature enables flexibility in setting the " +"security threshold relative to the number of distributed shares." msgstr "" -"同样,当 `reconstruction_threshold` 为浮点数时,它被解释为重建私钥所需的份额" -"数比例。这一功能使我们可以根据分发的份额数灵活设置安全阈值。" +"同样,当 `reconstruction_threshold` " +"为浮点数时,它被解释为重建私钥所需的份额数比例。这一功能使我们可以根据分发的份额数灵活设置安全阈值。" #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:73 #: of #, fuzzy msgid "" -"`num_shares`, `reconstruction_threshold`, and the quantization parameters " -"(`clipping_range`, `quantization_range`, `modulus_range`) play critical " -"roles in balancing privacy, robustness, and efficiency within the SecAgg+ " -"protocol." +"`num_shares`, `reconstruction_threshold`, and the quantization parameters" +" (`clipping_range`, `quantization_range`, `modulus_range`) play critical " +"roles in balancing privacy, robustness, and efficiency within the SecAgg+" +" protocol." msgstr "" -"份额数\"、\"重建阈值 \"和量化参数(\"裁剪范围\"、\"量化范围\"、\"模数范围\")" -"在平衡 SecAgg+ 协议的隐私性、稳健性和效率方面发挥着关键作用。" +"份额数\"、\"重建阈值 \"和量化参数(\"裁剪范围\"、\"量化范围\"、\"模数范围\")在平衡 SecAgg+ " +"协议的隐私性、稳健性和效率方面发挥着关键作用。" #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 #: of #, fuzzy msgid "" -":py:obj:`collect_masked_vectors_stage `\\ \\(driver\\, ...\\)" +":py:obj:`collect_masked_vectors_stage " +"`\\" +" \\(driver\\, ...\\)" msgstr "" -":py:obj:`collect_masked_vectors_stage `\\ \\(driver\\, ...\\)" +":py:obj:`collect_masked_vectors_stage " +"`\\" +" \\(driver\\, ...\\)" #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1 #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 @@ -14776,11 +15671,13 @@ msgstr "执行 \"收集屏蔽向量 \"阶段。" #: of #, fuzzy msgid "" -":py:obj:`setup_stage `\\ \\(driver\\, context\\, state\\)" +":py:obj:`setup_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" -":py:obj:`setup_stage `\\ \\(driver\\, context\\, state\\)" +":py:obj:`setup_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.setup_stage:1 @@ -14793,11 +15690,13 @@ msgstr "执行 \"设置 \"阶段。" #: of #, fuzzy msgid "" -":py:obj:`share_keys_stage `\\ \\(driver\\, context\\, state\\)" +":py:obj:`share_keys_stage " +"`\\ " +"\\(driver\\, context\\, state\\)" msgstr "" -":py:obj:`share_keys_stage `\\ \\(driver\\, context\\, state\\)" +":py:obj:`share_keys_stage " +"`\\ " +"\\(driver\\, context\\, state\\)" #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.share_keys_stage:1 @@ -14810,11 +15709,13 @@ msgstr "执行 \"共享密钥 \"阶段。" #: of #, fuzzy msgid "" -":py:obj:`unmask_stage `\\ \\(driver\\, context\\, state\\)" +":py:obj:`unmask_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" -":py:obj:`unmask_stage `\\ \\(driver\\, context\\, state\\)" +":py:obj:`unmask_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.unmask_stage:1 @@ -14829,50 +15730,51 @@ msgstr "工作流程" #: 
flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:1 of msgid "" -"Bases: :py:class:`~flwr.server.workflow.secure_aggregation." -"secaggplus_workflow.SecAggPlusWorkflow`" +"Bases: " +":py:class:`~flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow`" msgstr "" #: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:3 of msgid "" -"The SecAgg protocol ensures the secure summation of integer vectors owned by " -"multiple parties, without accessing any individual integer vector. This " -"workflow allows the server to compute the weighted average of model " +"The SecAgg protocol ensures the secure summation of integer vectors owned" +" by multiple parties, without accessing any individual integer vector. " +"This workflow allows the server to compute the weighted average of model " "parameters across all clients, ensuring individual contributions remain " -"private. This is achieved by clients sending both, a weighting factor and a " -"weighted version of the locally updated parameters, both of which are masked " -"for privacy. Specifically, each client uploads \"[w, w * params]\" with " -"masks, where weighting factor 'w' is the number of examples ('num_examples') " -"and 'params' represents the model parameters ('parameters') from the " -"client's `FitRes`. The server then aggregates these contributions to compute " -"the weighted average of model parameters." +"private. This is achieved by clients sending both, a weighting factor and" +" a weighted version of the locally updated parameters, both of which are " +"masked for privacy. Specifically, each client uploads \"[w, w * params]\"" +" with masks, where weighting factor 'w' is the number of examples " +"('num_examples') and 'params' represents the model parameters " +"('parameters') from the client's `FitRes`. The server then aggregates " +"these contributions to compute the weighted average of model parameters." msgstr "" #: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:14 of msgid "" -"The protocol involves four main stages: - 'setup': Send SecAgg configuration " -"to clients and collect their public keys. - 'share keys': Broadcast public " -"keys among clients and collect encrypted secret" +"The protocol involves four main stages: - 'setup': Send SecAgg " +"configuration to clients and collect their public keys. - 'share keys': " +"Broadcast public keys among clients and collect encrypted secret" msgstr "" #: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:54 of msgid "" -"Each client's private key is split into N shares under the SecAgg protocol, " -"where N is the number of selected clients." +"Each client's private key is split into N shares under the SecAgg " +"protocol, where N is the number of selected clients." msgstr "" #: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:56 of msgid "" -"Generally, higher `reconstruction_threshold` means better privacy guarantees " -"but less tolerance to dropouts." +"Generally, higher `reconstruction_threshold` means better privacy " +"guarantees but less tolerance to dropouts." msgstr "" #: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:60 of msgid "" "When `reconstruction_threshold` is a float, it is interpreted as the " "proportion of the number of all selected clients needed for the " -"reconstruction of a private key. This feature enables flexibility in setting " -"the security threshold relative to the number of selected clients." 
+"reconstruction of a private key. This feature enables flexibility in " +"setting the security threshold relative to the number of selected " +"clients." msgstr "" #: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:64 of @@ -14886,29 +15788,32 @@ msgstr "" #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 #: of msgid "" -":py:obj:`collect_masked_vectors_stage `\\ \\(driver\\, ...\\)" +":py:obj:`collect_masked_vectors_stage " +"`\\ " +"\\(driver\\, ...\\)" msgstr "" #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 #: of msgid "" -":py:obj:`setup_stage `\\ " -"\\(driver\\, context\\, state\\)" +":py:obj:`setup_stage `\\" +" \\(driver\\, context\\, state\\)" msgstr "" #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 #: of msgid "" -":py:obj:`share_keys_stage `\\ \\(driver\\, context\\, state\\)" +":py:obj:`share_keys_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" #: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 #: of msgid "" -":py:obj:`unmask_stage `\\ " -"\\(driver\\, context\\, state\\)" +":py:obj:`unmask_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" #: ../../source/ref-api/flwr.simulation.rst:2 @@ -14916,35 +15821,24 @@ msgstr "" msgid "simulation" msgstr "运行模拟" -#: ../../source/ref-api/flwr.simulation.rst:19::1 +#: ../../source/ref-api/flwr.simulation.rst:18::1 msgid "" -":py:obj:`start_simulation `\\ \\(\\*\\, " -"client\\_fn\\[\\, ...\\]\\)" +":py:obj:`start_simulation `\\ \\(\\*\\," +" client\\_fn\\[\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:19::1 +#: ../../source/ref-api/flwr.simulation.rst:18::1 #: flwr.simulation.app.start_simulation:1 of msgid "Start a Ray-based Flower simulation server." msgstr "启动基于 Ray 的Flower模拟服务器。" -#: ../../source/ref-api/flwr.simulation.rst:19::1 -msgid "" -":py:obj:`run_simulation_from_cli `\\ \\(\\)" -msgstr "" - -#: ../../source/ref-api/flwr.simulation.rst:19::1 -#: flwr.simulation.run_simulation.run_simulation_from_cli:1 of -msgid "Run Simulation Engine from the CLI." -msgstr "" - -#: ../../source/ref-api/flwr.simulation.rst:19::1 +#: ../../source/ref-api/flwr.simulation.rst:18::1 msgid "" ":py:obj:`run_simulation `\\ " "\\(server\\_app\\, client\\_app\\, ...\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:19::1 +#: ../../source/ref-api/flwr.simulation.rst:18::1 #: flwr.simulation.run_simulation.run_simulation:1 of msgid "Run a Flower App using the Simulation Engine." msgstr "" @@ -14962,14 +15856,15 @@ msgstr "" #: flwr.simulation.run_simulation.run_simulation:6 of msgid "" -"The `ClientApp` to be executed by each of the SuperNodes. It will receive " -"messages sent by the `ServerApp`." +"The `ClientApp` to be executed by each of the SuperNodes. It will receive" +" messages sent by the `ServerApp`." msgstr "" #: flwr.simulation.run_simulation.run_simulation:9 of msgid "" -"Number of nodes that run a ClientApp. They can be sampled by a Driver in the " -"ServerApp and receive a Message describing what the ClientApp should perform." +"Number of nodes that run a ClientApp. They can be sampled by a Driver in " +"the ServerApp and receive a Message describing what the ClientApp should " +"perform." 
msgstr "" #: flwr.simulation.run_simulation.run_simulation:13 of @@ -14978,33 +15873,28 @@ msgstr "" #: flwr.simulation.run_simulation.run_simulation:15 of msgid "" -"'A dictionary, e.g {\"\": , \"\": } to configure a " -"backend. Values supported in are those included by `flwr.common." -"typing.ConfigsRecordValues`." +"'A dictionary, e.g {\"\": , \"\": } to " +"configure a backend. Values supported in are those included by " +"`flwr.common.typing.ConfigsRecordValues`." msgstr "" #: flwr.simulation.run_simulation.run_simulation:19 of msgid "" -"A boolean to indicate whether to enable GPU growth on the main thread. This " -"is desirable if you make use of a TensorFlow model on your `ServerApp` while " -"having your `ClientApp` running on the same GPU. Without enabling this, you " -"might encounter an out-of-memory error because TensorFlow, by default, " -"allocates all GPU memory. Read more about how `tf.config.experimental." -"set_memory_growth()` works in the TensorFlow documentation: https://www." -"tensorflow.org/api/stable." +"A boolean to indicate whether to enable GPU growth on the main thread. " +"This is desirable if you make use of a TensorFlow model on your " +"`ServerApp` while having your `ClientApp` running on the same GPU. " +"Without enabling this, you might encounter an out-of-memory error because" +" TensorFlow, by default, allocates all GPU memory. Read more about how " +"`tf.config.experimental.set_memory_growth()` works in the TensorFlow " +"documentation: https://www.tensorflow.org/api/stable." msgstr "" #: flwr.simulation.run_simulation.run_simulation:26 of msgid "" -"When diabled, only INFO, WARNING and ERROR log messages will be shown. If " -"enabled, DEBUG-level logs will be displayed." +"When diabled, only INFO, WARNING and ERROR log messages will be shown. If" +" enabled, DEBUG-level logs will be displayed." msgstr "" -#: ../../source/ref-api/flwr.simulation.run_simulation_from_cli.rst:2 -#, fuzzy -msgid "run\\_simulation\\_from\\_cli" -msgstr "运行模拟" - #: ../../source/ref-api/flwr.simulation.start_simulation.rst:2 #, fuzzy msgid "start\\_simulation" @@ -15012,76 +15902,68 @@ msgstr "start_simulation" #: flwr.simulation.app.start_simulation:3 of msgid "" -"A function creating client instances. The function must take a single `str` " -"argument called `cid`. It should return a single client instance of type " -"Client. Note that the created client instances are ephemeral and will often " -"be destroyed after a single method invocation. Since client instances are " -"not long-lived, they should not attempt to carry state over method " -"invocations. Any state required by the instance (model, dataset, " +"A function creating client instances. The function must take a single " +"`str` argument called `cid`. It should return a single client instance of" +" type Client. Note that the created client instances are ephemeral and " +"will often be destroyed after a single method invocation. Since client " +"instances are not long-lived, they should not attempt to carry state over" +" method invocations. Any state required by the instance (model, dataset, " "hyperparameters, ...) should be (re-)created in either the call to " -"`client_fn` or the call to any of the client methods (e.g., load evaluation " -"data in the `evaluate` method itself)." +"`client_fn` or the call to any of the client methods (e.g., load " +"evaluation data in the `evaluate` method itself)." 
msgstr "" -"创建客户端实例的函数。该函数必须接受一个名为 `cid` 的 `str` 参数。它应返回一" -"个 Client 类型的客户端实例。请注意,创建的客户端实例是短暂的,通常在调用一个" -"方法后就会被销毁。由于客户机实例不是长期存在的,它们不应试图在方法调用时携带" -"状态数据。实例所需的任何状态数据(模型、数据集、超参数......)都应在调用 " -"`client_fn` 或任何客户端方法(例如,在 `evaluate` 方法中加载评估数据)时(重" -"新)创建。" +"创建客户端实例的函数。该函数必须接受一个名为 `cid` 的 `str` 参数。它应返回一个 Client " +"类型的客户端实例。请注意,创建的客户端实例是短暂的,通常在调用一个方法后就会被销毁。由于客户机实例不是长期存在的,它们不应试图在方法调用时携带状态数据。实例所需的任何状态数据(模型、数据集、超参数......)都应在调用" +" `client_fn` 或任何客户端方法(例如,在 `evaluate` 方法中加载评估数据)时(重新)创建。" #: flwr.simulation.app.start_simulation:13 of msgid "" "The total number of clients in this simulation. This must be set if " "`clients_ids` is not set and vice-versa." -msgstr "" -"本次模拟的客户总数。如果未设置 `clients_ids`,则必须设置该参数,反之亦然。" +msgstr "本次模拟的客户总数。如果未设置 `clients_ids`,则必须设置该参数,反之亦然。" #: flwr.simulation.app.start_simulation:16 of msgid "" -"List `client_id`s for each client. This is only required if `num_clients` is " -"not set. Setting both `num_clients` and `clients_ids` with " +"List `client_id`s for each client. This is only required if `num_clients`" +" is not set. Setting both `num_clients` and `clients_ids` with " "`len(clients_ids)` not equal to `num_clients` generates an error." msgstr "" -"列出每个客户的 `client_id`。只有在未设置 `num_clients` 时才需要这样做。同时设" -"置`num_clients`和`clients_ids`,且`len(clients_ids)`不等于`num_clients`,会产" -"生错误。" +"列出每个客户的 `client_id`。只有在未设置 `num_clients` " +"时才需要这样做。同时设置`num_clients`和`clients_ids`,且`len(clients_ids)`不等于`num_clients`,会产生错误。" #: flwr.simulation.app.start_simulation:20 of #, fuzzy msgid "" -"CPU and GPU resources for a single client. Supported keys are `num_cpus` and " -"`num_gpus`. To understand the GPU utilization caused by `num_gpus`, as well " -"as using custom resources, please consult the Ray documentation." +"CPU and GPU resources for a single client. Supported keys are `num_cpus` " +"and `num_gpus`. To understand the GPU utilization caused by `num_gpus`, " +"as well as using custom resources, please consult the Ray documentation." msgstr "" -"\"num_gpus\": 0.0` 单个客户端的 CPU 和 GPU 资源。支持的键值为 `num_cpus` 和 " -"`num_gpus`。要了解 `num_gpus` 所导致的 GPU 利用率,以及使用自定义资源的情况," -"请查阅 Ray 文档。" +"\"num_gpus\": 0.0` 单个客户端的 CPU 和 GPU 资源。支持的键值为 `num_cpus` 和 `num_gpus`。要了解" +" `num_gpus` 所导致的 GPU 利用率,以及使用自定义资源的情况,请查阅 Ray 文档。" #: flwr.simulation.app.start_simulation:25 of msgid "" "An implementation of the abstract base class `flwr.server.Server`. If no " "instance is provided, then `start_server` will create one." -msgstr "" -"抽象基类 `flwr.server.Server`的实现。如果没有提供实例,`start_server` 将创建" -"一个。" +msgstr "抽象基类 `flwr.server.Server`的实现。如果没有提供实例,`start_server` 将创建一个。" #: flwr.simulation.app.start_simulation:31 of msgid "" -"An implementation of the abstract base class `flwr.server.Strategy`. If no " -"strategy is provided, then `start_server` will use `flwr.server.strategy." -"FedAvg`." +"An implementation of the abstract base class `flwr.server.Strategy`. If " +"no strategy is provided, then `start_server` will use " +"`flwr.server.strategy.FedAvg`." msgstr "" -"抽象基类 `flwr.server.strategy` 的实现。如果没有提供策略,`start_server` 将使" -"用 `flwr.server.strategy.FedAvg`。" +"抽象基类 `flwr.server.strategy` 的实现。如果没有提供策略,`start_server` 将使用 " +"`flwr.server.strategy.FedAvg`。" #: flwr.simulation.app.start_simulation:35 of msgid "" -"An implementation of the abstract base class `flwr.server.ClientManager`. If " -"no implementation is provided, then `start_simulation` will use `flwr.server." -"client_manager.SimpleClientManager`." +"An implementation of the abstract base class `flwr.server.ClientManager`." 
+" If no implementation is provided, then `start_simulation` will use " +"`flwr.server.client_manager.SimpleClientManager`." msgstr "" -"抽象基类 `flwr.server.ClientManager` 的实现。如果没有提供实现," -"`start_simulation` 将使用 `flwr.server.client_manager.SimpleClientManager`。" +"抽象基类 `flwr.server.ClientManager` 的实现。如果没有提供实现,`start_simulation` 将使用 " +"`flwr.server.client_manager.SimpleClientManager`。" #: flwr.simulation.app.start_simulation:39 of msgid "" @@ -15089,21 +15971,19 @@ msgid "" "ray_init_args is None (the default), Ray will be initialized with the " "following default args: { \"ignore_reinit_error\": True, " "\"include_dashboard\": False } An empty dictionary can be used " -"(ray_init_args={}) to prevent any arguments from being passed to ray.init." +"(ray_init_args={}) to prevent any arguments from being passed to " +"ray.init." msgstr "" -"可选字典,包含调用 `ray.init` 时的参数。如果 ray_init_args 为 None(默认" -"值),则将使用以下默认参数初始化 Ray: { \"ignore_reinit_error\": True, " -"\"include_dashboard\": False } 可以使用空字典(ray_init_args={})来防止向 " -"ray.init 传递任何参数。" +"可选字典,包含调用 `ray.init` 时的参数。如果 ray_init_args 为 None(默认值),则将使用以下默认参数初始化 Ray:" +" { \"ignore_reinit_error\": True, \"include_dashboard\": False } " +"可以使用空字典(ray_init_args={})来防止向 ray.init 传递任何参数。" #: flwr.simulation.app.start_simulation:39 of msgid "" "Optional dictionary containing arguments for the call to `ray.init`. If " "ray_init_args is None (the default), Ray will be initialized with the " "following default args:" -msgstr "" -"可选字典,包含调用 `ray.init` 时的参数。如果 ray_init_args 为 None(默认" -"值),则将使用以下默认参数初始化 Ray:" +msgstr "可选字典,包含调用 `ray.init` 时的参数。如果 ray_init_args 为 None(默认值),则将使用以下默认参数初始化 Ray:" #: flwr.simulation.app.start_simulation:43 of msgid "{ \"ignore_reinit_error\": True, \"include_dashboard\": False }" @@ -15111,15 +15991,15 @@ msgstr "{ \"ignore_reinit_error\": True, \"include_dashboard\": False }" #: flwr.simulation.app.start_simulation:45 of msgid "" -"An empty dictionary can be used (ray_init_args={}) to prevent any arguments " -"from being passed to ray.init." +"An empty dictionary can be used (ray_init_args={}) to prevent any " +"arguments from being passed to ray.init." msgstr "可以使用空字典 (ray_init_args={}) 来防止向 ray.init 传递任何参数。" #: flwr.simulation.app.start_simulation:48 of msgid "" -"Set to True to prevent `ray.shutdown()` in case `ray.is_initialized()=True`." -msgstr "" -"设为 True 可在 `ray.is_initialized()=True` 情况下阻止 `ray.shutdown()` 。" +"Set to True to prevent `ray.shutdown()` in case " +"`ray.is_initialized()=True`." +msgstr "设为 True 可在 `ray.is_initialized()=True` 情况下阻止 `ray.shutdown()` 。" #: flwr.simulation.app.start_simulation:50 of #, fuzzy @@ -15127,33 +16007,28 @@ msgid "" "Optionally specify the type of actor to use. The actor object, which " "persists throughout the simulation, will be the process in charge of " "executing a ClientApp wrapping input argument `client_fn`." -msgstr "" -"可选择指定要使用的actor类型。actor对象将在整个模拟过程中持续存在,它将是负责" -"运行客户端作业(即其 `fit()`方法)的进程。" +msgstr "可选择指定要使用的actor类型。actor对象将在整个模拟过程中持续存在,它将是负责运行客户端作业(即其 `fit()`方法)的进程。" #: flwr.simulation.app.start_simulation:54 of msgid "" -"If you want to create your own Actor classes, you might need to pass some " -"input argument. You can use this dictionary for such purpose." -msgstr "" -"如果您想创建自己的 Actor 类,可能需要传递一些输入参数。为此,您可以使用本字" -"典。" +"If you want to create your own Actor classes, you might need to pass some" +" input argument. You can use this dictionary for such purpose." 
+msgstr "如果您想创建自己的 Actor 类,可能需要传递一些输入参数。为此,您可以使用本字典。" #: flwr.simulation.app.start_simulation:57 of msgid "" -"(default: \"DEFAULT\") Optional string (\"DEFAULT\" or \"SPREAD\") for the " -"VCE to choose in which node the actor is placed. If you are an advanced user " -"needed more control you can use lower-level scheduling strategies to pin " -"actors to specific compute nodes (e.g. via NodeAffinitySchedulingStrategy). " -"Please note this is an advanced feature. For all details, please refer to " -"the Ray documentation: https://docs.ray.io/en/latest/ray-core/scheduling/" -"index.html" +"(default: \"DEFAULT\") Optional string (\"DEFAULT\" or \"SPREAD\") for " +"the VCE to choose in which node the actor is placed. If you are an " +"advanced user needed more control you can use lower-level scheduling " +"strategies to pin actors to specific compute nodes (e.g. via " +"NodeAffinitySchedulingStrategy). Please note this is an advanced feature." +" For all details, please refer to the Ray documentation: " +"https://docs.ray.io/en/latest/ray-core/scheduling/index.html" msgstr "" -"(默认:\"DEFAULT\")可选字符串(\"DEFAULT \"或 \"SPREAD\"),供 VCE 选择将行" -"为体放置在哪个节点上。如果你是需要更多控制权的高级用户,可以使用低级调度策略" -"将actor固定到特定计算节点(例如,通过 NodeAffinitySchedulingStrategy)。请注" -"意,这是一项高级功能。有关详细信息,请参阅 Ray 文档:https://docs.ray.io/en/" -"latest/ray-core/scheduling/index.html" +"(默认:\"DEFAULT\")可选字符串(\"DEFAULT \"或 \"SPREAD\"),供 VCE " +"选择将行为体放置在哪个节点上。如果你是需要更多控制权的高级用户,可以使用低级调度策略将actor固定到特定计算节点(例如,通过 " +"NodeAffinitySchedulingStrategy)。请注意,这是一项高级功能。有关详细信息,请参阅 Ray " +"文档:https://docs.ray.io/en/latest/ray-core/scheduling/index.html" #: flwr.simulation.app.start_simulation:66 of msgid "**hist** -- Object containing metrics from training." @@ -15167,58 +16042,65 @@ msgstr "更新日志" msgid "Unreleased" msgstr "尚未发布" -#: ../../source/ref-changelog.md:5 ../../source/ref-changelog.md:17 -#: ../../source/ref-changelog.md:110 ../../source/ref-changelog.md:210 -#: ../../source/ref-changelog.md:294 ../../source/ref-changelog.md:358 -#: ../../source/ref-changelog.md:416 ../../source/ref-changelog.md:485 -#: ../../source/ref-changelog.md:614 ../../source/ref-changelog.md:656 -#: ../../source/ref-changelog.md:723 ../../source/ref-changelog.md:789 -#: ../../source/ref-changelog.md:834 ../../source/ref-changelog.md:873 -#: ../../source/ref-changelog.md:906 ../../source/ref-changelog.md:956 +#: ../../source/ref-changelog.md:5 ../../source/ref-changelog.md:19 +#: ../../source/ref-changelog.md:83 ../../source/ref-changelog.md:176 +#: ../../source/ref-changelog.md:276 ../../source/ref-changelog.md:360 +#: ../../source/ref-changelog.md:424 ../../source/ref-changelog.md:482 +#: ../../source/ref-changelog.md:551 ../../source/ref-changelog.md:680 +#: ../../source/ref-changelog.md:722 ../../source/ref-changelog.md:789 +#: ../../source/ref-changelog.md:855 ../../source/ref-changelog.md:900 +#: ../../source/ref-changelog.md:939 ../../source/ref-changelog.md:972 +#: ../../source/ref-changelog.md:1022 msgid "What's new?" msgstr "有什么新内容?" 
-#: ../../source/ref-changelog.md:7 ../../source/ref-changelog.md:80 -#: ../../source/ref-changelog.md:192 ../../source/ref-changelog.md:282 -#: ../../source/ref-changelog.md:346 ../../source/ref-changelog.md:404 -#: ../../source/ref-changelog.md:473 ../../source/ref-changelog.md:535 -#: ../../source/ref-changelog.md:554 ../../source/ref-changelog.md:710 -#: ../../source/ref-changelog.md:781 ../../source/ref-changelog.md:818 -#: ../../source/ref-changelog.md:861 +#: ../../source/ref-changelog.md:7 ../../source/ref-changelog.md:71 +#: ../../source/ref-changelog.md:146 ../../source/ref-changelog.md:258 +#: ../../source/ref-changelog.md:348 ../../source/ref-changelog.md:412 +#: ../../source/ref-changelog.md:470 ../../source/ref-changelog.md:539 +#: ../../source/ref-changelog.md:601 ../../source/ref-changelog.md:620 +#: ../../source/ref-changelog.md:776 ../../source/ref-changelog.md:847 +#: ../../source/ref-changelog.md:884 ../../source/ref-changelog.md:927 msgid "Incompatible changes" msgstr "不兼容的更改" -#: ../../source/ref-changelog.md:9 +#: ../../source/ref-changelog.md:9 ../../source/ref-changelog.md:73 +#: ../../source/ref-changelog.md:350 ../../source/ref-changelog.md:414 +#: ../../source/ref-changelog.md:472 ../../source/ref-changelog.md:541 +#: ../../source/ref-changelog.md:603 +msgid "None" +msgstr "无" + +#: ../../source/ref-changelog.md:11 #, fuzzy -msgid "v1.7.0 (2024-02-05)" +msgid "v1.8.0 (2024-04-03)" msgstr "v1.3.0 (2023-02-06)" -#: ../../source/ref-changelog.md:11 ../../source/ref-changelog.md:104 -#: ../../source/ref-changelog.md:204 ../../source/ref-changelog.md:288 -#: ../../source/ref-changelog.md:352 ../../source/ref-changelog.md:410 -#: ../../source/ref-changelog.md:479 ../../source/ref-changelog.md:548 +#: ../../source/ref-changelog.md:13 ../../source/ref-changelog.md:77 +#: ../../source/ref-changelog.md:170 ../../source/ref-changelog.md:270 +#: ../../source/ref-changelog.md:354 ../../source/ref-changelog.md:418 +#: ../../source/ref-changelog.md:476 ../../source/ref-changelog.md:545 +#: ../../source/ref-changelog.md:614 msgid "Thanks to our contributors" msgstr "感谢我们的贡献者" -#: ../../source/ref-changelog.md:13 ../../source/ref-changelog.md:106 -#: ../../source/ref-changelog.md:206 ../../source/ref-changelog.md:290 -#: ../../source/ref-changelog.md:354 ../../source/ref-changelog.md:412 +#: ../../source/ref-changelog.md:15 ../../source/ref-changelog.md:79 +#: ../../source/ref-changelog.md:172 ../../source/ref-changelog.md:272 +#: ../../source/ref-changelog.md:356 ../../source/ref-changelog.md:420 +#: ../../source/ref-changelog.md:478 msgid "" -"We would like to give our special thanks to all the contributors who made " -"the new version of Flower possible (in `git shortlog` order):" -msgstr "" -"在此,我们要特别感谢所有为 Flower 的新版本做出贡献的人员(按 `git shortlog` " -"顺序排列):" +"We would like to give our special thanks to all the contributors who made" +" the new version of Flower possible (in `git shortlog` order):" +msgstr "在此,我们要特别感谢所有为 Flower 的新版本做出贡献的人员(按 `git shortlog` 顺序排列):" -#: ../../source/ref-changelog.md:15 +#: ../../source/ref-changelog.md:17 #, fuzzy msgid "" -"`Aasheesh Singh`, `Adam Narozniak`, `Aml Hassan Esmil`, `Charles Beauville`, " -"`Daniel J. 
Beutel`, `Daniel Nata Nugraha`, `Edoardo Gabrielli`, `Gustavo " -"Bertoli`, `HelinLin`, `Heng Pan`, `Javier`, `M S Chaitanya Kumar`, `Mohammad " -"Naseri`, `Nikos Vlachakis`, `Pritam Neog`, `Robert Kuska`, `Robert Steiner`, " -"`Taner Topal`, `Yahia Salaheldin Shaaban`, `Yan Gao`, `Yasar Abbas` " +"`Adam Narozniak`, `Charles Beauville`, `Daniel J. Beutel`, `Daniel Nata " +"Nugraha`, `Danny`, `Gustavo Bertoli`, `Heng Pan`, `Ikko Eltociear " +"Ashimine`, `Jack Cook`, `Javier`, `Raj Parekh`, `Robert Steiner`, " +"`Sebastian van der Voort`, `Taner Topal`, `Yan Gao`, `mohammadnaseri`, " +"`tabdar-khan` " msgstr "" "`Adam Narozniak`, `Anass Anhari`, `Charles Beauville`, `Dana-Farber`, " "`Daniel J. Beutel`, `Daniel Nata Nugraha`, `Edoardo Gabrielli`, `Gustavo " @@ -15226,820 +16108,1336 @@ msgstr "" "Topal`, `achiverram28`, `danielnugraha`, `eunchung`, `ruthgal` " -#: ../../source/ref-changelog.md:19 -#, fuzzy -msgid "" -"**Introduce stateful clients (experimental)** ([#2770](https://github.com/" -"adap/flower/pull/2770), [#2686](https://github.com/adap/flower/pull/2686), " -"[#2696](https://github.com/adap/flower/pull/2696), [#2643](https://github." -"com/adap/flower/pull/2643), [#2769](https://github.com/adap/flower/" -"pull/2769))" -msgstr "" -"** baselines的普通更新** ([#2301](https://github.com/adap/flower/pull/2301), " -"[#2305](https://github.com/adap/flower/pull/2305), [#2307](https://github." -"com/adap/flower/pull/2307), [#2327](https://github.com/adap/flower/" -"pull/2327), [#2435](https://github.com/adap/flower/pull/2435))" - #: ../../source/ref-changelog.md:21 msgid "" -"Subclasses of `Client` and `NumPyClient` can now store local state that " -"remains on the client. Let's start with the highlight first: this new " -"feature is compatible with both simulated clients (via `start_simulation`) " -"and networked clients (via `start_client`). It's also the first preview of " -"new abstractions like `Context` and `RecordSet`. Clients can access state of " -"type `RecordSet` via `state: RecordSet = self.context.state`. Changes to " -"this `RecordSet` are preserved across different rounds of execution to " -"enable stateful computations in a unified way across simulation and " -"deployment." 
+"**Introduce Flower Next high-level API (stable)** " +"([#3002](https://github.com/adap/flower/pull/3002), " +"[#2934](https://github.com/adap/flower/pull/2934), " +"[#2958](https://github.com/adap/flower/pull/2958), " +"[#3173](https://github.com/adap/flower/pull/3173), " +"[#3174](https://github.com/adap/flower/pull/3174), " +"[#2923](https://github.com/adap/flower/pull/2923), " +"[#2691](https://github.com/adap/flower/pull/2691), " +"[#3079](https://github.com/adap/flower/pull/3079), " +"[#2961](https://github.com/adap/flower/pull/2961), " +"[#2924](https://github.com/adap/flower/pull/2924), " +"[#3166](https://github.com/adap/flower/pull/3166), " +"[#3031](https://github.com/adap/flower/pull/3031), " +"[#3057](https://github.com/adap/flower/pull/3057), " +"[#3000](https://github.com/adap/flower/pull/3000), " +"[#3113](https://github.com/adap/flower/pull/3113), " +"[#2957](https://github.com/adap/flower/pull/2957), " +"[#3183](https://github.com/adap/flower/pull/3183), " +"[#3180](https://github.com/adap/flower/pull/3180), " +"[#3035](https://github.com/adap/flower/pull/3035), " +"[#3189](https://github.com/adap/flower/pull/3189), " +"[#3185](https://github.com/adap/flower/pull/3185), " +"[#3190](https://github.com/adap/flower/pull/3190), " +"[#3191](https://github.com/adap/flower/pull/3191), " +"[#3195](https://github.com/adap/flower/pull/3195), " +"[#3197](https://github.com/adap/flower/pull/3197))" msgstr "" #: ../../source/ref-changelog.md:23 -#, fuzzy msgid "" -"**Improve performance** ([#2293](https://github.com/adap/flower/pull/2293))" -msgstr "**改进示例笔记** ([#2005](https://github.com/adap/flower/pull/2005))" +"The Flower Next high-level API is stable! Flower Next is the future of " +"Flower - all new features (like Flower Mods) will be built on top of it. " +"You can start to migrate your existing projects to Flower Next by using " +"`ServerApp` and `ClientApp` (check out `quickstart-pytorch` or " +"`quickstart-tensorflow`, a detailed migration guide will follow shortly)." +" Flower Next allows you to run multiple projects concurrently (we call " +"this multi-run) and execute the same project in either simulation " +"environments or deployment environments without having to change a single" +" line of code. The best part? It's fully compatible with existing Flower " +"projects that use `Strategy`, `NumPyClient` & co." +msgstr "" #: ../../source/ref-changelog.md:25 +#, fuzzy msgid "" -"Flower is faster than ever. All `FedAvg`-derived strategies now use in-place " -"aggregation to reduce memory consumption. The Flower client serialization/" -"deserialization has been rewritten from the ground up, which results in " -"significant speedups, especially when the client-side training time is short." +"**Introduce Flower Next low-level API (preview)** " +"([#3062](https://github.com/adap/flower/pull/3062), " +"[#3034](https://github.com/adap/flower/pull/3034), " +"[#3069](https://github.com/adap/flower/pull/3069))" msgstr "" +"** 统一客户端应用程序接口** ([#2303](https://github.com/adap/flower/pull/2303), " +"[#2390](https://github.com/adap/flower/pull/2390), " +"[#2493](https://github.com/adap/flower/pull/2493))" #: ../../source/ref-changelog.md:27 -#, fuzzy msgid "" -"**Support Federated Learning with Apple MLX and Flower** ([#2693](https://" -"github.com/adap/flower/pull/2693))" +"In addition to the Flower Next *high-level* API that uses `Strategy`, " +"`NumPyClient` & co, Flower 1.8 also comes with a preview version of the " +"new Flower Next *low-level* API. 
The low-level API allows for granular " +"control of every aspect of the learning process by sending/receiving " +"individual messages to/from client nodes. The new `ServerApp` supports " +"registering a custom `main` function that allows writing custom training " +"loops for methods like async FL, cyclic training, or federated analytics." +" The new `ClientApp` supports registering `train`, `evaluate` and `query`" +" functions that can access the raw message received from the `ServerApp`." +" New abstractions like `RecordSet`, `Message` and `Context` further " +"enable sending multiple models, multiple sets of config values and " +"metrics, stateful computations on the client node and implementations of " +"custom SMPC protocols, to name just a few." msgstr "" -"** 添加使用 fastai 和 Flower 进行联邦学习的新示例** ([#1598](https://github." -"com/adap/flower/pull/1598))" #: ../../source/ref-changelog.md:29 +#, fuzzy msgid "" -"Flower has official support for federated learning using [Apple MLX](https://" -"ml-explore.github.io/mlx) via the new `quickstart-mlx` code example." +"**Introduce Flower Mods (preview)** " +"([#3054](https://github.com/adap/flower/pull/3054), " +"[#2911](https://github.com/adap/flower/pull/2911), " +"[#3083](https://github.com/adap/flower/pull/3083))" msgstr "" +"**引入新的模拟引擎** ([#1969](https://github.com/adap/flower/pull/1969), " +"[#2221](https://github.com/adap/flower/pull/2221), " +"[#2248](https://github.com/adap/flower/pull/2248))" #: ../../source/ref-changelog.md:31 -#, fuzzy msgid "" -"**Introduce new XGBoost cyclic strategy** ([#2666](https://github.com/adap/" -"flower/pull/2666), [#2668](https://github.com/adap/flower/pull/2668))" +"Flower Modifiers (we call them Mods) can intercept messages and analyze, " +"edit or handle them directly. Mods can be used to develop pluggable " +"modules that work across different projects. Flower 1.8 already includes " +"mods to log the size of a message, the number of parameters sent over the" +" network, differential privacy with fixed clipping and adaptive clipping," +" local differential privacy and secure aggregation protocols SecAgg and " +"SecAgg+. The Flower Mods API is released as a preview, but researchers " +"can already use it to experiment with arbirtrary SMPC protocols." msgstr "" -"**介绍 iOS SDK(预览版)** ([#1621](https://github.com/adap/flower/" -"pull/1621), [#1764](https://github.com/adap/flower/pull/1764))" #: ../../source/ref-changelog.md:33 +#, fuzzy msgid "" -"A new strategy called `FedXgbCyclic` supports a client-by-client style of " -"training (often called cyclic). The `xgboost-comprehensive` code example " -"shows how to use it in a full project. In addition to that, `xgboost-" -"comprehensive` now also supports simulation mode. With this, Flower offers " -"best-in-class XGBoost support." 
+"**Fine-tune LLMs with LLM FlowerTune** " +"([#3029](https://github.com/adap/flower/pull/3029), " +"[#3089](https://github.com/adap/flower/pull/3089), " +"[#3092](https://github.com/adap/flower/pull/3092), " +"[#3100](https://github.com/adap/flower/pull/3100), " +"[#3114](https://github.com/adap/flower/pull/3114), " +"[#3162](https://github.com/adap/flower/pull/3162), " +"[#3172](https://github.com/adap/flower/pull/3172))" msgstr "" +"**改进教程** ([#1468](https://github.com/adap/flower/pull/1468), " +"[#1470](https://github.com/adap/flower/pull/1470), " +"[#1472](https://github.com/adap/flower/pull/1472), " +"[#1473](https://github.com/adap/flower/pull/1473), " +"[#1474](https://github.com/adap/flower/pull/1474), " +"[#1475](https://github.com/adap/flower/pull/1475)))" #: ../../source/ref-changelog.md:35 -#, fuzzy msgid "" -"**Support Python 3.11** ([#2394](https://github.com/adap/flower/pull/2394))" +"We are introducing LLM FlowerTune, an introductory example that " +"demonstrates federated LLM fine-tuning of pre-trained Llama2 models on " +"the Alpaca-GPT4 dataset. The example is built to be easily adapted to use" +" different models and/or datasets. Read our blog post [LLM FlowerTune: " +"Federated LLM Fine-tuning with Flower](https://flower.ai/blog/2024-03-14" +"-llm-flowertune-federated-llm-finetuning-with-flower/) for more details." msgstr "" -"** 支持 Python 3.10** ([#1320](https://github.com/adap/flower/pull/1320))" #: ../../source/ref-changelog.md:37 +#, fuzzy msgid "" -"Framework tests now run on Python 3.8, 3.9, 3.10, and 3.11. This will ensure " -"better support for users using more recent Python versions." -msgstr "" +"**Introduce built-in Differential Privacy (preview)** " +"([#2798](https://github.com/adap/flower/pull/2798), " +"[#2959](https://github.com/adap/flower/pull/2959), " +"[#3038](https://github.com/adap/flower/pull/3038), " +"[#3147](https://github.com/adap/flower/pull/3147), " +"[#2909](https://github.com/adap/flower/pull/2909), " +"[#2893](https://github.com/adap/flower/pull/2893), " +"[#2892](https://github.com/adap/flower/pull/2892), " +"[#3039](https://github.com/adap/flower/pull/3039), " +"[#3074](https://github.com/adap/flower/pull/3074))" +msgstr "" +"** 支持 SSL 的服务器和客户端** ([#842](https://github.com/adap/flower/pull/842), " +"[#844](https://github.com/adap/flower/pull/844), " +"[#845](https://github.com/adap/flower/pull/845), " +"[#847](https://github.com/adap/flower/pull/847), " +"[#993](https://github.com/adap/flower/pull/993), " +"[#994](https://github.com/adap/flower/pull/994))" #: ../../source/ref-changelog.md:39 -#, fuzzy msgid "" -"**Update gRPC and ProtoBuf dependencies** ([#2814](https://github.com/adap/" -"flower/pull/2814))" +"Built-in Differential Privacy is here! Flower supports both central and " +"local differential privacy (DP). Central DP can be configured with either" +" fixed or adaptive clipping. The clipping can happen either on the " +"server-side or the client-side. Local DP does both clipping and noising " +"on the client-side. A new documentation page [explains Differential " +"Privacy approaches](https://flower.ai/docs/framework/explanation-" +"differential-privacy.html) and a new how-to guide describes [how to use " +"the new Differential Privacy components](https://flower.ai/docs/framework" +"/how-to-use-differential-privacy.html) in Flower." 
msgstr "" -"**更新 REST API 以支持创建和删除节点** ([#2283](https://github.com/adap/" -"flower/pull/2283))" #: ../../source/ref-changelog.md:41 +#, fuzzy msgid "" -"The `grpcio` and `protobuf` dependencies were updated to their latest " -"versions for improved security and performance." +"**Introduce built-in Secure Aggregation (preview)** " +"([#3120](https://github.com/adap/flower/pull/3120), " +"[#3110](https://github.com/adap/flower/pull/3110), " +"[#3108](https://github.com/adap/flower/pull/3108))" msgstr "" +"**引入新的模拟引擎** ([#1969](https://github.com/adap/flower/pull/1969), " +"[#2221](https://github.com/adap/flower/pull/2221), " +"[#2248](https://github.com/adap/flower/pull/2248))" #: ../../source/ref-changelog.md:43 -#, fuzzy msgid "" -"**Introduce Docker image for Flower server** ([#2700](https://github.com/" -"adap/flower/pull/2700), [#2688](https://github.com/adap/flower/pull/2688), " -"[#2705](https://github.com/adap/flower/pull/2705), [#2695](https://github." -"com/adap/flower/pull/2695), [#2747](https://github.com/adap/flower/" -"pull/2747), [#2746](https://github.com/adap/flower/pull/2746), [#2680]" -"(https://github.com/adap/flower/pull/2680), [#2682](https://github.com/adap/" -"flower/pull/2682), [#2701](https://github.com/adap/flower/pull/2701))" +"Built-in Secure Aggregation is here! Flower now supports different secure" +" aggregation protocols out-of-the-box. The best part? You can add secure " +"aggregation to your Flower projects with only a few lines of code. In " +"this initial release, we inlcude support for SecAgg and SecAgg+, but more" +" protocols will be implemented shortly. We'll also add detailed docs that" +" explain secure aggregation and how to use it in Flower. You can already " +"check out the new code example that shows how to use Flower to easily " +"combine Federated Learning, Differential Privacy and Secure Aggregation " +"in the same project." msgstr "" -"** 支持 SSL 的服务器和客户端** ([#842](https://github.com/adap/flower/" -"pull/842), [#844](https://github.com/adap/flower/pull/844), [#845](https://" -"github.com/adap/flower/pull/845), [#847](https://github.com/adap/flower/" -"pull/847), [#993](https://github.com/adap/flower/pull/993), [#994](https://" -"github.com/adap/flower/pull/994))" #: ../../source/ref-changelog.md:45 +#, fuzzy msgid "" -"The Flower server can now be run using an official Docker image. A new how-" -"to guide explains [how to run Flower using Docker](https://flower.ai/docs/" -"framework/how-to-run-flower-using-docker.html). An official Flower client " -"Docker image will follow." 
-msgstr "" +"**Introduce** `flwr` **CLI (preview)** " +"([#2942](https://github.com/adap/flower/pull/2942), " +"[#3055](https://github.com/adap/flower/pull/3055), " +"[#3111](https://github.com/adap/flower/pull/3111), " +"[#3130](https://github.com/adap/flower/pull/3130), " +"[#3136](https://github.com/adap/flower/pull/3136), " +"[#3094](https://github.com/adap/flower/pull/3094), " +"[#3059](https://github.com/adap/flower/pull/3059), " +"[#3049](https://github.com/adap/flower/pull/3049), " +"[#3142](https://github.com/adap/flower/pull/3142))" +msgstr "" +"**普通改进**([#1872](https://github.com/adap/flower/pull/1872), " +"[#1866](https://github.com/adap/flower/pull/1866), " +"[#1884](https://github.com/adap/flower/pull/1884), " +"[#1837](https://github.com/adap/flower/pull/1837), " +"[#1477](https://github.com/adap/flower/pull/1477), " +"[#2171](https://github.com/adap/flower/pull/2171))" #: ../../source/ref-changelog.md:47 -#, fuzzy msgid "" -"**Introduce** `flower-via-docker-compose` **example** ([#2626](https://" -"github.com/adap/flower/pull/2626))" +"A new `flwr` CLI command allows creating new Flower projects (`flwr new`)" +" and then running them using the Simulation Engine (`flwr run`)." msgstr "" -"**介绍Flower Android SDK** ([#2131](https://github.com/adap/flower/" -"pull/2131))" #: ../../source/ref-changelog.md:49 #, fuzzy msgid "" -"**Introduce** `quickstart-sklearn-tabular` **example** ([#2719](https://" -"github.com/adap/flower/pull/2719))" -msgstr "" -"**引入 start_driver**([#1697](https://github.com/adap/flower/pull/1697))" +"**Introduce Flower Next Simulation Engine** " +"([#3024](https://github.com/adap/flower/pull/3024), " +"[#3061](https://github.com/adap/flower/pull/3061), " +"[#2997](https://github.com/adap/flower/pull/2997), " +"[#2783](https://github.com/adap/flower/pull/2783), " +"[#3184](https://github.com/adap/flower/pull/3184), " +"[#3075](https://github.com/adap/flower/pull/3075), " +"[#3047](https://github.com/adap/flower/pull/3047), " +"[#2998](https://github.com/adap/flower/pull/2998), " +"[#3009](https://github.com/adap/flower/pull/3009), " +"[#3008](https://github.com/adap/flower/pull/3008))" +msgstr "" +"**引入(试验性)REST API** ([#1594](https://github.com/adap/flower/pull/1594), " +"[#1690](https://github.com/adap/flower/pull/1690), " +"[#1695](https://github.com/adap/flower/pull/1695), " +"[#1712](https://github.com/adap/flower/pull/1712), " +"[#1802](https://github.com/adap/flower/pull/1802), " +"[#1770](https://github.com/adap/flower/pull/1770), " +"[#1733](https://github.com/adap/flower/pull/1733))" #: ../../source/ref-changelog.md:51 -#, fuzzy msgid "" -"**Introduce** `custom-metrics` **example** ([#1958](https://github.com/adap/" -"flower/pull/1958))" +"The Flower Simulation Engine can now run Flower Next projects. For " +"notebook environments, there's also a new `run_simulation` function that " +"can run `ServerApp` and `ClientApp`." msgstr "" -"**引入 start_driver**([#1697](https://github.com/adap/flower/pull/1697))" #: ../../source/ref-changelog.md:53 #, fuzzy msgid "" -"**Update code examples to use Flower Datasets** ([#2450](https://github.com/" -"adap/flower/pull/2450), [#2456](https://github.com/adap/flower/pull/2456), " -"[#2318](https://github.com/adap/flower/pull/2318), [#2712](https://github." 
-"com/adap/flower/pull/2712))" -msgstr "" -"更新开发人员工具([#1231](https://github.com/adap/flower/pull/1231), [#1276]" -"(https://github.com/adap/flower/pull/1276), [#1301](https://github.com/adap/" -"flower/pull/1301), [#1310](https://github.com/adap/flower/pull/1310)" +"**Handle SuperNode connection errors** " +"([#2969](https://github.com/adap/flower/pull/2969))" +msgstr "** 添加一个新的 gRPC 选项**([#2197](https://github.com/adap/flower/pull/2197))" #: ../../source/ref-changelog.md:55 msgid "" -"Several code examples were updated to use [Flower Datasets](https://flower." -"ai/docs/datasets/)." +"A SuperNode will now try to reconnect indefinitely to the SuperLink in " +"case of connection errors. The arguments `--max-retries` and `--max-wait-" +"time` can now be passed to the `flower-client-app` command. `--max-" +"retries` will define the number of tentatives the client should make " +"before it gives up trying to reconnect to the SuperLink, and, `--max-" +"wait-time` defines the time before the SuperNode gives up trying to " +"reconnect to the SuperLink." msgstr "" #: ../../source/ref-changelog.md:57 #, fuzzy msgid "" -"**General updates to Flower Examples** ([#2381](https://github.com/adap/" -"flower/pull/2381), [#2805](https://github.com/adap/flower/pull/2805), [#2782]" -"(https://github.com/adap/flower/pull/2782), [#2806](https://github.com/adap/" -"flower/pull/2806), [#2829](https://github.com/adap/flower/pull/2829), [#2825]" -"(https://github.com/adap/flower/pull/2825), [#2816](https://github.com/adap/" -"flower/pull/2816), [#2726](https://github.com/adap/flower/pull/2726), [#2659]" -"(https://github.com/adap/flower/pull/2659), [#2655](https://github.com/adap/" -"flower/pull/2655))" +"**General updates to Flower Baselines** " +"([#2904](https://github.com/adap/flower/pull/2904), " +"[#2482](https://github.com/adap/flower/pull/2482), " +"[#2985](https://github.com/adap/flower/pull/2985), " +"[#2968](https://github.com/adap/flower/pull/2968))" msgstr "" -"**改进(试验性)驱动程序应用程序接口** ([#1663](https://github.com/adap/" -"flower/pull/1663), [#1666](https://github.com/adap/flower/pull/1666), [#1667]" -"(https://github.com/adap/flower/pull/1667), [#1664](https://github.com/adap/" -"flower/pull/1664), [#1675](https://github.com/adap/flower/pull/1675), [#1676]" -"(https://github.com/adap/flower/pull/1676), [#1693](https://github.com/adap/" -"flower/pull/1693), [#1662](https://github.com/adap/flower/pull/1662), [#1794]" -"(https://github.com/adap/flower/pull/1794))" +"**引入新的 Flower Baseline: FedProx MNIST** " +"([#1513](https://github.com/adap/flower/pull/1513), " +"[#1680](https://github.com/adap/flower/pull/1680), " +"[#1681](https://github.com/adap/flower/pull/1681), " +"[#1679](https://github.com/adap/flower/pull/1679)" #: ../../source/ref-changelog.md:59 +msgid "" +"There's a new [FedStar](https://flower.ai/docs/baselines/fedstar.html) " +"baseline. Several other baselined have been updated as well." 
+msgstr "" + +#: ../../source/ref-changelog.md:61 +msgid "" +"**Improve documentation and translations** " +"([#3050](https://github.com/adap/flower/pull/3050), " +"[#3044](https://github.com/adap/flower/pull/3044), " +"[#3043](https://github.com/adap/flower/pull/3043), " +"[#2986](https://github.com/adap/flower/pull/2986), " +"[#3041](https://github.com/adap/flower/pull/3041), " +"[#3046](https://github.com/adap/flower/pull/3046), " +"[#3042](https://github.com/adap/flower/pull/3042), " +"[#2978](https://github.com/adap/flower/pull/2978), " +"[#2952](https://github.com/adap/flower/pull/2952), " +"[#3167](https://github.com/adap/flower/pull/3167), " +"[#2953](https://github.com/adap/flower/pull/2953), " +"[#3045](https://github.com/adap/flower/pull/3045), " +"[#2654](https://github.com/adap/flower/pull/2654), " +"[#3082](https://github.com/adap/flower/pull/3082), " +"[#2990](https://github.com/adap/flower/pull/2990), " +"[#2989](https://github.com/adap/flower/pull/2989))" +msgstr "" + +#: ../../source/ref-changelog.md:63 +msgid "" +"As usual, we merged many smaller and larger improvements to the " +"documentation. A special thank you goes to [Sebastian van der " +"Voort](https://github.com/svdvoort) for landing a big documentation PR!" +msgstr "" + +#: ../../source/ref-changelog.md:65 +#, fuzzy +msgid "" +"**General updates to Flower Examples** " +"([3134](https://github.com/adap/flower/pull/3134), " +"[2996](https://github.com/adap/flower/pull/2996), " +"[2930](https://github.com/adap/flower/pull/2930), " +"[2967](https://github.com/adap/flower/pull/2967), " +"[2467](https://github.com/adap/flower/pull/2467), " +"[2910](https://github.com/adap/flower/pull/2910), " +"[#2918](https://github.com/adap/flower/pull/2918), " +"[#2773](https://github.com/adap/flower/pull/2773), " +"[#3063](https://github.com/adap/flower/pull/3063), " +"[#3116](https://github.com/adap/flower/pull/3116), " +"[#3117](https://github.com/adap/flower/pull/3117))" +msgstr "" +"** 更新文档** ([#1494](https://github.com/adap/flower/pull/1494), " +"[#1496](https://github.com/adap/flower/pull/1496), " +"[#1500](https://github.com/adap/flower/pull/1500), " +"[#1503](https://github.com/adap/flower/pull/1503), " +"[#1505](https://github.com/adap/flower/pull/1505), " +"[#1524](https://github.com/adap/flower/pull/1524), " +"[#1518](https://github.com/adap/flower/pull/1518), " +"[#1519](https://github.com/adap/flower/pull/1519), " +"[#1515](https://github.com/adap/flower/pull/1515))" + +#: ../../source/ref-changelog.md:67 +msgid "" +"Two new examples show federated training of a Vision Transformer (ViT) " +"and federated learning in a medical context using the popular MONAI " +"library. `quickstart-pytorch` and `quickstart-tensorflow` demonstrate the" +" new Flower Next `ServerApp` and `ClientApp`. Many other examples " +"received considerable updates as well." 
+msgstr "" + +#: ../../source/ref-changelog.md:69 +msgid "" +"**General improvements** " +"([#3171](https://github.com/adap/flower/pull/3171), " +"[3099](https://github.com/adap/flower/pull/3099), " +"[3003](https://github.com/adap/flower/pull/3003), " +"[3145](https://github.com/adap/flower/pull/3145), " +"[3017](https://github.com/adap/flower/pull/3017), " +"[3085](https://github.com/adap/flower/pull/3085), " +"[3012](https://github.com/adap/flower/pull/3012), " +"[3119](https://github.com/adap/flower/pull/3119), " +"[2991](https://github.com/adap/flower/pull/2991), " +"[2970](https://github.com/adap/flower/pull/2970), " +"[2980](https://github.com/adap/flower/pull/2980), " +"[3086](https://github.com/adap/flower/pull/3086), " +"[2932](https://github.com/adap/flower/pull/2932), " +"[2928](https://github.com/adap/flower/pull/2928), " +"[2941](https://github.com/adap/flower/pull/2941), " +"[2933](https://github.com/adap/flower/pull/2933), " +"[3181](https://github.com/adap/flower/pull/3181), " +"[2973](https://github.com/adap/flower/pull/2973), " +"[2992](https://github.com/adap/flower/pull/2992), " +"[2915](https://github.com/adap/flower/pull/2915), " +"[3040](https://github.com/adap/flower/pull/3040), " +"[3022](https://github.com/adap/flower/pull/3022), " +"[3032](https://github.com/adap/flower/pull/3032), " +"[2902](https://github.com/adap/flower/pull/2902), " +"[2931](https://github.com/adap/flower/pull/2931), " +"[3005](https://github.com/adap/flower/pull/3005), " +"[3132](https://github.com/adap/flower/pull/3132), " +"[3115](https://github.com/adap/flower/pull/3115), " +"[2944](https://github.com/adap/flower/pull/2944), " +"[3064](https://github.com/adap/flower/pull/3064), " +"[3106](https://github.com/adap/flower/pull/3106), " +"[2974](https://github.com/adap/flower/pull/2974), " +"[3178](https://github.com/adap/flower/pull/3178), " +"[2993](https://github.com/adap/flower/pull/2993), " +"[3186](https://github.com/adap/flower/pull/3186), " +"[3091](https://github.com/adap/flower/pull/3091), " +"[3125](https://github.com/adap/flower/pull/3125), " +"[3093](https://github.com/adap/flower/pull/3093), " +"[3013](https://github.com/adap/flower/pull/3013), " +"[3033](https://github.com/adap/flower/pull/3033), " +"[3133](https://github.com/adap/flower/pull/3133), " +"[3068](https://github.com/adap/flower/pull/3068), " +"[2916](https://github.com/adap/flower/pull/2916), " +"[2975](https://github.com/adap/flower/pull/2975), " +"[2984](https://github.com/adap/flower/pull/2984), " +"[2846](https://github.com/adap/flower/pull/2846), " +"[3077](https://github.com/adap/flower/pull/3077), " +"[3143](https://github.com/adap/flower/pull/3143), " +"[2921](https://github.com/adap/flower/pull/2921), " +"[3101](https://github.com/adap/flower/pull/3101), " +"[2927](https://github.com/adap/flower/pull/2927), " +"[2995](https://github.com/adap/flower/pull/2995), " +"[2972](https://github.com/adap/flower/pull/2972), " +"[2912](https://github.com/adap/flower/pull/2912), " +"[3065](https://github.com/adap/flower/pull/3065), " +"[3028](https://github.com/adap/flower/pull/3028), " +"[2922](https://github.com/adap/flower/pull/2922), " +"[2982](https://github.com/adap/flower/pull/2982), " +"[2914](https://github.com/adap/flower/pull/2914), " +"[3179](https://github.com/adap/flower/pull/3179), " +"[3080](https://github.com/adap/flower/pull/3080), " +"[2994](https://github.com/adap/flower/pull/2994), " +"[3187](https://github.com/adap/flower/pull/3187), " +"[2926](https://github.com/adap/flower/pull/2926), " 
+"[3018](https://github.com/adap/flower/pull/3018), " +"[3144](https://github.com/adap/flower/pull/3144), " +"[3011](https://github.com/adap/flower/pull/3011), " +"[#3152](https://github.com/adap/flower/pull/3152), " +"[#2836](https://github.com/adap/flower/pull/2836), " +"[#2929](https://github.com/adap/flower/pull/2929), " +"[#2943](https://github.com/adap/flower/pull/2943), " +"[#2955](https://github.com/adap/flower/pull/2955), " +"[#2954](https://github.com/adap/flower/pull/2954))" +msgstr "" + +#: ../../source/ref-changelog.md:75 +#, fuzzy +msgid "v1.7.0 (2024-02-05)" +msgstr "v1.3.0 (2023-02-06)" + +#: ../../source/ref-changelog.md:81 +#, fuzzy +msgid "" +"`Aasheesh Singh`, `Adam Narozniak`, `Aml Hassan Esmil`, `Charles " +"Beauville`, `Daniel J. Beutel`, `Daniel Nata Nugraha`, `Edoardo " +"Gabrielli`, `Gustavo Bertoli`, `HelinLin`, `Heng Pan`, `Javier`, `M S " +"Chaitanya Kumar`, `Mohammad Naseri`, `Nikos Vlachakis`, `Pritam Neog`, " +"`Robert Kuska`, `Robert Steiner`, `Taner Topal`, `Yahia Salaheldin " +"Shaaban`, `Yan Gao`, `Yasar Abbas` " +msgstr "" +"`Adam Narozniak`, `Anass Anhari`, `Charles Beauville`, `Dana-Farber`, " +"`Daniel J. Beutel`, `Daniel Nata Nugraha`, `Edoardo Gabrielli`, `Gustavo " +"Bertoli`, `Heng Pan`, `Javier`, `Mahdi`, `Steven Hé (Sīchàng)`, `Taner " +"Topal`, `achiverram28`, `danielnugraha`, `eunchung`, `ruthgal` " + +#: ../../source/ref-changelog.md:85 +#, fuzzy +msgid "" +"**Introduce stateful clients (experimental)** " +"([#2770](https://github.com/adap/flower/pull/2770), " +"[#2686](https://github.com/adap/flower/pull/2686), " +"[#2696](https://github.com/adap/flower/pull/2696), " +"[#2643](https://github.com/adap/flower/pull/2643), " +"[#2769](https://github.com/adap/flower/pull/2769))" +msgstr "" +"** baselines的普通更新** ([#2301](https://github.com/adap/flower/pull/2301), " +"[#2305](https://github.com/adap/flower/pull/2305), " +"[#2307](https://github.com/adap/flower/pull/2307), " +"[#2327](https://github.com/adap/flower/pull/2327), " +"[#2435](https://github.com/adap/flower/pull/2435))" + +#: ../../source/ref-changelog.md:87 +msgid "" +"Subclasses of `Client` and `NumPyClient` can now store local state that " +"remains on the client. Let's start with the highlight first: this new " +"feature is compatible with both simulated clients (via " +"`start_simulation`) and networked clients (via `start_client`). It's also" +" the first preview of new abstractions like `Context` and `RecordSet`. " +"Clients can access state of type `RecordSet` via `state: RecordSet = " +"self.context.state`. Changes to this `RecordSet` are preserved across " +"different rounds of execution to enable stateful computations in a " +"unified way across simulation and deployment." +msgstr "" + +#: ../../source/ref-changelog.md:89 +#, fuzzy +msgid "" +"**Improve performance** " +"([#2293](https://github.com/adap/flower/pull/2293))" +msgstr "**改进示例笔记** ([#2005](https://github.com/adap/flower/pull/2005))" + +#: ../../source/ref-changelog.md:91 +msgid "" +"Flower is faster than ever. All `FedAvg`-derived strategies now use in-" +"place aggregation to reduce memory consumption. The Flower client " +"serialization/deserialization has been rewritten from the ground up, " +"which results in significant speedups, especially when the client-side " +"training time is short." 
+msgstr "" + +#: ../../source/ref-changelog.md:93 +#, fuzzy +msgid "" +"**Support Federated Learning with Apple MLX and Flower** " +"([#2693](https://github.com/adap/flower/pull/2693))" +msgstr "" +"** 添加使用 fastai 和 Flower 进行联邦学习的新示例** " +"([#1598](https://github.com/adap/flower/pull/1598))" + +#: ../../source/ref-changelog.md:95 +msgid "" +"Flower has official support for federated learning using [Apple " +"MLX](https://ml-explore.github.io/mlx) via the new `quickstart-mlx` code " +"example." +msgstr "" + +#: ../../source/ref-changelog.md:97 +#, fuzzy +msgid "" +"**Introduce new XGBoost cyclic strategy** " +"([#2666](https://github.com/adap/flower/pull/2666), " +"[#2668](https://github.com/adap/flower/pull/2668))" +msgstr "" +"**介绍 iOS SDK(预览版)** ([#1621](https://github.com/adap/flower/pull/1621), " +"[#1764](https://github.com/adap/flower/pull/1764))" + +#: ../../source/ref-changelog.md:99 +msgid "" +"A new strategy called `FedXgbCyclic` supports a client-by-client style of" +" training (often called cyclic). The `xgboost-comprehensive` code example" +" shows how to use it in a full project. In addition to that, `xgboost-" +"comprehensive` now also supports simulation mode. With this, Flower " +"offers best-in-class XGBoost support." +msgstr "" + +#: ../../source/ref-changelog.md:101 +#, fuzzy +msgid "" +"**Support Python 3.11** " +"([#2394](https://github.com/adap/flower/pull/2394))" +msgstr "** 支持 Python 3.10** ([#1320](https://github.com/adap/flower/pull/1320))" + +#: ../../source/ref-changelog.md:103 +msgid "" +"Framework tests now run on Python 3.8, 3.9, 3.10, and 3.11. This will " +"ensure better support for users using more recent Python versions." +msgstr "" + +#: ../../source/ref-changelog.md:105 +#, fuzzy +msgid "" +"**Update gRPC and ProtoBuf dependencies** " +"([#2814](https://github.com/adap/flower/pull/2814))" +msgstr "" +"**更新 REST API 以支持创建和删除节点** " +"([#2283](https://github.com/adap/flower/pull/2283))" + +#: ../../source/ref-changelog.md:107 +msgid "" +"The `grpcio` and `protobuf` dependencies were updated to their latest " +"versions for improved security and performance." +msgstr "" + +#: ../../source/ref-changelog.md:109 +#, fuzzy +msgid "" +"**Introduce Docker image for Flower server** " +"([#2700](https://github.com/adap/flower/pull/2700), " +"[#2688](https://github.com/adap/flower/pull/2688), " +"[#2705](https://github.com/adap/flower/pull/2705), " +"[#2695](https://github.com/adap/flower/pull/2695), " +"[#2747](https://github.com/adap/flower/pull/2747), " +"[#2746](https://github.com/adap/flower/pull/2746), " +"[#2680](https://github.com/adap/flower/pull/2680), " +"[#2682](https://github.com/adap/flower/pull/2682), " +"[#2701](https://github.com/adap/flower/pull/2701))" +msgstr "" +"** 支持 SSL 的服务器和客户端** ([#842](https://github.com/adap/flower/pull/842), " +"[#844](https://github.com/adap/flower/pull/844), " +"[#845](https://github.com/adap/flower/pull/845), " +"[#847](https://github.com/adap/flower/pull/847), " +"[#993](https://github.com/adap/flower/pull/993), " +"[#994](https://github.com/adap/flower/pull/994))" + +#: ../../source/ref-changelog.md:111 +msgid "" +"The Flower server can now be run using an official Docker image. A new " +"how-to guide explains [how to run Flower using " +"Docker](https://flower.ai/docs/framework/how-to-run-flower-using-" +"docker.html). An official Flower client Docker image will follow." 
+msgstr "" + +#: ../../source/ref-changelog.md:113 +#, fuzzy +msgid "" +"**Introduce** `flower-via-docker-compose` **example** " +"([#2626](https://github.com/adap/flower/pull/2626))" +msgstr "" +"**介绍Flower Android SDK** " +"([#2131](https://github.com/adap/flower/pull/2131))" + +#: ../../source/ref-changelog.md:115 +#, fuzzy +msgid "" +"**Introduce** `quickstart-sklearn-tabular` **example** " +"([#2719](https://github.com/adap/flower/pull/2719))" +msgstr "**引入 start_driver**([#1697](https://github.com/adap/flower/pull/1697))" + +#: ../../source/ref-changelog.md:117 +#, fuzzy +msgid "" +"**Introduce** `custom-metrics` **example** " +"([#1958](https://github.com/adap/flower/pull/1958))" +msgstr "**引入 start_driver**([#1697](https://github.com/adap/flower/pull/1697))" + +#: ../../source/ref-changelog.md:119 +#, fuzzy +msgid "" +"**Update code examples to use Flower Datasets** " +"([#2450](https://github.com/adap/flower/pull/2450), " +"[#2456](https://github.com/adap/flower/pull/2456), " +"[#2318](https://github.com/adap/flower/pull/2318), " +"[#2712](https://github.com/adap/flower/pull/2712))" +msgstr "" +"更新开发人员工具([#1231](https://github.com/adap/flower/pull/1231), " +"[#1276](https://github.com/adap/flower/pull/1276), " +"[#1301](https://github.com/adap/flower/pull/1301), " +"[#1310](https://github.com/adap/flower/pull/1310)" + +#: ../../source/ref-changelog.md:121 +msgid "" +"Several code examples were updated to use [Flower " +"Datasets](https://flower.ai/docs/datasets/)." +msgstr "" + +#: ../../source/ref-changelog.md:123 +#, fuzzy +msgid "" +"**General updates to Flower Examples** " +"([#2381](https://github.com/adap/flower/pull/2381), " +"[#2805](https://github.com/adap/flower/pull/2805), " +"[#2782](https://github.com/adap/flower/pull/2782), " +"[#2806](https://github.com/adap/flower/pull/2806), " +"[#2829](https://github.com/adap/flower/pull/2829), " +"[#2825](https://github.com/adap/flower/pull/2825), " +"[#2816](https://github.com/adap/flower/pull/2816), " +"[#2726](https://github.com/adap/flower/pull/2726), " +"[#2659](https://github.com/adap/flower/pull/2659), " +"[#2655](https://github.com/adap/flower/pull/2655))" +msgstr "" +"**改进(试验性)驱动程序应用程序接口** ([#1663](https://github.com/adap/flower/pull/1663)," +" [#1666](https://github.com/adap/flower/pull/1666), " +"[#1667](https://github.com/adap/flower/pull/1667), " +"[#1664](https://github.com/adap/flower/pull/1664), " +"[#1675](https://github.com/adap/flower/pull/1675), " +"[#1676](https://github.com/adap/flower/pull/1676), " +"[#1693](https://github.com/adap/flower/pull/1693), " +"[#1662](https://github.com/adap/flower/pull/1662), " +"[#1794](https://github.com/adap/flower/pull/1794))" + +#: ../../source/ref-changelog.md:125 msgid "Many Flower code examples received substantial updates." 
msgstr "" -#: ../../source/ref-changelog.md:61 ../../source/ref-changelog.md:154 +#: ../../source/ref-changelog.md:127 ../../source/ref-changelog.md:220 msgid "**Update Flower Baselines**" msgstr "**更新 Flower Baselines**" -#: ../../source/ref-changelog.md:63 +#: ../../source/ref-changelog.md:129 #, fuzzy msgid "" -"HFedXGBoost ([#2226](https://github.com/adap/flower/pull/2226), [#2771]" -"(https://github.com/adap/flower/pull/2771))" +"HFedXGBoost ([#2226](https://github.com/adap/flower/pull/2226), " +"[#2771](https://github.com/adap/flower/pull/2771))" msgstr "" -"FedBN ([#2608](https://github.com/adap/flower/pull/2608), [#2615](https://" -"github.com/adap/flower/pull/2615))" +"FedBN ([#2608](https://github.com/adap/flower/pull/2608), " +"[#2615](https://github.com/adap/flower/pull/2615))" -#: ../../source/ref-changelog.md:64 +#: ../../source/ref-changelog.md:130 #, fuzzy msgid "FedVSSL ([#2412](https://github.com/adap/flower/pull/2412))" msgstr "FjORD [#2431](https://github.com/adap/flower/pull/2431)" -#: ../../source/ref-changelog.md:65 +#: ../../source/ref-changelog.md:131 #, fuzzy msgid "FedNova ([#2179](https://github.com/adap/flower/pull/2179))" msgstr "FjORD [#2431](https://github.com/adap/flower/pull/2431)" -#: ../../source/ref-changelog.md:66 +#: ../../source/ref-changelog.md:132 #, fuzzy msgid "HeteroFL ([#2439](https://github.com/adap/flower/pull/2439))" msgstr "FedMeta [#2438](https://github.com/adap/flower/pull/2438)" -#: ../../source/ref-changelog.md:67 +#: ../../source/ref-changelog.md:133 #, fuzzy msgid "FedAvgM ([#2246](https://github.com/adap/flower/pull/2246))" msgstr "FedPer [#2266](https://github.com/adap/flower/pull/2266)" -#: ../../source/ref-changelog.md:68 +#: ../../source/ref-changelog.md:134 #, fuzzy msgid "FedPara ([#2722](https://github.com/adap/flower/pull/2722))" msgstr "FedPer [#2266](https://github.com/adap/flower/pull/2266)" -#: ../../source/ref-changelog.md:70 -#, fuzzy -msgid "" -"**Improve documentation** ([#2674](https://github.com/adap/flower/" -"pull/2674), [#2480](https://github.com/adap/flower/pull/2480), [#2826]" -"(https://github.com/adap/flower/pull/2826), [#2727](https://github.com/adap/" -"flower/pull/2727), [#2761](https://github.com/adap/flower/pull/2761), [#2900]" -"(https://github.com/adap/flower/pull/2900))" -msgstr "" -"** 更新文档** ([#1629](https://github.com/adap/flower/pull/1629), [#1628]" -"(https://github.com/adap/flower/pull/1628), [#1620](https://github.com/adap/" -"flower/pull/1620), [#1618](https://github.com/adap/flower/pull/1618), [#1617]" -"(https://github.com/adap/flower/pull/1617), [#1613](https://github.com/adap/" -"flower/pull/1613), [#1614](https://github.com/adap/flower/pull/1614)))" - -#: ../../source/ref-changelog.md:72 -msgid "" -"**Improved testing and development infrastructure** ([#2797](https://github." 
-"com/adap/flower/pull/2797), [#2676](https://github.com/adap/flower/" -"pull/2676), [#2644](https://github.com/adap/flower/pull/2644), [#2656]" -"(https://github.com/adap/flower/pull/2656), [#2848](https://github.com/adap/" -"flower/pull/2848), [#2675](https://github.com/adap/flower/pull/2675), [#2735]" -"(https://github.com/adap/flower/pull/2735), [#2767](https://github.com/adap/" -"flower/pull/2767), [#2732](https://github.com/adap/flower/pull/2732), [#2744]" -"(https://github.com/adap/flower/pull/2744), [#2681](https://github.com/adap/" -"flower/pull/2681), [#2699](https://github.com/adap/flower/pull/2699), [#2745]" -"(https://github.com/adap/flower/pull/2745), [#2734](https://github.com/adap/" -"flower/pull/2734), [#2731](https://github.com/adap/flower/pull/2731), [#2652]" -"(https://github.com/adap/flower/pull/2652), [#2720](https://github.com/adap/" -"flower/pull/2720), [#2721](https://github.com/adap/flower/pull/2721), [#2717]" -"(https://github.com/adap/flower/pull/2717), [#2864](https://github.com/adap/" -"flower/pull/2864), [#2694](https://github.com/adap/flower/pull/2694), [#2709]" -"(https://github.com/adap/flower/pull/2709), [#2658](https://github.com/adap/" -"flower/pull/2658), [#2796](https://github.com/adap/flower/pull/2796), [#2692]" -"(https://github.com/adap/flower/pull/2692), [#2657](https://github.com/adap/" -"flower/pull/2657), [#2813](https://github.com/adap/flower/pull/2813), [#2661]" -"(https://github.com/adap/flower/pull/2661), [#2398](https://github.com/adap/" -"flower/pull/2398))" -msgstr "" - -#: ../../source/ref-changelog.md:74 -msgid "" -"The Flower testing and development infrastructure has received substantial " -"updates. This makes Flower 1.7 the most tested release ever." -msgstr "" - -#: ../../source/ref-changelog.md:76 -#, fuzzy -msgid "" -"**Update dependencies** ([#2753](https://github.com/adap/flower/pull/2753), " -"[#2651](https://github.com/adap/flower/pull/2651), [#2739](https://github." 
-"com/adap/flower/pull/2739), [#2837](https://github.com/adap/flower/" -"pull/2837), [#2788](https://github.com/adap/flower/pull/2788), [#2811]" -"(https://github.com/adap/flower/pull/2811), [#2774](https://github.com/adap/" -"flower/pull/2774), [#2790](https://github.com/adap/flower/pull/2790), [#2751]" -"(https://github.com/adap/flower/pull/2751), [#2850](https://github.com/adap/" -"flower/pull/2850), [#2812](https://github.com/adap/flower/pull/2812), [#2872]" -"(https://github.com/adap/flower/pull/2872), [#2736](https://github.com/adap/" -"flower/pull/2736), [#2756](https://github.com/adap/flower/pull/2756), [#2857]" -"(https://github.com/adap/flower/pull/2857), [#2757](https://github.com/adap/" -"flower/pull/2757), [#2810](https://github.com/adap/flower/pull/2810), [#2740]" -"(https://github.com/adap/flower/pull/2740), [#2789](https://github.com/adap/" -"flower/pull/2789))" -msgstr "" -"**更新Example** ([#1772](https://github.com/adap/flower/pull/1772), [#1873]" -"(https://github.com/adap/flower/pull/1873), [#1981](https://github.com/adap/" -"flower/pull/1981), [#1988](https://github.com/adap/flower/pull/1988), [#1984]" -"(https://github.com/adap/flower/pull/1984), [#1982](https://github.com/adap/" -"flower/pull/1982), [#2112](https://github.com/adap/flower/pull/2112), [#2144]" -"(https://github.com/adap/flower/pull/2144), [#2174](https://github.com/adap/" -"flower/pull/2174), [#2225](https://github.com/adap/flower/pull/2225), [#2183]" -"(https://github.com/adap/flower/pull/2183))" - -#: ../../source/ref-changelog.md:78 -msgid "" -"**General improvements** ([#2803](https://github.com/adap/flower/pull/2803), " -"[#2847](https://github.com/adap/flower/pull/2847), [#2877](https://github." -"com/adap/flower/pull/2877), [#2690](https://github.com/adap/flower/" -"pull/2690), [#2889](https://github.com/adap/flower/pull/2889), [#2874]" -"(https://github.com/adap/flower/pull/2874), [#2819](https://github.com/adap/" -"flower/pull/2819), [#2689](https://github.com/adap/flower/pull/2689), [#2457]" -"(https://github.com/adap/flower/pull/2457), [#2870](https://github.com/adap/" -"flower/pull/2870), [#2669](https://github.com/adap/flower/pull/2669), [#2876]" -"(https://github.com/adap/flower/pull/2876), [#2885](https://github.com/adap/" -"flower/pull/2885), [#2858](https://github.com/adap/flower/pull/2858), [#2867]" -"(https://github.com/adap/flower/pull/2867), [#2351](https://github.com/adap/" -"flower/pull/2351), [#2886](https://github.com/adap/flower/pull/2886), [#2860]" -"(https://github.com/adap/flower/pull/2860), [#2828](https://github.com/adap/" -"flower/pull/2828), [#2869](https://github.com/adap/flower/pull/2869), [#2875]" -"(https://github.com/adap/flower/pull/2875), [#2733](https://github.com/adap/" -"flower/pull/2733), [#2488](https://github.com/adap/flower/pull/2488), [#2646]" -"(https://github.com/adap/flower/pull/2646), [#2879](https://github.com/adap/" -"flower/pull/2879), [#2821](https://github.com/adap/flower/pull/2821), [#2855]" -"(https://github.com/adap/flower/pull/2855), [#2800](https://github.com/adap/" -"flower/pull/2800), [#2807](https://github.com/adap/flower/pull/2807), [#2801]" -"(https://github.com/adap/flower/pull/2801), [#2804](https://github.com/adap/" -"flower/pull/2804), [#2851](https://github.com/adap/flower/pull/2851), [#2787]" -"(https://github.com/adap/flower/pull/2787), [#2852](https://github.com/adap/" -"flower/pull/2852), [#2672](https://github.com/adap/flower/pull/2672), [#2759]" -"(https://github.com/adap/flower/pull/2759))" -msgstr "" - -#: 
../../source/ref-changelog.md:82 -#, fuzzy -msgid "" -"**Deprecate** `start_numpy_client` ([#2563](https://github.com/adap/flower/" -"pull/2563), [#2718](https://github.com/adap/flower/pull/2718))" -msgstr "" -"TAMUNA ([#2254](https://github.com/adap/flower/pull/2254), [#2508](https://" -"github.com/adap/flower/pull/2508))" - -#: ../../source/ref-changelog.md:84 +#: ../../source/ref-changelog.md:136 +#, fuzzy +msgid "" +"**Improve documentation** " +"([#2674](https://github.com/adap/flower/pull/2674), " +"[#2480](https://github.com/adap/flower/pull/2480), " +"[#2826](https://github.com/adap/flower/pull/2826), " +"[#2727](https://github.com/adap/flower/pull/2727), " +"[#2761](https://github.com/adap/flower/pull/2761), " +"[#2900](https://github.com/adap/flower/pull/2900))" +msgstr "" +"** 更新文档** ([#1629](https://github.com/adap/flower/pull/1629), " +"[#1628](https://github.com/adap/flower/pull/1628), " +"[#1620](https://github.com/adap/flower/pull/1620), " +"[#1618](https://github.com/adap/flower/pull/1618), " +"[#1617](https://github.com/adap/flower/pull/1617), " +"[#1613](https://github.com/adap/flower/pull/1613), " +"[#1614](https://github.com/adap/flower/pull/1614)))" + +#: ../../source/ref-changelog.md:138 msgid "" -"Until now, clients of type `NumPyClient` needed to be started via " -"`start_numpy_client`. In our efforts to consolidate framework APIs, we have " -"introduced changes, and now all client types should start via " -"`start_client`. To continue using `NumPyClient` clients, you simply need to " -"first call the `.to_client()` method and then pass returned `Client` object " -"to `start_client`. The examples and the documentation have been updated " -"accordingly." +"**Improved testing and development infrastructure** " +"([#2797](https://github.com/adap/flower/pull/2797), " +"[#2676](https://github.com/adap/flower/pull/2676), " +"[#2644](https://github.com/adap/flower/pull/2644), " +"[#2656](https://github.com/adap/flower/pull/2656), " +"[#2848](https://github.com/adap/flower/pull/2848), " +"[#2675](https://github.com/adap/flower/pull/2675), " +"[#2735](https://github.com/adap/flower/pull/2735), " +"[#2767](https://github.com/adap/flower/pull/2767), " +"[#2732](https://github.com/adap/flower/pull/2732), " +"[#2744](https://github.com/adap/flower/pull/2744), " +"[#2681](https://github.com/adap/flower/pull/2681), " +"[#2699](https://github.com/adap/flower/pull/2699), " +"[#2745](https://github.com/adap/flower/pull/2745), " +"[#2734](https://github.com/adap/flower/pull/2734), " +"[#2731](https://github.com/adap/flower/pull/2731), " +"[#2652](https://github.com/adap/flower/pull/2652), " +"[#2720](https://github.com/adap/flower/pull/2720), " +"[#2721](https://github.com/adap/flower/pull/2721), " +"[#2717](https://github.com/adap/flower/pull/2717), " +"[#2864](https://github.com/adap/flower/pull/2864), " +"[#2694](https://github.com/adap/flower/pull/2694), " +"[#2709](https://github.com/adap/flower/pull/2709), " +"[#2658](https://github.com/adap/flower/pull/2658), " +"[#2796](https://github.com/adap/flower/pull/2796), " +"[#2692](https://github.com/adap/flower/pull/2692), " +"[#2657](https://github.com/adap/flower/pull/2657), " +"[#2813](https://github.com/adap/flower/pull/2813), " +"[#2661](https://github.com/adap/flower/pull/2661), " +"[#2398](https://github.com/adap/flower/pull/2398))" +msgstr "" + +#: ../../source/ref-changelog.md:140 +msgid "" +"The Flower testing and development infrastructure has received " +"substantial updates. 
This makes Flower 1.7 the most tested release ever." +msgstr "" + +#: ../../source/ref-changelog.md:142 +#, fuzzy +msgid "" +"**Update dependencies** " +"([#2753](https://github.com/adap/flower/pull/2753), " +"[#2651](https://github.com/adap/flower/pull/2651), " +"[#2739](https://github.com/adap/flower/pull/2739), " +"[#2837](https://github.com/adap/flower/pull/2837), " +"[#2788](https://github.com/adap/flower/pull/2788), " +"[#2811](https://github.com/adap/flower/pull/2811), " +"[#2774](https://github.com/adap/flower/pull/2774), " +"[#2790](https://github.com/adap/flower/pull/2790), " +"[#2751](https://github.com/adap/flower/pull/2751), " +"[#2850](https://github.com/adap/flower/pull/2850), " +"[#2812](https://github.com/adap/flower/pull/2812), " +"[#2872](https://github.com/adap/flower/pull/2872), " +"[#2736](https://github.com/adap/flower/pull/2736), " +"[#2756](https://github.com/adap/flower/pull/2756), " +"[#2857](https://github.com/adap/flower/pull/2857), " +"[#2757](https://github.com/adap/flower/pull/2757), " +"[#2810](https://github.com/adap/flower/pull/2810), " +"[#2740](https://github.com/adap/flower/pull/2740), " +"[#2789](https://github.com/adap/flower/pull/2789))" +msgstr "" +"**更新Example** ([#1772](https://github.com/adap/flower/pull/1772), " +"[#1873](https://github.com/adap/flower/pull/1873), " +"[#1981](https://github.com/adap/flower/pull/1981), " +"[#1988](https://github.com/adap/flower/pull/1988), " +"[#1984](https://github.com/adap/flower/pull/1984), " +"[#1982](https://github.com/adap/flower/pull/1982), " +"[#2112](https://github.com/adap/flower/pull/2112), " +"[#2144](https://github.com/adap/flower/pull/2144), " +"[#2174](https://github.com/adap/flower/pull/2174), " +"[#2225](https://github.com/adap/flower/pull/2225), " +"[#2183](https://github.com/adap/flower/pull/2183))" + +#: ../../source/ref-changelog.md:144 +msgid "" +"**General improvements** " +"([#2803](https://github.com/adap/flower/pull/2803), " +"[#2847](https://github.com/adap/flower/pull/2847), " +"[#2877](https://github.com/adap/flower/pull/2877), " +"[#2690](https://github.com/adap/flower/pull/2690), " +"[#2889](https://github.com/adap/flower/pull/2889), " +"[#2874](https://github.com/adap/flower/pull/2874), " +"[#2819](https://github.com/adap/flower/pull/2819), " +"[#2689](https://github.com/adap/flower/pull/2689), " +"[#2457](https://github.com/adap/flower/pull/2457), " +"[#2870](https://github.com/adap/flower/pull/2870), " +"[#2669](https://github.com/adap/flower/pull/2669), " +"[#2876](https://github.com/adap/flower/pull/2876), " +"[#2885](https://github.com/adap/flower/pull/2885), " +"[#2858](https://github.com/adap/flower/pull/2858), " +"[#2867](https://github.com/adap/flower/pull/2867), " +"[#2351](https://github.com/adap/flower/pull/2351), " +"[#2886](https://github.com/adap/flower/pull/2886), " +"[#2860](https://github.com/adap/flower/pull/2860), " +"[#2828](https://github.com/adap/flower/pull/2828), " +"[#2869](https://github.com/adap/flower/pull/2869), " +"[#2875](https://github.com/adap/flower/pull/2875), " +"[#2733](https://github.com/adap/flower/pull/2733), " +"[#2488](https://github.com/adap/flower/pull/2488), " +"[#2646](https://github.com/adap/flower/pull/2646), " +"[#2879](https://github.com/adap/flower/pull/2879), " +"[#2821](https://github.com/adap/flower/pull/2821), " +"[#2855](https://github.com/adap/flower/pull/2855), " +"[#2800](https://github.com/adap/flower/pull/2800), " +"[#2807](https://github.com/adap/flower/pull/2807), " +"[#2801](https://github.com/adap/flower/pull/2801), 
" +"[#2804](https://github.com/adap/flower/pull/2804), " +"[#2851](https://github.com/adap/flower/pull/2851), " +"[#2787](https://github.com/adap/flower/pull/2787), " +"[#2852](https://github.com/adap/flower/pull/2852), " +"[#2672](https://github.com/adap/flower/pull/2672), " +"[#2759](https://github.com/adap/flower/pull/2759))" msgstr "" -#: ../../source/ref-changelog.md:86 +#: ../../source/ref-changelog.md:148 #, fuzzy msgid "" -"**Deprecate legacy DP wrappers** ([#2749](https://github.com/adap/flower/" -"pull/2749))" +"**Deprecate** `start_numpy_client` " +"([#2563](https://github.com/adap/flower/pull/2563), " +"[#2718](https://github.com/adap/flower/pull/2718))" +msgstr "" +"TAMUNA ([#2254](https://github.com/adap/flower/pull/2254), " +"[#2508](https://github.com/adap/flower/pull/2508))" + +#: ../../source/ref-changelog.md:150 +msgid "" +"Until now, clients of type `NumPyClient` needed to be started via " +"`start_numpy_client`. In our efforts to consolidate framework APIs, we " +"have introduced changes, and now all client types should start via " +"`start_client`. To continue using `NumPyClient` clients, you simply need " +"to first call the `.to_client()` method and then pass returned `Client` " +"object to `start_client`. The examples and the documentation have been " +"updated accordingly." msgstr "" -"**移除过时的 KerasClient**([#857](https://github.com/adap/flower/pull/857))" -#: ../../source/ref-changelog.md:88 +#: ../../source/ref-changelog.md:152 +#, fuzzy +msgid "" +"**Deprecate legacy DP wrappers** " +"([#2749](https://github.com/adap/flower/pull/2749))" +msgstr "**移除过时的 KerasClient**([#857](https://github.com/adap/flower/pull/857))" + +#: ../../source/ref-changelog.md:154 msgid "" -"Legacy DP wrapper classes are deprecated, but still functional. This is in " -"preparation for an all-new pluggable version of differential privacy support " -"in Flower." +"Legacy DP wrapper classes are deprecated, but still functional. This is " +"in preparation for an all-new pluggable version of differential privacy " +"support in Flower." 
msgstr "" -#: ../../source/ref-changelog.md:90 +#: ../../source/ref-changelog.md:156 #, fuzzy msgid "" "**Make optional arg** `--callable` **in** `flower-client` **a required " "positional arg** ([#2673](https://github.com/adap/flower/pull/2673))" msgstr "" -"**从** `start_client` 中移除** `rest` **实验参数 ([#2324](https://github.com/" -"adap/flower/pull/2324))" +"**从** `start_client` 中移除** `rest` **实验参数 " +"([#2324](https://github.com/adap/flower/pull/2324))" -#: ../../source/ref-changelog.md:92 +#: ../../source/ref-changelog.md:158 #, fuzzy msgid "" -"**Rename** `certificates` **to** `root_certificates` **in** `Driver` ([#2890]" -"(https://github.com/adap/flower/pull/2890))" +"**Rename** `certificates` **to** `root_certificates` **in** `Driver` " +"([#2890](https://github.com/adap/flower/pull/2890))" msgstr "" -"**重新命名** `rnd` ** to** `server_round` ([#1321](https://github.com/adap/" -"flower/pull/1321))" +"**重新命名** `rnd` ** to** `server_round` " +"([#1321](https://github.com/adap/flower/pull/1321))" -#: ../../source/ref-changelog.md:94 +#: ../../source/ref-changelog.md:160 #, fuzzy msgid "" -"**Drop experimental** `Task` **fields** ([#2866](https://github.com/adap/" -"flower/pull/2866), [#2865](https://github.com/adap/flower/pull/2865))" +"**Drop experimental** `Task` **fields** " +"([#2866](https://github.com/adap/flower/pull/2866), " +"[#2865](https://github.com/adap/flower/pull/2865))" msgstr "" -"FedBN ([#2608](https://github.com/adap/flower/pull/2608), [#2615](https://" -"github.com/adap/flower/pull/2615))" +"FedBN ([#2608](https://github.com/adap/flower/pull/2608), " +"[#2615](https://github.com/adap/flower/pull/2615))" -#: ../../source/ref-changelog.md:96 +#: ../../source/ref-changelog.md:162 msgid "" "Experimental fields `sa`, `legacy_server_message` and " -"`legacy_client_message` were removed from `Task` message. The removed fields " -"are superseded by the new `RecordSet` abstraction." +"`legacy_client_message` were removed from `Task` message. The removed " +"fields are superseded by the new `RecordSet` abstraction." msgstr "" -#: ../../source/ref-changelog.md:98 +#: ../../source/ref-changelog.md:164 #, fuzzy msgid "" -"**Retire MXNet examples** ([#2724](https://github.com/adap/flower/pull/2724))" -msgstr "" -"**新的 scikit-learn 代码示例** ([#748](https://github.com/adap/flower/" -"pull/748))" +"**Retire MXNet examples** " +"([#2724](https://github.com/adap/flower/pull/2724))" +msgstr "**新的 scikit-learn 代码示例** ([#748](https://github.com/adap/flower/pull/748))" -#: ../../source/ref-changelog.md:100 +#: ../../source/ref-changelog.md:166 msgid "" "The development of the MXNet fremework has ended and the project is now " "[archived on GitHub](https://github.com/apache/mxnet). Existing MXNet " "examples won't receive updates." msgstr "" -#: ../../source/ref-changelog.md:102 +#: ../../source/ref-changelog.md:168 #, fuzzy msgid "v1.6.0 (2023-11-28)" msgstr "v1.4.0 (2023-04-21)" -#: ../../source/ref-changelog.md:108 +#: ../../source/ref-changelog.md:174 msgid "" "`Aashish Kolluri`, `Adam Narozniak`, `Alessio Mora`, `Barathwaja S`, " "`Charles Beauville`, `Daniel J. 
Beutel`, `Daniel Nata Nugraha`, `Gabriel " -"Mota`, `Heng Pan`, `Ivan Agarský`, `JS.KIM`, `Javier`, `Marius Schlegel`, " -"`Navin Chandra`, `Nic Lane`, `Peterpan828`, `Qinbin Li`, `Shaz-hash`, `Steve " -"Laskaridis`, `Taner Topal`, `William Lindskog`, `Yan Gao`, `cnxdeveloper`, " -"`k3nfalt` " +"Mota`, `Heng Pan`, `Ivan Agarský`, `JS.KIM`, `Javier`, `Marius Schlegel`," +" `Navin Chandra`, `Nic Lane`, `Peterpan828`, `Qinbin Li`, `Shaz-hash`, " +"`Steve Laskaridis`, `Taner Topal`, `William Lindskog`, `Yan Gao`, " +"`cnxdeveloper`, `k3nfalt` " msgstr "" -#: ../../source/ref-changelog.md:112 +#: ../../source/ref-changelog.md:178 msgid "" -"**Add experimental support for Python 3.12** ([#2565](https://github.com/" -"adap/flower/pull/2565))" +"**Add experimental support for Python 3.12** " +"([#2565](https://github.com/adap/flower/pull/2565))" msgstr "" -"** 增加对 Python 3.12 的实验支持** ([#2565](https://github.com/adap/flower/" -"pull/2565))" +"** 增加对 Python 3.12 的实验支持** " +"([#2565](https://github.com/adap/flower/pull/2565))" -#: ../../source/ref-changelog.md:114 +#: ../../source/ref-changelog.md:180 #, fuzzy msgid "" -"**Add new XGBoost examples** ([#2612](https://github.com/adap/flower/" -"pull/2612), [#2554](https://github.com/adap/flower/pull/2554), [#2617]" -"(https://github.com/adap/flower/pull/2617), [#2618](https://github.com/adap/" -"flower/pull/2618), [#2619](https://github.com/adap/flower/pull/2619), [#2567]" -"(https://github.com/adap/flower/pull/2567))" +"**Add new XGBoost examples** " +"([#2612](https://github.com/adap/flower/pull/2612), " +"[#2554](https://github.com/adap/flower/pull/2554), " +"[#2617](https://github.com/adap/flower/pull/2617), " +"[#2618](https://github.com/adap/flower/pull/2618), " +"[#2619](https://github.com/adap/flower/pull/2619), " +"[#2567](https://github.com/adap/flower/pull/2567))" msgstr "" -"**引入(试验性)Driver API** ([#1520](https://github.com/adap/flower/" -"pull/1520), [#1525](https://github.com/adap/flower/pull/1525), [#1545]" -"(https://github.com/adap/flower/pull/1545), [#1546](https://github.com/adap/" -"flower/pull/1546), [#1550](https://github.com/adap/flower/pull/1550), [#1551]" -"(https://github.com/adap/flower/pull/1551), [#1567](https://github.com/adap/" -"flower/pull/1567))" +"**引入(试验性)Driver API** ([#1520](https://github.com/adap/flower/pull/1520)," +" [#1525](https://github.com/adap/flower/pull/1525), " +"[#1545](https://github.com/adap/flower/pull/1545), " +"[#1546](https://github.com/adap/flower/pull/1546), " +"[#1550](https://github.com/adap/flower/pull/1550), " +"[#1551](https://github.com/adap/flower/pull/1551), " +"[#1567](https://github.com/adap/flower/pull/1567))" -#: ../../source/ref-changelog.md:116 +#: ../../source/ref-changelog.md:182 msgid "" -"We have added a new `xgboost-quickstart` example alongside a new `xgboost-" -"comprehensive` example that goes more in-depth." +"We have added a new `xgboost-quickstart` example alongside a new " +"`xgboost-comprehensive` example that goes more in-depth." 
msgstr "" -#: ../../source/ref-changelog.md:118 +#: ../../source/ref-changelog.md:184 #, fuzzy msgid "" -"**Add Vertical FL example** ([#2598](https://github.com/adap/flower/" -"pull/2598))" -msgstr "" -"**新的 iOS CoreML 代码示例**([#1289](https://github.com/adap/flower/" -"pull/1289))" +"**Add Vertical FL example** " +"([#2598](https://github.com/adap/flower/pull/2598))" +msgstr "**新的 iOS CoreML 代码示例**([#1289](https://github.com/adap/flower/pull/1289))" -#: ../../source/ref-changelog.md:120 +#: ../../source/ref-changelog.md:186 msgid "" -"We had many questions about Vertical Federated Learning using Flower, so we " -"decided to add an simple example for it on the [Titanic dataset](https://www." -"kaggle.com/competitions/titanic/data) alongside a tutorial (in the README)." +"We had many questions about Vertical Federated Learning using Flower, so " +"we decided to add an simple example for it on the [Titanic " +"dataset](https://www.kaggle.com/competitions/titanic/data) alongside a " +"tutorial (in the README)." msgstr "" -#: ../../source/ref-changelog.md:122 +#: ../../source/ref-changelog.md:188 msgid "" -"**Support custom** `ClientManager` **in** `start_driver()` ([#2292](https://" -"github.com/adap/flower/pull/2292))" -msgstr "" -"**在***`start_driver()`中支持自定义***`ClientManager([#2292](https://github." -"com/adap/flower/pull/2292))" +"**Support custom** `ClientManager` **in** `start_driver()` " +"([#2292](https://github.com/adap/flower/pull/2292))" +msgstr "**在***`start_driver()`中支持自定义***`ClientManager([#2292](https://github.com/adap/flower/pull/2292))" -#: ../../source/ref-changelog.md:124 +#: ../../source/ref-changelog.md:190 msgid "" -"**Update REST API to support create and delete nodes** ([#2283](https://" -"github.com/adap/flower/pull/2283))" +"**Update REST API to support create and delete nodes** " +"([#2283](https://github.com/adap/flower/pull/2283))" msgstr "" -"**更新 REST API 以支持创建和删除节点** ([#2283](https://github.com/adap/" -"flower/pull/2283))" +"**更新 REST API 以支持创建和删除节点** " +"([#2283](https://github.com/adap/flower/pull/2283))" -#: ../../source/ref-changelog.md:126 +#: ../../source/ref-changelog.md:192 #, fuzzy msgid "" -"**Update the Android SDK** ([#2187](https://github.com/adap/flower/" -"pull/2187))" +"**Update the Android SDK** " +"([#2187](https://github.com/adap/flower/pull/2187))" msgstr "" -"**介绍Flower Android SDK** ([#2131](https://github.com/adap/flower/" -"pull/2131))" +"**介绍Flower Android SDK** " +"([#2131](https://github.com/adap/flower/pull/2131))" -#: ../../source/ref-changelog.md:128 +#: ../../source/ref-changelog.md:194 #, fuzzy msgid "Add gRPC request-response capability to the Android SDK." msgstr "为 C++ SDK 添加 gRPC 请求-响应功能。" -#: ../../source/ref-changelog.md:130 +#: ../../source/ref-changelog.md:196 #, fuzzy msgid "" -"**Update the C++ SDK** ([#2537](https://github.com/adap/flower/pull/2537), " -"[#2528](https://github.com/adap/flower/pull/2528), [#2523](https://github." 
-"com/adap/flower/pull/2523), [#2522](https://github.com/adap/flower/" -"pull/2522))" +"**Update the C++ SDK** " +"([#2537](https://github.com/adap/flower/pull/2537), " +"[#2528](https://github.com/adap/flower/pull/2528), " +"[#2523](https://github.com/adap/flower/pull/2523), " +"[#2522](https://github.com/adap/flower/pull/2522))" msgstr "" -"** 更新 C++ SDK** ([#2537](https://github/com/adap/flower/pull/2537), [#2528]" -"(https://github/com/adap/flower/pull/2528), [#2523](https://github.com/adap/" -"flower/pull/2523), [#2522](https://github.com/adap/flower/pull/2522))" +"** 更新 C++ SDK** ([#2537](https://github/com/adap/flower/pull/2537), " +"[#2528](https://github/com/adap/flower/pull/2528), " +"[#2523](https://github.com/adap/flower/pull/2523), " +"[#2522](https://github.com/adap/flower/pull/2522))" -#: ../../source/ref-changelog.md:132 +#: ../../source/ref-changelog.md:198 msgid "Add gRPC request-response capability to the C++ SDK." msgstr "为 C++ SDK 添加 gRPC 请求-响应功能。" -#: ../../source/ref-changelog.md:134 +#: ../../source/ref-changelog.md:200 #, fuzzy msgid "" -"**Make HTTPS the new default** ([#2591](https://github.com/adap/flower/" -"pull/2591), [#2636](https://github.com/adap/flower/pull/2636))" +"**Make HTTPS the new default** " +"([#2591](https://github.com/adap/flower/pull/2591), " +"[#2636](https://github.com/adap/flower/pull/2636))" msgstr "" -"Baselines文档([#2290](https://github.com/adap/flower/pull/2290), [#2400]" -"(https://github.com/adap/flower/pull/2400)" +"Baselines文档([#2290](https://github.com/adap/flower/pull/2290), " +"[#2400](https://github.com/adap/flower/pull/2400)" -#: ../../source/ref-changelog.md:136 +#: ../../source/ref-changelog.md:202 msgid "" "Flower is moving to HTTPS by default. The new `flower-server` requires " -"passing `--certificates`, but users can enable `--insecure` to use HTTP for " -"prototyping. The same applies to `flower-client`, which can either use user-" -"provided credentials or gRPC-bundled certificates to connect to an HTTPS-" -"enabled server or requires opt-out via passing `--insecure` to enable " -"insecure HTTP connections." +"passing `--certificates`, but users can enable `--insecure` to use HTTP " +"for prototyping. The same applies to `flower-client`, which can either " +"use user-provided credentials or gRPC-bundled certificates to connect to " +"an HTTPS-enabled server or requires opt-out via passing `--insecure` to " +"enable insecure HTTP connections." msgstr "" -#: ../../source/ref-changelog.md:138 +#: ../../source/ref-changelog.md:204 msgid "" -"For backward compatibility, `start_client()` and `start_numpy_client()` will " -"still start in insecure mode by default. In a future release, insecure " -"connections will require user opt-in by passing `insecure=True`." +"For backward compatibility, `start_client()` and `start_numpy_client()` " +"will still start in insecure mode by default. In a future release, " +"insecure connections will require user opt-in by passing `insecure=True`." msgstr "" -#: ../../source/ref-changelog.md:140 +#: ../../source/ref-changelog.md:206 msgid "" "**Unify client API** ([#2303](https://github.com/adap/flower/pull/2303), " -"[#2390](https://github.com/adap/flower/pull/2390), [#2493](https://github." 
-"com/adap/flower/pull/2493))" +"[#2390](https://github.com/adap/flower/pull/2390), " +"[#2493](https://github.com/adap/flower/pull/2493))" msgstr "" -"** 统一客户端应用程序接口** ([#2303](https://github.com/adap/flower/" -"pull/2303), [#2390](https://github.com/adap/flower/pull/2390), [#2493]" -"(https://github.com/adap/flower/pull/2493))" +"** 统一客户端应用程序接口** ([#2303](https://github.com/adap/flower/pull/2303), " +"[#2390](https://github.com/adap/flower/pull/2390), " +"[#2493](https://github.com/adap/flower/pull/2493))" -#: ../../source/ref-changelog.md:142 +#: ../../source/ref-changelog.md:208 #, fuzzy msgid "" -"Using the `client_fn`, Flower clients can interchangeably run as standalone " -"processes (i.e. via `start_client`) or in simulation (i.e. via " -"`start_simulation`) without requiring changes to how the client class is " -"defined and instantiated. The `to_client()` function is introduced to " +"Using the `client_fn`, Flower clients can interchangeably run as " +"standalone processes (i.e. via `start_client`) or in simulation (i.e. via" +" `start_simulation`) without requiring changes to how the client class is" +" defined and instantiated. The `to_client()` function is introduced to " "convert a `NumPyClient` to a `Client`." msgstr "" -"使用 `client_fn`,Flower 客户端可以作为独立进程(即通过 `start_client`)或在" -"模拟中(即通过 `start_simulation`)交替运行,而无需更改客户端类的定义和实例化" -"方式。调用 `start_numpy_client` 现已过时。" +"使用 `client_fn`,Flower 客户端可以作为独立进程(即通过 `start_client`)或在模拟中(即通过 " +"`start_simulation`)交替运行,而无需更改客户端类的定义和实例化方式。调用 `start_numpy_client` 现已过时。" -#: ../../source/ref-changelog.md:144 +#: ../../source/ref-changelog.md:210 msgid "" -"**Add new** `Bulyan` **strategy** ([#1817](https://github.com/adap/flower/" -"pull/1817), [#1891](https://github.com/adap/flower/pull/1891))" +"**Add new** `Bulyan` **strategy** " +"([#1817](https://github.com/adap/flower/pull/1817), " +"[#1891](https://github.com/adap/flower/pull/1891))" msgstr "" -"**添加新**\"Bulyan \"**策略**([#1817](https://github.com/adap/flower/" -"pull/1817), [#1891](https://github.com/adap/flower/pull/1891)" +"**添加新**\"Bulyan " +"\"**策略**([#1817](https://github.com/adap/flower/pull/1817), " +"[#1891](https://github.com/adap/flower/pull/1891)" -#: ../../source/ref-changelog.md:146 +#: ../../source/ref-changelog.md:212 msgid "" -"The new `Bulyan` strategy implements Bulyan by [El Mhamdi et al., 2018]" -"(https://arxiv.org/abs/1802.07927)" -msgstr "" -"新的 \"Bulyan\"策略通过[El Mhamdi 等人,2018](https://arxiv.org/" -"abs/1802.07927)实现" +"The new `Bulyan` strategy implements Bulyan by [El Mhamdi et al., " +"2018](https://arxiv.org/abs/1802.07927)" +msgstr "新的 \"Bulyan\"策略通过[El Mhamdi 等人,2018](https://arxiv.org/abs/1802.07927)实现" -#: ../../source/ref-changelog.md:148 +#: ../../source/ref-changelog.md:214 #, fuzzy msgid "" -"**Add new** `XGB Bagging` **strategy** ([#2611](https://github.com/adap/" -"flower/pull/2611))" -msgstr "" -"**添加新的`FedProx`策略** ([#1619](https://github.com/adap/flower/" -"pull/1619))" +"**Add new** `XGB Bagging` **strategy** " +"([#2611](https://github.com/adap/flower/pull/2611))" +msgstr "**添加新的`FedProx`策略** ([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:150 ../../source/ref-changelog.md:152 +#: ../../source/ref-changelog.md:216 ../../source/ref-changelog.md:218 #, fuzzy msgid "" -"**Introduce `WorkloadState`** ([#2564](https://github.com/adap/flower/" -"pull/2564), [#2632](https://github.com/adap/flower/pull/2632))" +"**Introduce `WorkloadState`** " +"([#2564](https://github.com/adap/flower/pull/2564), " 
+"[#2632](https://github.com/adap/flower/pull/2632))" msgstr "" -"**新的内置策略**([#828](https://github.com/adap/flower/pull/828) [#822]" -"(https://github.com/adap/flower/pull/822)" +"**新的内置策略**([#828](https://github.com/adap/flower/pull/828) " +"[#822](https://github.com/adap/flower/pull/822)" -#: ../../source/ref-changelog.md:156 +#: ../../source/ref-changelog.md:222 msgid "" -"FedProx ([#2210](https://github.com/adap/flower/pull/2210), [#2286](https://" -"github.com/adap/flower/pull/2286), [#2509](https://github.com/adap/flower/" -"pull/2509))" +"FedProx ([#2210](https://github.com/adap/flower/pull/2210), " +"[#2286](https://github.com/adap/flower/pull/2286), " +"[#2509](https://github.com/adap/flower/pull/2509))" msgstr "" -"FedProx ([#2210](https://github.com/adap/flower/pull/2210), [#2286](https://" -"github.com/adap/flower/pull/2286), [#2509](https://github.com/adap/flower/" -"pull/2509))" +"FedProx ([#2210](https://github.com/adap/flower/pull/2210), " +"[#2286](https://github.com/adap/flower/pull/2286), " +"[#2509](https://github.com/adap/flower/pull/2509))" -#: ../../source/ref-changelog.md:158 +#: ../../source/ref-changelog.md:224 msgid "" -"Baselines Docs ([#2290](https://github.com/adap/flower/pull/2290), [#2400]" -"(https://github.com/adap/flower/pull/2400))" +"Baselines Docs ([#2290](https://github.com/adap/flower/pull/2290), " +"[#2400](https://github.com/adap/flower/pull/2400))" msgstr "" -"Baselines文档([#2290](https://github.com/adap/flower/pull/2290), [#2400]" -"(https://github.com/adap/flower/pull/2400)" +"Baselines文档([#2290](https://github.com/adap/flower/pull/2290), " +"[#2400](https://github.com/adap/flower/pull/2400)" -#: ../../source/ref-changelog.md:160 +#: ../../source/ref-changelog.md:226 msgid "" -"FedMLB ([#2340](https://github.com/adap/flower/pull/2340), [#2507](https://" -"github.com/adap/flower/pull/2507))" +"FedMLB ([#2340](https://github.com/adap/flower/pull/2340), " +"[#2507](https://github.com/adap/flower/pull/2507))" msgstr "" -"FedMLB ([#2340](https://github.com/adap/flower/pull/2340), [#2507](https://" -"github.com/adap/flower/pull/2507))" +"FedMLB ([#2340](https://github.com/adap/flower/pull/2340), " +"[#2507](https://github.com/adap/flower/pull/2507))" -#: ../../source/ref-changelog.md:162 +#: ../../source/ref-changelog.md:228 msgid "" -"TAMUNA ([#2254](https://github.com/adap/flower/pull/2254), [#2508](https://" -"github.com/adap/flower/pull/2508))" +"TAMUNA ([#2254](https://github.com/adap/flower/pull/2254), " +"[#2508](https://github.com/adap/flower/pull/2508))" msgstr "" -"TAMUNA ([#2254](https://github.com/adap/flower/pull/2254), [#2508](https://" -"github.com/adap/flower/pull/2508))" +"TAMUNA ([#2254](https://github.com/adap/flower/pull/2254), " +"[#2508](https://github.com/adap/flower/pull/2508))" -#: ../../source/ref-changelog.md:164 +#: ../../source/ref-changelog.md:230 msgid "FedMeta [#2438](https://github.com/adap/flower/pull/2438)" msgstr "FedMeta [#2438](https://github.com/adap/flower/pull/2438)" -#: ../../source/ref-changelog.md:166 +#: ../../source/ref-changelog.md:232 msgid "FjORD [#2431](https://github.com/adap/flower/pull/2431)" msgstr "FjORD [#2431](https://github.com/adap/flower/pull/2431)" -#: ../../source/ref-changelog.md:168 +#: ../../source/ref-changelog.md:234 msgid "MOON [#2421](https://github.com/adap/flower/pull/2421)" msgstr "MOON [#2421](https://github.com/adap/flower/pull/2421)" -#: ../../source/ref-changelog.md:170 +#: ../../source/ref-changelog.md:236 msgid "DepthFL [#2295](https://github.com/adap/flower/pull/2295)" 
msgstr "DepthFL [#2295](https://github.com/adap/flower/pull/2295)" -#: ../../source/ref-changelog.md:172 +#: ../../source/ref-changelog.md:238 msgid "FedPer [#2266](https://github.com/adap/flower/pull/2266)" msgstr "FedPer [#2266](https://github.com/adap/flower/pull/2266)" -#: ../../source/ref-changelog.md:174 +#: ../../source/ref-changelog.md:240 msgid "FedWav2vec [#2551](https://github.com/adap/flower/pull/2551)" msgstr "FedWav2vec [#2551](https://github.com/adap/flower/pull/2551)" -#: ../../source/ref-changelog.md:176 +#: ../../source/ref-changelog.md:242 msgid "niid-Bench [#2428](https://github.com/adap/flower/pull/2428)" msgstr "niid-Bench [#2428](https://github.com/adap/flower/pull/2428)" -#: ../../source/ref-changelog.md:178 +#: ../../source/ref-changelog.md:244 msgid "" -"FedBN ([#2608](https://github.com/adap/flower/pull/2608), [#2615](https://" -"github.com/adap/flower/pull/2615))" +"FedBN ([#2608](https://github.com/adap/flower/pull/2608), " +"[#2615](https://github.com/adap/flower/pull/2615))" msgstr "" -"FedBN ([#2608](https://github.com/adap/flower/pull/2608), [#2615](https://" -"github.com/adap/flower/pull/2615))" +"FedBN ([#2608](https://github.com/adap/flower/pull/2608), " +"[#2615](https://github.com/adap/flower/pull/2615))" -#: ../../source/ref-changelog.md:180 +#: ../../source/ref-changelog.md:246 #, fuzzy msgid "" -"**General updates to Flower Examples** ([#2384](https://github.com/adap/" -"flower/pull/2384), [#2425](https://github.com/adap/flower/pull/2425), [#2526]" -"(https://github.com/adap/flower/pull/2526), [#2302](https://github.com/adap/" -"flower/pull/2302), [#2545](https://github.com/adap/flower/pull/2545))" +"**General updates to Flower Examples** " +"([#2384](https://github.com/adap/flower/pull/2384), " +"[#2425](https://github.com/adap/flower/pull/2425), " +"[#2526](https://github.com/adap/flower/pull/2526), " +"[#2302](https://github.com/adap/flower/pull/2302), " +"[#2545](https://github.com/adap/flower/pull/2545))" msgstr "" -"** 更新 C++ SDK** ([#2537](https://github/com/adap/flower/pull/2537), [#2528]" -"(https://github/com/adap/flower/pull/2528), [#2523](https://github.com/adap/" -"flower/pull/2523), [#2522](https://github.com/adap/flower/pull/2522))" +"** 更新 C++ SDK** ([#2537](https://github/com/adap/flower/pull/2537), " +"[#2528](https://github/com/adap/flower/pull/2528), " +"[#2523](https://github.com/adap/flower/pull/2523), " +"[#2522](https://github.com/adap/flower/pull/2522))" -#: ../../source/ref-changelog.md:182 +#: ../../source/ref-changelog.md:248 #, fuzzy msgid "" -"**General updates to Flower Baselines** ([#2301](https://github.com/adap/" -"flower/pull/2301), [#2305](https://github.com/adap/flower/pull/2305), [#2307]" -"(https://github.com/adap/flower/pull/2307), [#2327](https://github.com/adap/" -"flower/pull/2327), [#2435](https://github.com/adap/flower/pull/2435), [#2462]" -"(https://github.com/adap/flower/pull/2462), [#2463](https://github.com/adap/" -"flower/pull/2463), [#2461](https://github.com/adap/flower/pull/2461), [#2469]" -"(https://github.com/adap/flower/pull/2469), [#2466](https://github.com/adap/" -"flower/pull/2466), [#2471](https://github.com/adap/flower/pull/2471), [#2472]" -"(https://github.com/adap/flower/pull/2472), [#2470](https://github.com/adap/" -"flower/pull/2470))" -msgstr "" -"**普通改进**([#2309](https://github.com/adap/flower/pull/2309), [#2310]" -"(https://github.com/adap/flower/pull/2310), [2313](https://github.com/adap/" -"flower/pull/2313), [#2316](https://github.com/adap/flower/pull/2316), [2317]" 
-"(https://github.com/adap/flower/pull/2317),[#2349](https://github.com/adap/" -"flower/pull/2349), [#2360](https://github.com/adap/flower/pull/2360), [#2402]" -"(https://github.com/adap/flower/pull/2402), [#2446](https://github.com/adap/" -"flower/pull/2446) [#2561](https://github.com/adap/flower/pull/2561))" +"**General updates to Flower Baselines** " +"([#2301](https://github.com/adap/flower/pull/2301), " +"[#2305](https://github.com/adap/flower/pull/2305), " +"[#2307](https://github.com/adap/flower/pull/2307), " +"[#2327](https://github.com/adap/flower/pull/2327), " +"[#2435](https://github.com/adap/flower/pull/2435), " +"[#2462](https://github.com/adap/flower/pull/2462), " +"[#2463](https://github.com/adap/flower/pull/2463), " +"[#2461](https://github.com/adap/flower/pull/2461), " +"[#2469](https://github.com/adap/flower/pull/2469), " +"[#2466](https://github.com/adap/flower/pull/2466), " +"[#2471](https://github.com/adap/flower/pull/2471), " +"[#2472](https://github.com/adap/flower/pull/2472), " +"[#2470](https://github.com/adap/flower/pull/2470))" +msgstr "" +"**普通改进**([#2309](https://github.com/adap/flower/pull/2309), " +"[#2310](https://github.com/adap/flower/pull/2310), " +"[2313](https://github.com/adap/flower/pull/2313), " +"[#2316](https://github.com/adap/flower/pull/2316), " +"[2317](https://github.com/adap/flower/pull/2317),[#2349](https://github.com/adap/flower/pull/2349)," +" [#2360](https://github.com/adap/flower/pull/2360), " +"[#2402](https://github.com/adap/flower/pull/2402), " +"[#2446](https://github.com/adap/flower/pull/2446) " +"[#2561](https://github.com/adap/flower/pull/2561))" -#: ../../source/ref-changelog.md:184 +#: ../../source/ref-changelog.md:250 #, fuzzy msgid "" -"**General updates to the simulation engine** ([#2331](https://github.com/" -"adap/flower/pull/2331), [#2447](https://github.com/adap/flower/pull/2447), " -"[#2448](https://github.com/adap/flower/pull/2448), [#2294](https://github." -"com/adap/flower/pull/2294))" +"**General updates to the simulation engine** " +"([#2331](https://github.com/adap/flower/pull/2331), " +"[#2447](https://github.com/adap/flower/pull/2447), " +"[#2448](https://github.com/adap/flower/pull/2448), " +"[#2294](https://github.com/adap/flower/pull/2294))" msgstr "" "**模拟引擎的普通更新** ([#2331](https://github.com/adap/flower/pull/2331), " -"[#2447](https://github.com/adap/flower/pull/2447), [#2448](https://github." 
-"com/adap/flower/pull/2448))" +"[#2447](https://github.com/adap/flower/pull/2447), " +"[#2448](https://github.com/adap/flower/pull/2448))" -#: ../../source/ref-changelog.md:186 +#: ../../source/ref-changelog.md:252 #, fuzzy msgid "" -"**General updates to Flower SDKs** ([#2288](https://github.com/adap/flower/" -"pull/2288), [#2429](https://github.com/adap/flower/pull/2429), [#2555]" -"(https://github.com/adap/flower/pull/2555), [#2543](https://github.com/adap/" -"flower/pull/2543), [#2544](https://github.com/adap/flower/pull/2544), [#2597]" -"(https://github.com/adap/flower/pull/2597), [#2623](https://github.com/adap/" -"flower/pull/2623))" +"**General updates to Flower SDKs** " +"([#2288](https://github.com/adap/flower/pull/2288), " +"[#2429](https://github.com/adap/flower/pull/2429), " +"[#2555](https://github.com/adap/flower/pull/2555), " +"[#2543](https://github.com/adap/flower/pull/2543), " +"[#2544](https://github.com/adap/flower/pull/2544), " +"[#2597](https://github.com/adap/flower/pull/2597), " +"[#2623](https://github.com/adap/flower/pull/2623))" msgstr "" -"**改进教程** ([#1468](https://github.com/adap/flower/pull/1468), [#1470]" -"(https://github.com/adap/flower/pull/1470), [#1472](https://github.com/adap/" -"flower/pull/1472), [#1473](https://github.com/adap/flower/pull/1473), [#1474]" -"(https://github.com/adap/flower/pull/1474), [#1475](https://github.com/adap/" -"flower/pull/1475)))" +"**改进教程** ([#1468](https://github.com/adap/flower/pull/1468), " +"[#1470](https://github.com/adap/flower/pull/1470), " +"[#1472](https://github.com/adap/flower/pull/1472), " +"[#1473](https://github.com/adap/flower/pull/1473), " +"[#1474](https://github.com/adap/flower/pull/1474), " +"[#1475](https://github.com/adap/flower/pull/1475)))" -#: ../../source/ref-changelog.md:188 +#: ../../source/ref-changelog.md:254 msgid "" -"**General improvements** ([#2309](https://github.com/adap/flower/pull/2309), " -"[#2310](https://github.com/adap/flower/pull/2310), [#2313](https://github." 
-"com/adap/flower/pull/2313), [#2316](https://github.com/adap/flower/" -"pull/2316), [#2317](https://github.com/adap/flower/pull/2317), [#2349]" -"(https://github.com/adap/flower/pull/2349), [#2360](https://github.com/adap/" -"flower/pull/2360), [#2402](https://github.com/adap/flower/pull/2402), [#2446]" -"(https://github.com/adap/flower/pull/2446), [#2561](https://github.com/adap/" -"flower/pull/2561), [#2273](https://github.com/adap/flower/pull/2273), [#2267]" -"(https://github.com/adap/flower/pull/2267), [#2274](https://github.com/adap/" -"flower/pull/2274), [#2275](https://github.com/adap/flower/pull/2275), [#2432]" -"(https://github.com/adap/flower/pull/2432), [#2251](https://github.com/adap/" -"flower/pull/2251), [#2321](https://github.com/adap/flower/pull/2321), [#1936]" -"(https://github.com/adap/flower/pull/1936), [#2408](https://github.com/adap/" -"flower/pull/2408), [#2413](https://github.com/adap/flower/pull/2413), [#2401]" -"(https://github.com/adap/flower/pull/2401), [#2531](https://github.com/adap/" -"flower/pull/2531), [#2534](https://github.com/adap/flower/pull/2534), [#2535]" -"(https://github.com/adap/flower/pull/2535), [#2521](https://github.com/adap/" -"flower/pull/2521), [#2553](https://github.com/adap/flower/pull/2553), [#2596]" -"(https://github.com/adap/flower/pull/2596))" -msgstr "" - -#: ../../source/ref-changelog.md:190 ../../source/ref-changelog.md:280 -#: ../../source/ref-changelog.md:344 ../../source/ref-changelog.md:398 -#: ../../source/ref-changelog.md:465 -msgid "" -"Flower received many improvements under the hood, too many to list here." +"**General improvements** " +"([#2309](https://github.com/adap/flower/pull/2309), " +"[#2310](https://github.com/adap/flower/pull/2310), " +"[#2313](https://github.com/adap/flower/pull/2313), " +"[#2316](https://github.com/adap/flower/pull/2316), " +"[#2317](https://github.com/adap/flower/pull/2317), " +"[#2349](https://github.com/adap/flower/pull/2349), " +"[#2360](https://github.com/adap/flower/pull/2360), " +"[#2402](https://github.com/adap/flower/pull/2402), " +"[#2446](https://github.com/adap/flower/pull/2446), " +"[#2561](https://github.com/adap/flower/pull/2561), " +"[#2273](https://github.com/adap/flower/pull/2273), " +"[#2267](https://github.com/adap/flower/pull/2267), " +"[#2274](https://github.com/adap/flower/pull/2274), " +"[#2275](https://github.com/adap/flower/pull/2275), " +"[#2432](https://github.com/adap/flower/pull/2432), " +"[#2251](https://github.com/adap/flower/pull/2251), " +"[#2321](https://github.com/adap/flower/pull/2321), " +"[#1936](https://github.com/adap/flower/pull/1936), " +"[#2408](https://github.com/adap/flower/pull/2408), " +"[#2413](https://github.com/adap/flower/pull/2413), " +"[#2401](https://github.com/adap/flower/pull/2401), " +"[#2531](https://github.com/adap/flower/pull/2531), " +"[#2534](https://github.com/adap/flower/pull/2534), " +"[#2535](https://github.com/adap/flower/pull/2535), " +"[#2521](https://github.com/adap/flower/pull/2521), " +"[#2553](https://github.com/adap/flower/pull/2553), " +"[#2596](https://github.com/adap/flower/pull/2596))" +msgstr "" + +#: ../../source/ref-changelog.md:256 ../../source/ref-changelog.md:346 +#: ../../source/ref-changelog.md:410 ../../source/ref-changelog.md:464 +#: ../../source/ref-changelog.md:531 +msgid "Flower received many improvements under the hood, too many to list here." 
msgstr "Flower 进行了许多改进,这里就不一一列举了。" -#: ../../source/ref-changelog.md:194 +#: ../../source/ref-changelog.md:260 msgid "" -"**Remove support for Python 3.7** ([#2280](https://github.com/adap/flower/" -"pull/2280), [#2299](https://github.com/adap/flower/pull/2299), [#2304]" -"(https://github.com/adap/flower/pull/2304), [#2306](https://github.com/adap/" -"flower/pull/2306), [#2355](https://github.com/adap/flower/pull/2355), [#2356]" -"(https://github.com/adap/flower/pull/2356))" -msgstr "" -"**移除对 Python 3.7 的支持** ([#2280](https://github.com/adap/flower/" -"pull/2280), [#2299](https://github.com/adap/flower/pull/2299), [#2304]" -"(https://github.com/adap/flower/pull/2304), [#2306](https://github.com/adap/" -"flower/pull/2306), [#2355](https://github.com/adap/flower/pull/2355), [#2356]" -"(https://github.com/adap/flower/pull/2356))" +"**Remove support for Python 3.7** " +"([#2280](https://github.com/adap/flower/pull/2280), " +"[#2299](https://github.com/adap/flower/pull/2299), " +"[#2304](https://github.com/adap/flower/pull/2304), " +"[#2306](https://github.com/adap/flower/pull/2306), " +"[#2355](https://github.com/adap/flower/pull/2355), " +"[#2356](https://github.com/adap/flower/pull/2356))" +msgstr "" +"**移除对 Python 3.7 的支持** " +"([#2280](https://github.com/adap/flower/pull/2280), " +"[#2299](https://github.com/adap/flower/pull/2299), " +"[#2304](https://github.com/adap/flower/pull/2304), " +"[#2306](https://github.com/adap/flower/pull/2306), " +"[#2355](https://github.com/adap/flower/pull/2355), " +"[#2356](https://github.com/adap/flower/pull/2356))" -#: ../../source/ref-changelog.md:196 +#: ../../source/ref-changelog.md:262 msgid "" -"Python 3.7 support was deprecated in Flower 1.5, and this release removes " -"support. Flower now requires Python 3.8." -msgstr "" -"在 Flower 1.5 中,Python 3.7 支持已被弃用,本版本将删除该支持。Flower 现在需" -"要 Python 3.8。" +"Python 3.7 support was deprecated in Flower 1.5, and this release removes" +" support. Flower now requires Python 3.8." +msgstr "在 Flower 1.5 中,Python 3.7 支持已被弃用,本版本将删除该支持。Flower 现在需要 Python 3.8。" -#: ../../source/ref-changelog.md:198 +#: ../../source/ref-changelog.md:264 msgid "" -"**Remove experimental argument** `rest` **from** `start_client` ([#2324]" -"(https://github.com/adap/flower/pull/2324))" +"**Remove experimental argument** `rest` **from** `start_client` " +"([#2324](https://github.com/adap/flower/pull/2324))" msgstr "" -"**从** `start_client` 中移除** `rest` **实验参数 ([#2324](https://github.com/" -"adap/flower/pull/2324))" +"**从** `start_client` 中移除** `rest` **实验参数 " +"([#2324](https://github.com/adap/flower/pull/2324))" -#: ../../source/ref-changelog.md:200 +#: ../../source/ref-changelog.md:266 msgid "" -"The (still experimental) argument `rest` was removed from `start_client` and " -"`start_numpy_client`. Use `transport=\"rest\"` to opt into the experimental " -"REST API instead." +"The (still experimental) argument `rest` was removed from `start_client` " +"and `start_numpy_client`. Use `transport=\"rest\"` to opt into the " +"experimental REST API instead." 
msgstr "" -"删除了 `start_client` 和 `start_numpy_client` 中的参数 `rest`(仍属试验性" -"质)。请使用 `transport=\"rest\"` 来选择使用试验性 REST API。" +"删除了 `start_client` 和 `start_numpy_client` 中的参数 `rest`(仍属试验性质)。请使用 " +"`transport=\"rest\"` 来选择使用试验性 REST API。" -#: ../../source/ref-changelog.md:202 +#: ../../source/ref-changelog.md:268 msgid "v1.5.0 (2023-08-31)" msgstr "v1.5.0 (2023-08-31)" -#: ../../source/ref-changelog.md:208 +#: ../../source/ref-changelog.md:274 msgid "" "`Adam Narozniak`, `Anass Anhari`, `Charles Beauville`, `Dana-Farber`, " "`Daniel J. Beutel`, `Daniel Nata Nugraha`, `Edoardo Gabrielli`, `Gustavo " @@ -16053,766 +17451,831 @@ msgstr "" "Topal`, `achiverram28`, `danielnugraha`, `eunchung`, `ruthgal` " -#: ../../source/ref-changelog.md:212 +#: ../../source/ref-changelog.md:278 msgid "" -"**Introduce new simulation engine** ([#1969](https://github.com/adap/flower/" -"pull/1969), [#2221](https://github.com/adap/flower/pull/2221), [#2248]" -"(https://github.com/adap/flower/pull/2248))" +"**Introduce new simulation engine** " +"([#1969](https://github.com/adap/flower/pull/1969), " +"[#2221](https://github.com/adap/flower/pull/2221), " +"[#2248](https://github.com/adap/flower/pull/2248))" msgstr "" "**引入新的模拟引擎** ([#1969](https://github.com/adap/flower/pull/1969), " -"[#2221](https://github.com/adap/flower/pull/2221), [#2248](https://github." -"com/adap/flower/pull/2248))" +"[#2221](https://github.com/adap/flower/pull/2221), " +"[#2248](https://github.com/adap/flower/pull/2248))" -#: ../../source/ref-changelog.md:214 +#: ../../source/ref-changelog.md:280 msgid "" "The new simulation engine has been rewritten from the ground up, yet it " -"remains fully backwards compatible. It offers much improved stability and " -"memory handling, especially when working with GPUs. Simulations " -"transparently adapt to different settings to scale simulation in CPU-only, " -"CPU+GPU, multi-GPU, or multi-node multi-GPU environments." +"remains fully backwards compatible. It offers much improved stability and" +" memory handling, especially when working with GPUs. Simulations " +"transparently adapt to different settings to scale simulation in CPU-" +"only, CPU+GPU, multi-GPU, or multi-node multi-GPU environments." msgstr "" -"新的模拟引擎从头开始重新编写,但仍完全向后兼容。它的稳定性和内存处理能力大大" -"提高,尤其是在使用 GPU 时。仿真可透明地适应不同的设置,以在仅 CPU、CPU+GPU、" -"多 GPU 或多节点多 GPU 环境中扩展模拟。" +"新的模拟引擎从头开始重新编写,但仍完全向后兼容。它的稳定性和内存处理能力大大提高,尤其是在使用 GPU 时。仿真可透明地适应不同的设置,以在仅 " +"CPU、CPU+GPU、多 GPU 或多节点多 GPU 环境中扩展模拟。" -#: ../../source/ref-changelog.md:216 +#: ../../source/ref-changelog.md:282 msgid "" -"Comprehensive documentation includes a new [how-to run simulations](https://" -"flower.ai/docs/framework/how-to-run-simulations.html) guide, new [simulation-" +"Comprehensive documentation includes a new [how-to run " +"simulations](https://flower.ai/docs/framework/how-to-run-" +"simulations.html) guide, new [simulation-" +"pytorch](https://flower.ai/docs/examples/simulation-pytorch.html) and " +"[simulation-tensorflow](https://flower.ai/docs/examples/simulation-" +"tensorflow.html) notebooks, and a new [YouTube tutorial " +"series](https://www.youtube.com/watch?v=cRebUIGB5RU&list=PLNG4feLHqCWlnj8a_E1A_n5zr2-8pafTB)." +msgstr "" +"综合文档包括新的[how-to run simulations](https://flower.ai/docs/framework/how-to-" +"run-simulations.html) guide, new [simulation-" "pytorch](https://flower.ai/docs/examples/simulation-pytorch.html) and " "[simulation-tensorflow](https://flower.ai/docs/examples/simulation-" -"tensorflow.html) notebooks, and a new [YouTube tutorial series](https://www." 
-"youtube.com/watch?v=cRebUIGB5RU&list=PLNG4feLHqCWlnj8a_E1A_n5zr2-8pafTB)." -msgstr "" -"综合文档包括新的[how-to run simulations](https://flower.ai/docs/framework/" -"how-to-run-simulations.html) guide, new [simulation-pytorch](https://flower." -"ai/docs/examples/simulation-pytorch.html) and [simulation-tensorflow]" -"(https://flower.ai/docs/examples/simulation-tensorflow.html) notebooks, and " -"a new [YouTube tutorial series](https://www.youtube.com/watch?" -"v=cRebUIGB5RU&list=PLNG4feLHqCWlnj8a_E1A_n5zr2-8pafTB)。" - -#: ../../source/ref-changelog.md:218 -msgid "" -"**Restructure Flower Docs** ([#1824](https://github.com/adap/flower/" -"pull/1824), [#1865](https://github.com/adap/flower/pull/1865), [#1884]" -"(https://github.com/adap/flower/pull/1884), [#1887](https://github.com/adap/" -"flower/pull/1887), [#1919](https://github.com/adap/flower/pull/1919), [#1922]" -"(https://github.com/adap/flower/pull/1922), [#1920](https://github.com/adap/" -"flower/pull/1920), [#1923](https://github.com/adap/flower/pull/1923), [#1924]" -"(https://github.com/adap/flower/pull/1924), [#1962](https://github.com/adap/" -"flower/pull/1962), [#2006](https://github.com/adap/flower/pull/2006), [#2133]" -"(https://github.com/adap/flower/pull/2133), [#2203](https://github.com/adap/" -"flower/pull/2203), [#2215](https://github.com/adap/flower/pull/2215), [#2122]" -"(https://github.com/adap/flower/pull/2122), [#2223](https://github.com/adap/" -"flower/pull/2223), [#2219](https://github.com/adap/flower/pull/2219), [#2232]" -"(https://github.com/adap/flower/pull/2232), [#2233](https://github.com/adap/" -"flower/pull/2233), [#2234](https://github.com/adap/flower/pull/2234), [#2235]" -"(https://github.com/adap/flower/pull/2235), [#2237](https://github.com/adap/" -"flower/pull/2237), [#2238](https://github.com/adap/flower/pull/2238), [#2242]" -"(https://github.com/adap/flower/pull/2242), [#2231](https://github.com/adap/" -"flower/pull/2231), [#2243](https://github.com/adap/flower/pull/2243), [#2227]" -"(https://github.com/adap/flower/pull/2227))" +"tensorflow.html) notebooks, and a new [YouTube tutorial " +"series](https://www.youtube.com/watch?v=cRebUIGB5RU&list=PLNG4feLHqCWlnj8a_E1A_n5zr2-8pafTB)。" + +#: ../../source/ref-changelog.md:284 +msgid "" +"**Restructure Flower Docs** " +"([#1824](https://github.com/adap/flower/pull/1824), " +"[#1865](https://github.com/adap/flower/pull/1865), " +"[#1884](https://github.com/adap/flower/pull/1884), " +"[#1887](https://github.com/adap/flower/pull/1887), " +"[#1919](https://github.com/adap/flower/pull/1919), " +"[#1922](https://github.com/adap/flower/pull/1922), " +"[#1920](https://github.com/adap/flower/pull/1920), " +"[#1923](https://github.com/adap/flower/pull/1923), " +"[#1924](https://github.com/adap/flower/pull/1924), " +"[#1962](https://github.com/adap/flower/pull/1962), " +"[#2006](https://github.com/adap/flower/pull/2006), " +"[#2133](https://github.com/adap/flower/pull/2133), " +"[#2203](https://github.com/adap/flower/pull/2203), " +"[#2215](https://github.com/adap/flower/pull/2215), " +"[#2122](https://github.com/adap/flower/pull/2122), " +"[#2223](https://github.com/adap/flower/pull/2223), " +"[#2219](https://github.com/adap/flower/pull/2219), " +"[#2232](https://github.com/adap/flower/pull/2232), " +"[#2233](https://github.com/adap/flower/pull/2233), " +"[#2234](https://github.com/adap/flower/pull/2234), " +"[#2235](https://github.com/adap/flower/pull/2235), " +"[#2237](https://github.com/adap/flower/pull/2237), " +"[#2238](https://github.com/adap/flower/pull/2238), " 
+"[#2242](https://github.com/adap/flower/pull/2242), " +"[#2231](https://github.com/adap/flower/pull/2231), " +"[#2243](https://github.com/adap/flower/pull/2243), " +"[#2227](https://github.com/adap/flower/pull/2227))" msgstr "" "**重构 Flower 文档** ([#1824](https://github.com/adap/flower/pull/1824), " -"[#1865](https://github.com/adap/flower/pull/1865), [#1884](https://github." -"com/adap/flower/pull/1884), [#1887](https://github.com/adap/flower/" -"pull/1887), [#1919](https://github.com/adap/flower/pull/1919), [#1922]" -"(https://github.com/adap/flower/pull/1922), [#1920](https://github.com/adap/" -"flower/pull/1920), [#1923](https://github.com/adap/flower/pull/1923), [#1924]" -"(https://github.com/adap/flower/pull/1924), [#1962](https://github.com/adap/" -"flower/pull/1962), [#2006](https://github.com/adap/flower/pull/2006), [#2133]" -"(https://github.com/adap/flower/pull/2133), [#2203](https://github.com/adap/" -"flower/pull/2203), [#2215](https://github.com/adap/flower/pull/2215), [#2122]" -"(https://github.com/adap/flower/pull/2122), [#2223](https://github.com/adap/" -"flower/pull/2223), [#2219](https://github.com/adap/flower/pull/2219), [#2232]" -"(https://github.com/adap/flower/pull/2232), [#2233](https://github.com/adap/" -"flower/pull/2233), [#2234](https://github.com/adap/flower/pull/2234), [#2235]" -"(https://github.com/adap/flower/pull/2235), [#2237](https://github.com/adap/" -"flower/pull/2237), [#2238](https://github.com/adap/flower/pull/2238), [#2242]" -"(https://github.com/adap/flower/pull/2242), [#2231](https://github.com/adap/" -"flower/pull/2231), [#2243](https://github.com/adap/flower/pull/2243), [#2227]" -"(https://github.com/adap/flower/pull/2227))" - -#: ../../source/ref-changelog.md:220 -#, fuzzy -msgid "" -"Much effort went into a completely restructured Flower docs experience. The " -"documentation on [flower.ai/docs](https://flower.ai/docs) is now divided " -"into Flower Framework, Flower Baselines, Flower Android SDK, Flower iOS SDK, " -"and code example projects." 
-msgstr "" -"Flower 文档体验的全面重构耗费了大量精力。现在,[flower.ai/docs](flower.ai/" -"docs)上的文档分为 Flower Framework、Flower Baselines、Flower Android SDK、" -"Flower iOS SDK 和代码示例项目。" +"[#1865](https://github.com/adap/flower/pull/1865), " +"[#1884](https://github.com/adap/flower/pull/1884), " +"[#1887](https://github.com/adap/flower/pull/1887), " +"[#1919](https://github.com/adap/flower/pull/1919), " +"[#1922](https://github.com/adap/flower/pull/1922), " +"[#1920](https://github.com/adap/flower/pull/1920), " +"[#1923](https://github.com/adap/flower/pull/1923), " +"[#1924](https://github.com/adap/flower/pull/1924), " +"[#1962](https://github.com/adap/flower/pull/1962), " +"[#2006](https://github.com/adap/flower/pull/2006), " +"[#2133](https://github.com/adap/flower/pull/2133), " +"[#2203](https://github.com/adap/flower/pull/2203), " +"[#2215](https://github.com/adap/flower/pull/2215), " +"[#2122](https://github.com/adap/flower/pull/2122), " +"[#2223](https://github.com/adap/flower/pull/2223), " +"[#2219](https://github.com/adap/flower/pull/2219), " +"[#2232](https://github.com/adap/flower/pull/2232), " +"[#2233](https://github.com/adap/flower/pull/2233), " +"[#2234](https://github.com/adap/flower/pull/2234), " +"[#2235](https://github.com/adap/flower/pull/2235), " +"[#2237](https://github.com/adap/flower/pull/2237), " +"[#2238](https://github.com/adap/flower/pull/2238), " +"[#2242](https://github.com/adap/flower/pull/2242), " +"[#2231](https://github.com/adap/flower/pull/2231), " +"[#2243](https://github.com/adap/flower/pull/2243), " +"[#2227](https://github.com/adap/flower/pull/2227))" -#: ../../source/ref-changelog.md:222 +#: ../../source/ref-changelog.md:286 +#, fuzzy msgid "" -"**Introduce Flower Swift SDK** ([#1858](https://github.com/adap/flower/" -"pull/1858), [#1897](https://github.com/adap/flower/pull/1897))" +"Much effort went into a completely restructured Flower docs experience. " +"The documentation on [flower.ai/docs](https://flower.ai/docs) is now " +"divided into Flower Framework, Flower Baselines, Flower Android SDK, " +"Flower iOS SDK, and code example projects." msgstr "" -"**介绍 Flower Swift SDK** ([#1858](https://github.com/adap/flower/" -"pull/1858), [#1897](https://github.com/adap/flower/pull/1897))" +"Flower 文档体验的全面重构耗费了大量精力。现在,[flower.ai/docs](flower.ai/docs)上的文档分为 Flower " +"Framework、Flower Baselines、Flower Android SDK、Flower iOS SDK 和代码示例项目。" -#: ../../source/ref-changelog.md:224 +#: ../../source/ref-changelog.md:288 msgid "" -"This is the first preview release of the Flower Swift SDK. Flower support on " -"iOS is improving, and alongside the Swift SDK and code example, there is now " -"also an iOS quickstart tutorial." +"**Introduce Flower Swift SDK** " +"([#1858](https://github.com/adap/flower/pull/1858), " +"[#1897](https://github.com/adap/flower/pull/1897))" msgstr "" -"这是 Flower Swift SDK 的首个预览版。Flower 对 iOS 的支持正在不断改进,除了 " -"Swift SDK 和代码示例外,现在还有 iOS 快速入门教程。" +"**介绍 Flower Swift SDK** " +"([#1858](https://github.com/adap/flower/pull/1858), " +"[#1897](https://github.com/adap/flower/pull/1897))" -#: ../../source/ref-changelog.md:226 +#: ../../source/ref-changelog.md:290 msgid "" -"**Introduce Flower Android SDK** ([#2131](https://github.com/adap/flower/" -"pull/2131))" +"This is the first preview release of the Flower Swift SDK. Flower support" +" on iOS is improving, and alongside the Swift SDK and code example, there" +" is now also an iOS quickstart tutorial." 
msgstr "" -"**介绍Flower Android SDK** ([#2131](https://github.com/adap/flower/" -"pull/2131))" +"这是 Flower Swift SDK 的首个预览版。Flower 对 iOS 的支持正在不断改进,除了 Swift SDK " +"和代码示例外,现在还有 iOS 快速入门教程。" -#: ../../source/ref-changelog.md:228 +#: ../../source/ref-changelog.md:292 msgid "" -"This is the first preview release of the Flower Kotlin SDK. Flower support " -"on Android is improving, and alongside the Kotlin SDK and code example, " -"there is now also an Android quickstart tutorial." +"**Introduce Flower Android SDK** " +"([#2131](https://github.com/adap/flower/pull/2131))" msgstr "" -"这是 Flower Kotlin SDK 的首个预览版。Flower 对 Android 的支持正在不断改进,除" -"了 Kotlin SDK 和代码示例,现在还有 Android 快速入门教程。" +"**介绍Flower Android SDK** " +"([#2131](https://github.com/adap/flower/pull/2131))" -#: ../../source/ref-changelog.md:230 +#: ../../source/ref-changelog.md:294 +msgid "" +"This is the first preview release of the Flower Kotlin SDK. Flower " +"support on Android is improving, and alongside the Kotlin SDK and code " +"example, there is now also an Android quickstart tutorial." +msgstr "" +"这是 Flower Kotlin SDK 的首个预览版。Flower 对 Android 的支持正在不断改进,除了 Kotlin SDK " +"和代码示例,现在还有 Android 快速入门教程。" + +#: ../../source/ref-changelog.md:296 msgid "" -"**Introduce new end-to-end testing infrastructure** ([#1842](https://github." -"com/adap/flower/pull/1842), [#2071](https://github.com/adap/flower/" -"pull/2071), [#2072](https://github.com/adap/flower/pull/2072), [#2068]" -"(https://github.com/adap/flower/pull/2068), [#2067](https://github.com/adap/" -"flower/pull/2067), [#2069](https://github.com/adap/flower/pull/2069), [#2073]" -"(https://github.com/adap/flower/pull/2073), [#2070](https://github.com/adap/" -"flower/pull/2070), [#2074](https://github.com/adap/flower/pull/2074), [#2082]" -"(https://github.com/adap/flower/pull/2082), [#2084](https://github.com/adap/" -"flower/pull/2084), [#2093](https://github.com/adap/flower/pull/2093), [#2109]" -"(https://github.com/adap/flower/pull/2109), [#2095](https://github.com/adap/" -"flower/pull/2095), [#2140](https://github.com/adap/flower/pull/2140), [#2137]" -"(https://github.com/adap/flower/pull/2137), [#2165](https://github.com/adap/" -"flower/pull/2165))" +"**Introduce new end-to-end testing infrastructure** " +"([#1842](https://github.com/adap/flower/pull/1842), " +"[#2071](https://github.com/adap/flower/pull/2071), " +"[#2072](https://github.com/adap/flower/pull/2072), " +"[#2068](https://github.com/adap/flower/pull/2068), " +"[#2067](https://github.com/adap/flower/pull/2067), " +"[#2069](https://github.com/adap/flower/pull/2069), " +"[#2073](https://github.com/adap/flower/pull/2073), " +"[#2070](https://github.com/adap/flower/pull/2070), " +"[#2074](https://github.com/adap/flower/pull/2074), " +"[#2082](https://github.com/adap/flower/pull/2082), " +"[#2084](https://github.com/adap/flower/pull/2084), " +"[#2093](https://github.com/adap/flower/pull/2093), " +"[#2109](https://github.com/adap/flower/pull/2109), " +"[#2095](https://github.com/adap/flower/pull/2095), " +"[#2140](https://github.com/adap/flower/pull/2140), " +"[#2137](https://github.com/adap/flower/pull/2137), " +"[#2165](https://github.com/adap/flower/pull/2165))" msgstr "" "*介绍新的端到端测试** ([#1842](https://github.com/adap/flower/pull/1842), " -"[#2071](https://github.com/adap/flower/pull/2071), [#2072](https://github." 
-"com/adap/flower/pull/2072), [#2068](https://github.com/adap/flower/" -"pull/2068), [#2067](https://github.com/adap/flower/pull/2067), [#2069]" -"(https://github.com/adap/flower/pull/2069), [#2073](https://github.com/adap/" -"flower/pull/2073), [#2070](https://github.com/adap/flower/pull/2070), [#2074]" -"(https://github.com/adap/flower/pull/2074), [#2082](https://github.com/adap/" -"flower/pull/2082), [#2084](https://github.com/adap/flower/pull/2084), [#2093]" -"(https://github.com/adap/flower/pull/2093), [#2109](https://github.com/adap/" -"flower/pull/2109), [#2095](https://github.com/adap/flower/pull/2095), [#2140]" -"(https://github.com/adap/flower/pull/2140), [#2137](https://github.com/adap/" -"flower/pull/2137), [#2165](https://github.com/adap/flower/pull/2165))" +"[#2071](https://github.com/adap/flower/pull/2071), " +"[#2072](https://github.com/adap/flower/pull/2072), " +"[#2068](https://github.com/adap/flower/pull/2068), " +"[#2067](https://github.com/adap/flower/pull/2067), " +"[#2069](https://github.com/adap/flower/pull/2069), " +"[#2073](https://github.com/adap/flower/pull/2073), " +"[#2070](https://github.com/adap/flower/pull/2070), " +"[#2074](https://github.com/adap/flower/pull/2074), " +"[#2082](https://github.com/adap/flower/pull/2082), " +"[#2084](https://github.com/adap/flower/pull/2084), " +"[#2093](https://github.com/adap/flower/pull/2093), " +"[#2109](https://github.com/adap/flower/pull/2109), " +"[#2095](https://github.com/adap/flower/pull/2095), " +"[#2140](https://github.com/adap/flower/pull/2140), " +"[#2137](https://github.com/adap/flower/pull/2137), " +"[#2165](https://github.com/adap/flower/pull/2165))" -#: ../../source/ref-changelog.md:232 +#: ../../source/ref-changelog.md:298 msgid "" -"A new testing infrastructure ensures that new changes stay compatible with " -"existing framework integrations or strategies." +"A new testing infrastructure ensures that new changes stay compatible " +"with existing framework integrations or strategies." msgstr "新的测试设施可确保新的变更与现有的框架集成或策略保持兼容。" -#: ../../source/ref-changelog.md:234 +#: ../../source/ref-changelog.md:300 msgid "**Deprecate Python 3.7**" msgstr "** 过时的 Python 3.7**" -#: ../../source/ref-changelog.md:236 +#: ../../source/ref-changelog.md:302 msgid "" -"Since Python 3.7 reached its end of life (EOL) on 2023-06-27, support for " -"Python 3.7 is now deprecated and will be removed in an upcoming release." -msgstr "" -"由于 Python 3.7 已于 2023-06-27 弃用 (EOL),对 Python 3.7 的支持现已废弃,并" -"将在即将发布的版本中移除。" +"Since Python 3.7 reached its end of life (EOL) on 2023-06-27, support for" +" Python 3.7 is now deprecated and will be removed in an upcoming release." 
+msgstr "由于 Python 3.7 已于 2023-06-27 弃用 (EOL),对 Python 3.7 的支持现已废弃,并将在即将发布的版本中移除。" -#: ../../source/ref-changelog.md:238 +#: ../../source/ref-changelog.md:304 msgid "" -"**Add new** `FedTrimmedAvg` **strategy** ([#1769](https://github.com/adap/" -"flower/pull/1769), [#1853](https://github.com/adap/flower/pull/1853))" +"**Add new** `FedTrimmedAvg` **strategy** " +"([#1769](https://github.com/adap/flower/pull/1769), " +"[#1853](https://github.com/adap/flower/pull/1853))" msgstr "" -"**添加新的**`FedTrimmedAvg`**策略**([#1769](https://github.com/adap/flower/" -"pull/1769), [#1853](https://github.com/adap/flower/pull/1853)" +"**添加新的**`FedTrimmedAvg`**策略**([#1769](https://github.com/adap/flower/pull/1769)," +" [#1853](https://github.com/adap/flower/pull/1853)" -#: ../../source/ref-changelog.md:240 +#: ../../source/ref-changelog.md:306 msgid "" -"The new `FedTrimmedAvg` strategy implements Trimmed Mean by [Dong Yin, 2018]" -"(https://arxiv.org/abs/1803.01498)." +"The new `FedTrimmedAvg` strategy implements Trimmed Mean by [Dong Yin, " +"2018](https://arxiv.org/abs/1803.01498)." msgstr "" -"新的 \"FedTrimmedAvg \"策略实现了[Dong Yin, 2018](https://arxiv.org/" -"abs/1803.01498)的 \"Trimmed Mean\"。" +"新的 \"FedTrimmedAvg \"策略实现了[Dong Yin, " +"2018](https://arxiv.org/abs/1803.01498)的 \"Trimmed Mean\"。" -#: ../../source/ref-changelog.md:242 +#: ../../source/ref-changelog.md:308 msgid "" -"**Introduce start_driver** ([#1697](https://github.com/adap/flower/" -"pull/1697))" -msgstr "" -"**引入 start_driver**([#1697](https://github.com/adap/flower/pull/1697))" +"**Introduce start_driver** " +"([#1697](https://github.com/adap/flower/pull/1697))" +msgstr "**引入 start_driver**([#1697](https://github.com/adap/flower/pull/1697))" -#: ../../source/ref-changelog.md:244 +#: ../../source/ref-changelog.md:310 msgid "" -"In addition to `start_server` and using the raw Driver API, there is a new " -"`start_driver` function that allows for running `start_server` scripts as a " -"Flower driver with only a single-line code change. Check out the `mt-" -"pytorch` code example to see a working example using `start_driver`." +"In addition to `start_server` and using the raw Driver API, there is a " +"new `start_driver` function that allows for running `start_server` " +"scripts as a Flower driver with only a single-line code change. Check out" +" the `mt-pytorch` code example to see a working example using " +"`start_driver`." msgstr "" -"除了 `start_server` 和使用原始驱动 API 之外,还有一个新的 `start_driver` 函" -"数,只需修改一行代码,就能将 `start_server` 脚本作为 Flower 驱动程序运行。请" -"查看 `mt-pytorch` 代码示例,了解使用 `start_driver` 的工作示例。" +"除了 `start_server` 和使用原始驱动 API 之外,还有一个新的 `start_driver` 函数,只需修改一行代码,就能将 " +"`start_server` 脚本作为 Flower 驱动程序运行。请查看 `mt-pytorch` 代码示例,了解使用 " +"`start_driver` 的工作示例。" -#: ../../source/ref-changelog.md:246 +#: ../../source/ref-changelog.md:312 msgid "" -"**Add parameter aggregation to** `mt-pytorch` **code example** ([#1785]" -"(https://github.com/adap/flower/pull/1785))" +"**Add parameter aggregation to** `mt-pytorch` **code example** " +"([#1785](https://github.com/adap/flower/pull/1785))" msgstr "" -"为 `mt-pytorch` **代码示例**添加参数聚合 ([#1785](https://github.com/adap/" -"flower/pull/1785))" +"为 `mt-pytorch` **代码示例**添加参数聚合 " +"([#1785](https://github.com/adap/flower/pull/1785))" -#: ../../source/ref-changelog.md:248 +#: ../../source/ref-changelog.md:314 msgid "" -"The `mt-pytorch` example shows how to aggregate parameters when writing a " -"driver script. 
The included `driver.py` and `server.py` have been aligned to " -"demonstrate both the low-level way and the high-level way of building server-" -"side logic." +"The `mt-pytorch` example shows how to aggregate parameters when writing a" +" driver script. The included `driver.py` and `server.py` have been " +"aligned to demonstrate both the low-level way and the high-level way of " +"building server-side logic." msgstr "" -"`mt-pytorch`示例展示了如何在编写驱动程序脚本时聚合参数。附带的 `driver.py` " -"和 `server.py` 已经进行了调整,以演示构建服务器端逻辑的低级方法和高级方法。" +"`mt-pytorch`示例展示了如何在编写驱动程序脚本时聚合参数。附带的 `driver.py` 和 `server.py` " +"已经进行了调整,以演示构建服务器端逻辑的低级方法和高级方法。" -#: ../../source/ref-changelog.md:250 +#: ../../source/ref-changelog.md:316 msgid "" -"**Migrate experimental REST API to Starlette** ([2171](https://github.com/" -"adap/flower/pull/2171))" +"**Migrate experimental REST API to Starlette** " +"([2171](https://github.com/adap/flower/pull/2171))" msgstr "" -"**将实验性 REST API 移植到 Starlette** ([2171](https://github.com/adap/" -"flower/pull/2171))" +"**将实验性 REST API 移植到 Starlette** " +"([2171](https://github.com/adap/flower/pull/2171))" -#: ../../source/ref-changelog.md:252 +#: ../../source/ref-changelog.md:318 msgid "" -"The (experimental) REST API used to be implemented in [FastAPI](https://" -"fastapi.tiangolo.com/), but it has now been migrated to use [Starlette]" -"(https://www.starlette.io/) directly." +"The (experimental) REST API used to be implemented in " +"[FastAPI](https://fastapi.tiangolo.com/), but it has now been migrated to" +" use [Starlette](https://www.starlette.io/) directly." msgstr "" -"REST API(试验性)曾在 [FastAPI](https://fastapi.tiangolo.com/) 中实现,但现" -"在已迁移到直接使用 [Starlette](https://www.starlette.io/) 。" +"REST API(试验性)曾在 [FastAPI](https://fastapi.tiangolo.com/) 中实现,但现在已迁移到直接使用 " +"[Starlette](https://www.starlette.io/) 。" -#: ../../source/ref-changelog.md:254 +#: ../../source/ref-changelog.md:320 msgid "" -"Please note: The REST request-response API is still experimental and will " -"likely change significantly over time." -msgstr "" -"请注意:REST 请求-响应 API 仍处于试验阶段,随着时间的推移可能会发生重大变化。" +"Please note: The REST request-response API is still experimental and will" +" likely change significantly over time." +msgstr "请注意:REST 请求-响应 API 仍处于试验阶段,随着时间的推移可能会发生重大变化。" -#: ../../source/ref-changelog.md:256 +#: ../../source/ref-changelog.md:322 msgid "" -"**Introduce experimental gRPC request-response API** ([#1867](https://github." -"com/adap/flower/pull/1867), [#1901](https://github.com/adap/flower/" -"pull/1901))" +"**Introduce experimental gRPC request-response API** " +"([#1867](https://github.com/adap/flower/pull/1867), " +"[#1901](https://github.com/adap/flower/pull/1901))" msgstr "" -"**引入实验性 gRPC 请求-响应 API** ([#1867](https://github.com/adap/flower/" -"pull/1867), [#1901](https://github.com/adap/flower/pull/1901)" +"**引入实验性 gRPC 请求-响应 API** " +"([#1867](https://github.com/adap/flower/pull/1867), " +"[#1901](https://github.com/adap/flower/pull/1901)" -#: ../../source/ref-changelog.md:258 +#: ../../source/ref-changelog.md:324 msgid "" -"In addition to the existing gRPC API (based on bidirectional streaming) and " -"the experimental REST API, there is now a new gRPC API that uses a request-" -"response model to communicate with client nodes." +"In addition to the existing gRPC API (based on bidirectional streaming) " +"and the experimental REST API, there is now a new gRPC API that uses a " +"request-response model to communicate with client nodes." 
msgstr "" -"除了现有的 gRPC 应用程序接口(基于双向流)和试验性 REST 应用程序接口外,现在" -"还有一个新的 gRPC 应用程序接口,它使用请求-响应模型与客户端节点通信。" +"除了现有的 gRPC 应用程序接口(基于双向流)和试验性 REST 应用程序接口外,现在还有一个新的 gRPC " +"应用程序接口,它使用请求-响应模型与客户端节点通信。" -#: ../../source/ref-changelog.md:260 +#: ../../source/ref-changelog.md:326 msgid "" -"Please note: The gRPC request-response API is still experimental and will " -"likely change significantly over time." -msgstr "" -"请注意:gRPC 请求-响应 API 仍处于试验阶段,随着时间的推移可能会发生重大变化。" +"Please note: The gRPC request-response API is still experimental and will" +" likely change significantly over time." +msgstr "请注意:gRPC 请求-响应 API 仍处于试验阶段,随着时间的推移可能会发生重大变化。" -#: ../../source/ref-changelog.md:262 +#: ../../source/ref-changelog.md:328 msgid "" "**Replace the experimental** `start_client(rest=True)` **with the new** " -"`start_client(transport=\"rest\")` ([#1880](https://github.com/adap/flower/" -"pull/1880))" +"`start_client(transport=\"rest\")` " +"([#1880](https://github.com/adap/flower/pull/1880))" msgstr "" "**用新的** `start_client(transport=\"rest\")` 替换实验性** " -"`start_client(rest=True)` ([#1880](https://github.com/adap/flower/pull/1880))" +"`start_client(rest=True)` " +"([#1880](https://github.com/adap/flower/pull/1880))" -#: ../../source/ref-changelog.md:264 +#: ../../source/ref-changelog.md:330 msgid "" -"The (experimental) `start_client` argument `rest` was deprecated in favour " -"of a new argument `transport`. `start_client(transport=\"rest\")` will yield " -"the same behaviour as `start_client(rest=True)` did before. All code should " -"migrate to the new argument `transport`. The deprecated argument `rest` will " -"be removed in a future release." +"The (experimental) `start_client` argument `rest` was deprecated in " +"favour of a new argument `transport`. `start_client(transport=\"rest\")` " +"will yield the same behaviour as `start_client(rest=True)` did before. " +"All code should migrate to the new argument `transport`. The deprecated " +"argument `rest` will be removed in a future release." msgstr "" -"已废弃(试验性的)`start_client`参数`rest`,改用新参数`transport`。" -"`start_client(transport=\"rest\")`将产生与以前的`start_client(rest=True)`相同" -"的行为。所有代码都应迁移到新参数 `transport`。过时的参数 `rest` 将在今后的版" -"本中删除。" +"已废弃(试验性的)`start_client`参数`rest`,改用新参数`transport`。`start_client(transport=\"rest\")`将产生与以前的`start_client(rest=True)`相同的行为。所有代码都应迁移到新参数" +" `transport`。过时的参数 `rest` 将在今后的版本中删除。" -#: ../../source/ref-changelog.md:266 +#: ../../source/ref-changelog.md:332 msgid "" -"**Add a new gRPC option** ([#2197](https://github.com/adap/flower/pull/2197))" -msgstr "" -"** 添加一个新的 gRPC 选项**([#2197](https://github.com/adap/flower/" -"pull/2197))" +"**Add a new gRPC option** " +"([#2197](https://github.com/adap/flower/pull/2197))" +msgstr "** 添加一个新的 gRPC 选项**([#2197](https://github.com/adap/flower/pull/2197))" -#: ../../source/ref-changelog.md:268 +#: ../../source/ref-changelog.md:334 msgid "" -"We now start a gRPC server with the `grpc.keepalive_permit_without_calls` " -"option set to 0 by default. This prevents the clients from sending keepalive " -"pings when there is no outstanding stream." +"We now start a gRPC server with the `grpc.keepalive_permit_without_calls`" +" option set to 0 by default. This prevents the clients from sending " +"keepalive pings when there is no outstanding stream." 
msgstr "" -"现在我们启动一个 gRPC 服务器,并将 `grpc.keepalive_permit_without_calls` 选项" -"默认设置为 0。这将防止客户端在没有未处理数据流时发送 keepalive pings。" +"现在我们启动一个 gRPC 服务器,并将 `grpc.keepalive_permit_without_calls` 选项默认设置为 " +"0。这将防止客户端在没有未处理数据流时发送 keepalive pings。" -#: ../../source/ref-changelog.md:270 +#: ../../source/ref-changelog.md:336 msgid "" -"**Improve example notebooks** ([#2005](https://github.com/adap/flower/" -"pull/2005))" +"**Improve example notebooks** " +"([#2005](https://github.com/adap/flower/pull/2005))" msgstr "**改进示例笔记** ([#2005](https://github.com/adap/flower/pull/2005))" -#: ../../source/ref-changelog.md:272 +#: ../../source/ref-changelog.md:338 msgid "There's a new 30min Federated Learning PyTorch tutorial!" msgstr "有一个新的 30 分钟的联邦学习 PyTorch 教程!" -#: ../../source/ref-changelog.md:274 +#: ../../source/ref-changelog.md:340 msgid "" "**Example updates** ([#1772](https://github.com/adap/flower/pull/1772), " -"[#1873](https://github.com/adap/flower/pull/1873), [#1981](https://github." -"com/adap/flower/pull/1981), [#1988](https://github.com/adap/flower/" -"pull/1988), [#1984](https://github.com/adap/flower/pull/1984), [#1982]" -"(https://github.com/adap/flower/pull/1982), [#2112](https://github.com/adap/" -"flower/pull/2112), [#2144](https://github.com/adap/flower/pull/2144), [#2174]" -"(https://github.com/adap/flower/pull/2174), [#2225](https://github.com/adap/" -"flower/pull/2225), [#2183](https://github.com/adap/flower/pull/2183))" -msgstr "" -"**更新Example** ([#1772](https://github.com/adap/flower/pull/1772), [#1873]" -"(https://github.com/adap/flower/pull/1873), [#1981](https://github.com/adap/" -"flower/pull/1981), [#1988](https://github.com/adap/flower/pull/1988), [#1984]" -"(https://github.com/adap/flower/pull/1984), [#1982](https://github.com/adap/" -"flower/pull/1982), [#2112](https://github.com/adap/flower/pull/2112), [#2144]" -"(https://github.com/adap/flower/pull/2144), [#2174](https://github.com/adap/" -"flower/pull/2174), [#2225](https://github.com/adap/flower/pull/2225), [#2183]" -"(https://github.com/adap/flower/pull/2183))" - -#: ../../source/ref-changelog.md:276 +"[#1873](https://github.com/adap/flower/pull/1873), " +"[#1981](https://github.com/adap/flower/pull/1981), " +"[#1988](https://github.com/adap/flower/pull/1988), " +"[#1984](https://github.com/adap/flower/pull/1984), " +"[#1982](https://github.com/adap/flower/pull/1982), " +"[#2112](https://github.com/adap/flower/pull/2112), " +"[#2144](https://github.com/adap/flower/pull/2144), " +"[#2174](https://github.com/adap/flower/pull/2174), " +"[#2225](https://github.com/adap/flower/pull/2225), " +"[#2183](https://github.com/adap/flower/pull/2183))" +msgstr "" +"**更新Example** ([#1772](https://github.com/adap/flower/pull/1772), " +"[#1873](https://github.com/adap/flower/pull/1873), " +"[#1981](https://github.com/adap/flower/pull/1981), " +"[#1988](https://github.com/adap/flower/pull/1988), " +"[#1984](https://github.com/adap/flower/pull/1984), " +"[#1982](https://github.com/adap/flower/pull/1982), " +"[#2112](https://github.com/adap/flower/pull/2112), " +"[#2144](https://github.com/adap/flower/pull/2144), " +"[#2174](https://github.com/adap/flower/pull/2174), " +"[#2225](https://github.com/adap/flower/pull/2225), " +"[#2183](https://github.com/adap/flower/pull/2183))" + +#: ../../source/ref-changelog.md:342 msgid "" "Many examples have received significant updates, including simplified " "advanced-tensorflow and advanced-pytorch examples, improved macOS " -"compatibility of TensorFlow examples, and code examples for simulation. 
A " -"major upgrade is that all code examples now have a `requirements.txt` (in " -"addition to `pyproject.toml`)." -msgstr "" -"许多示例都进行了重大更新,包括简化了 advanced-tensorflow 和 advanced-pytorch " -"示例,改进了 TensorFlow 示例的 macOS 兼容性,以及模拟代码示例。一项重大升级是" -"所有代码示例现在都有了 \"requirements.txt\"(除 \"pyproject.toml \"外)。" - -#: ../../source/ref-changelog.md:278 -msgid "" -"**General improvements** ([#1872](https://github.com/adap/flower/pull/1872), " -"[#1866](https://github.com/adap/flower/pull/1866), [#1884](https://github." -"com/adap/flower/pull/1884), [#1837](https://github.com/adap/flower/" -"pull/1837), [#1477](https://github.com/adap/flower/pull/1477), [#2171]" -"(https://github.com/adap/flower/pull/2171))" -msgstr "" -"**普通改进**([#1872](https://github.com/adap/flower/pull/1872), [#1866]" -"(https://github.com/adap/flower/pull/1866), [#1884](https://github.com/adap/" -"flower/pull/1884), [#1837](https://github.com/adap/flower/pull/1837), [#1477]" -"(https://github.com/adap/flower/pull/1477), [#2171](https://github.com/adap/" -"flower/pull/2171))" - -#: ../../source/ref-changelog.md:284 ../../source/ref-changelog.md:348 -#: ../../source/ref-changelog.md:406 ../../source/ref-changelog.md:475 -#: ../../source/ref-changelog.md:537 -msgid "None" -msgstr "无" - -#: ../../source/ref-changelog.md:286 +"compatibility of TensorFlow examples, and code examples for simulation. A" +" major upgrade is that all code examples now have a `requirements.txt` " +"(in addition to `pyproject.toml`)." +msgstr "" +"许多示例都进行了重大更新,包括简化了 advanced-tensorflow 和 advanced-pytorch 示例,改进了 " +"TensorFlow 示例的 macOS 兼容性,以及模拟代码示例。一项重大升级是所有代码示例现在都有了 " +"\"requirements.txt\"(除 \"pyproject.toml \"外)。" + +#: ../../source/ref-changelog.md:344 +msgid "" +"**General improvements** " +"([#1872](https://github.com/adap/flower/pull/1872), " +"[#1866](https://github.com/adap/flower/pull/1866), " +"[#1884](https://github.com/adap/flower/pull/1884), " +"[#1837](https://github.com/adap/flower/pull/1837), " +"[#1477](https://github.com/adap/flower/pull/1477), " +"[#2171](https://github.com/adap/flower/pull/2171))" +msgstr "" +"**普通改进**([#1872](https://github.com/adap/flower/pull/1872), " +"[#1866](https://github.com/adap/flower/pull/1866), " +"[#1884](https://github.com/adap/flower/pull/1884), " +"[#1837](https://github.com/adap/flower/pull/1837), " +"[#1477](https://github.com/adap/flower/pull/1477), " +"[#2171](https://github.com/adap/flower/pull/2171))" + +#: ../../source/ref-changelog.md:352 msgid "v1.4.0 (2023-04-21)" msgstr "v1.4.0 (2023-04-21)" -#: ../../source/ref-changelog.md:292 +#: ../../source/ref-changelog.md:358 msgid "" "`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " -"`Chenyang Ma (Danny)`, `Daniel J. Beutel`, `Edoardo`, `Gautam Jajoo`, `Iacob-" -"Alexandru-Andrei`, `JDRanpariya`, `Jean Charle Yaacoub`, `Kunal Sarkhel`, " -"`L. Jiang`, `Lennart Behme`, `Max Kapsecker`, `Michał`, `Nic Lane`, " -"`Nikolaos Episkopos`, `Ragy`, `Saurav Maheshkar`, `Semo Yang`, `Steve " -"Laskaridis`, `Steven Hé (Sīchàng)`, `Taner Topal`" +"`Chenyang Ma (Danny)`, `Daniel J. Beutel`, `Edoardo`, `Gautam Jajoo`, " +"`Iacob-Alexandru-Andrei`, `JDRanpariya`, `Jean Charle Yaacoub`, `Kunal " +"Sarkhel`, `L. Jiang`, `Lennart Behme`, `Max Kapsecker`, `Michał`, `Nic " +"Lane`, `Nikolaos Episkopos`, `Ragy`, `Saurav Maheshkar`, `Semo Yang`, " +"`Steve Laskaridis`, `Steven Hé (Sīchàng)`, `Taner Topal`" msgstr "" "`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " -"`Chenyang Ma (Danny)`, `Daniel J. 
Beutel`, `Edoardo`, `Gautam Jajoo`, `Iacob-" -"Alexandru-Andrei`, `JDRanpariya`, `Jean Charle Yaacoub`, `Kunal Sarkhel`, " -"`L. Jiang`, `Lennart Behme`, `Max Kapsecker`, `Michał`, `Nic Lane`, " -"`Nikolaos Episkopos`, `Ragy`, `Saurav Maheshkar`, `Semo Yang`, `Steve " -"Laskaridis`, `Steven Hé (Sīchàng)`, `Taner Topal`" +"`Chenyang Ma (Danny)`, `Daniel J. Beutel`, `Edoardo`, `Gautam Jajoo`, " +"`Iacob-Alexandru-Andrei`, `JDRanpariya`, `Jean Charle Yaacoub`, `Kunal " +"Sarkhel`, `L. Jiang`, `Lennart Behme`, `Max Kapsecker`, `Michał`, `Nic " +"Lane`, `Nikolaos Episkopos`, `Ragy`, `Saurav Maheshkar`, `Semo Yang`, " +"`Steve Laskaridis`, `Steven Hé (Sīchàng)`, `Taner Topal`" -#: ../../source/ref-changelog.md:296 +#: ../../source/ref-changelog.md:362 msgid "" -"**Introduce support for XGBoost (**`FedXgbNnAvg` **strategy and example)** " -"([#1694](https://github.com/adap/flower/pull/1694), [#1709](https://github." -"com/adap/flower/pull/1709), [#1715](https://github.com/adap/flower/" -"pull/1715), [#1717](https://github.com/adap/flower/pull/1717), [#1763]" -"(https://github.com/adap/flower/pull/1763), [#1795](https://github.com/adap/" -"flower/pull/1795))" -msgstr "" -"**引入对XGBoost的支持(**`FedXgbNnAvg` **策略和示例)** ([#1694](https://" -"github.com/adap/flower/pull/1694), [#1709](https://github.com/adap/flower/" -"pull/1709), [#1715](https://github.com/adap/flower/pull/1715), [#1717]" -"(https://github.com/adap/flower/pull/1717), [#1763](https://github.com/adap/" -"flower/pull/1763), [#1795](https://github.com/adap/flower/pull/1795))" +"**Introduce support for XGBoost (**`FedXgbNnAvg` **strategy and " +"example)** ([#1694](https://github.com/adap/flower/pull/1694), " +"[#1709](https://github.com/adap/flower/pull/1709), " +"[#1715](https://github.com/adap/flower/pull/1715), " +"[#1717](https://github.com/adap/flower/pull/1717), " +"[#1763](https://github.com/adap/flower/pull/1763), " +"[#1795](https://github.com/adap/flower/pull/1795))" +msgstr "" +"**引入对XGBoost的支持(**`FedXgbNnAvg` **策略和示例)** " +"([#1694](https://github.com/adap/flower/pull/1694), " +"[#1709](https://github.com/adap/flower/pull/1709), " +"[#1715](https://github.com/adap/flower/pull/1715), " +"[#1717](https://github.com/adap/flower/pull/1717), " +"[#1763](https://github.com/adap/flower/pull/1763), " +"[#1795](https://github.com/adap/flower/pull/1795))" -#: ../../source/ref-changelog.md:298 +#: ../../source/ref-changelog.md:364 msgid "" "XGBoost is a tree-based ensemble machine learning algorithm that uses " -"gradient boosting to improve model accuracy. We added a new `FedXgbNnAvg` " -"[strategy](https://github.com/adap/flower/tree/main/src/py/flwr/server/" -"strategy/fedxgb_nn_avg.py), and a [code example](https://github.com/adap/" -"flower/tree/main/examples/xgboost-quickstart) that demonstrates the usage of " -"this new strategy in an XGBoost project." -msgstr "" -"XGBoost 是一种基于树的集合机器学习算法,它使用梯度提升来提高模型的准确性。我" -"们添加了一个新的 \"FedXgbNnAvg\"[策略](https://github.com/adap/flower/tree/" -"main/src/py/flwr/server/strategy/fedxgb_nn_avg.py)和一个[代码示例](https://" -"github.com/adap/flower/tree/main/examples/xgboost-quickstart),演示如何在 " -"XGBoost 项目中使用这个新策略。" +"gradient boosting to improve model accuracy. We added a new `FedXgbNnAvg`" +" " +"[strategy](https://github.com/adap/flower/tree/main/src/py/flwr/server/strategy/fedxgb_nn_avg.py)," +" and a [code example](https://github.com/adap/flower/tree/main/examples" +"/xgboost-quickstart) that demonstrates the usage of this new strategy in " +"an XGBoost project." 
+msgstr "" +"XGBoost 是一种基于树的集合机器学习算法,它使用梯度提升来提高模型的准确性。我们添加了一个新的 " +"\"FedXgbNnAvg\"[策略](https://github.com/adap/flower/tree/main/src/py/flwr/server/strategy/fedxgb_nn_avg.py)和一个[代码示例](https://github.com/adap/flower/tree/main/examples" +"/xgboost-quickstart),演示如何在 XGBoost 项目中使用这个新策略。" -#: ../../source/ref-changelog.md:300 +#: ../../source/ref-changelog.md:366 msgid "" -"**Introduce iOS SDK (preview)** ([#1621](https://github.com/adap/flower/" -"pull/1621), [#1764](https://github.com/adap/flower/pull/1764))" +"**Introduce iOS SDK (preview)** " +"([#1621](https://github.com/adap/flower/pull/1621), " +"[#1764](https://github.com/adap/flower/pull/1764))" msgstr "" -"**介绍 iOS SDK(预览版)** ([#1621](https://github.com/adap/flower/" -"pull/1621), [#1764](https://github.com/adap/flower/pull/1764))" +"**介绍 iOS SDK(预览版)** ([#1621](https://github.com/adap/flower/pull/1621), " +"[#1764](https://github.com/adap/flower/pull/1764))" -#: ../../source/ref-changelog.md:302 +#: ../../source/ref-changelog.md:368 msgid "" -"This is a major update for anyone wanting to implement Federated Learning on " -"iOS mobile devices. We now have a swift iOS SDK present under [src/swift/" -"flwr](https://github.com/adap/flower/tree/main/src/swift/flwr) that will " -"facilitate greatly the app creating process. To showcase its use, the [iOS " +"This is a major update for anyone wanting to implement Federated Learning" +" on iOS mobile devices. We now have a swift iOS SDK present under " +"[src/swift/flwr](https://github.com/adap/flower/tree/main/src/swift/flwr)" +" that will facilitate greatly the app creating process. To showcase its " +"use, the [iOS " "example](https://github.com/adap/flower/tree/main/examples/ios) has also " "been updated!" msgstr "" -"对于想要在 iOS 移动设备上实施联邦学习的人来说,这是一次重大更新。现在,我们" -"在 [src/swift/flwr](https://github.com/adap/flower/tree/main/src/swift/flwr) " -"下提供了一个迅捷的 iOS SDK,这将大大方便应用程序的创建过程。为了展示其使用情" -"况,我们还更新了 [iOS 示例](https://github.com/adap/flower/tree/main/" -"examples/ios)!" +"对于想要在 iOS 移动设备上实施联邦学习的人来说,这是一次重大更新。现在,我们在 " +"[src/swift/flwr](https://github.com/adap/flower/tree/main/src/swift/flwr)" +" 下提供了一个迅捷的 iOS SDK,这将大大方便应用程序的创建过程。为了展示其使用情况,我们还更新了 [iOS " +"示例](https://github.com/adap/flower/tree/main/examples/ios)!" -#: ../../source/ref-changelog.md:304 +#: ../../source/ref-changelog.md:370 msgid "" -"**Introduce new \"What is Federated Learning?\" tutorial** ([#1657](https://" -"github.com/adap/flower/pull/1657), [#1721](https://github.com/adap/flower/" -"pull/1721))" +"**Introduce new \"What is Federated Learning?\" tutorial** " +"([#1657](https://github.com/adap/flower/pull/1657), " +"[#1721](https://github.com/adap/flower/pull/1721))" msgstr "" -"**引入新的 \"什么是联邦学习?\"教程**([#1657](https://github.com/adap/" -"flower/pull/1657), [#1721](https://github.com/adap/flower/pull/1721)" +"**引入新的 " +"\"什么是联邦学习?\"教程**([#1657](https://github.com/adap/flower/pull/1657), " +"[#1721](https://github.com/adap/flower/pull/1721)" -#: ../../source/ref-changelog.md:306 +#: ../../source/ref-changelog.md:372 msgid "" -"A new [entry-level tutorial](https://flower.ai/docs/framework/tutorial-what-" -"is-federated-learning.html) in our documentation explains the basics of " -"Fedetated Learning. It enables anyone who's unfamiliar with Federated " -"Learning to start their journey with Flower. Forward it to anyone who's " +"A new [entry-level tutorial](https://flower.ai/docs/framework/tutorial-" +"what-is-federated-learning.html) in our documentation explains the basics" +" of Fedetated Learning. 
It enables anyone who's unfamiliar with Federated" +" Learning to start their journey with Flower. Forward it to anyone who's " "interested in Federated Learning!" msgstr "" -"我们的文档中新增了一个[入门级教程](https://flower.ai/docs/framework/tutorial-" -"what-is-federated-learning.html),解释了联邦学习的基础知识。它让任何不熟悉联" -"邦学习的人都能开始 Flower 之旅。请转发给对联邦学习感兴趣的人!" +"我们的文档中新增了一个[入门级教程](https://flower.ai/docs/framework/tutorial-what-is-" +"federated-learning.html),解释了联邦学习的基础知识。它让任何不熟悉联邦学习的人都能开始 Flower " +"之旅。请转发给对联邦学习感兴趣的人!" -#: ../../source/ref-changelog.md:308 +#: ../../source/ref-changelog.md:374 msgid "" -"**Introduce new Flower Baseline: FedProx MNIST** ([#1513](https://github.com/" -"adap/flower/pull/1513), [#1680](https://github.com/adap/flower/pull/1680), " -"[#1681](https://github.com/adap/flower/pull/1681), [#1679](https://github." -"com/adap/flower/pull/1679))" +"**Introduce new Flower Baseline: FedProx MNIST** " +"([#1513](https://github.com/adap/flower/pull/1513), " +"[#1680](https://github.com/adap/flower/pull/1680), " +"[#1681](https://github.com/adap/flower/pull/1681), " +"[#1679](https://github.com/adap/flower/pull/1679))" msgstr "" -"**引入新的 Flower Baseline: FedProx MNIST** ([#1513](https://github.com/" -"adap/flower/pull/1513), [#1680](https://github.com/adap/flower/pull/1680), " -"[#1681](https://github.com/adap/flower/pull/1681), [#1679](https://github." -"com/adap/flower/pull/1679)" +"**引入新的 Flower Baseline: FedProx MNIST** " +"([#1513](https://github.com/adap/flower/pull/1513), " +"[#1680](https://github.com/adap/flower/pull/1680), " +"[#1681](https://github.com/adap/flower/pull/1681), " +"[#1679](https://github.com/adap/flower/pull/1679)" -#: ../../source/ref-changelog.md:310 +#: ../../source/ref-changelog.md:376 msgid "" -"This new baseline replicates the MNIST+CNN task from the paper [Federated " -"Optimization in Heterogeneous Networks (Li et al., 2018)](https://arxiv.org/" -"abs/1812.06127). It uses the `FedProx` strategy, which aims at making " -"convergence more robust in heterogeneous settings." +"This new baseline replicates the MNIST+CNN task from the paper [Federated" +" Optimization in Heterogeneous Networks (Li et al., " +"2018)](https://arxiv.org/abs/1812.06127). It uses the `FedProx` strategy," +" which aims at making convergence more robust in heterogeneous settings." msgstr "" -"这条新Baseline复现了论文[Federated Optimization in Heterogeneous Networks " -"(Li et al., 2018)](https://arxiv.org/abs/1812.06127)中的 MNIST+CNN 任务。它使" -"用 \"FedProx \"策略,旨在使收敛在异构环境中更加稳健。" +"这条新Baseline复现了论文[Federated Optimization in Heterogeneous Networks (Li et " +"al., 2018)](https://arxiv.org/abs/1812.06127)中的 MNIST+CNN 任务。它使用 " +"\"FedProx \"策略,旨在使收敛在异构环境中更加稳健。" -#: ../../source/ref-changelog.md:312 +#: ../../source/ref-changelog.md:378 msgid "" -"**Introduce new Flower Baseline: FedAvg FEMNIST** ([#1655](https://github." -"com/adap/flower/pull/1655))" +"**Introduce new Flower Baseline: FedAvg FEMNIST** " +"([#1655](https://github.com/adap/flower/pull/1655))" msgstr "" -"**引入新的 Flower Baseline: FedAvg FEMNIST** ([#1655](https://github.com/" -"adap/flower/pull/1655))" +"**引入新的 Flower Baseline: FedAvg FEMNIST** " +"([#1655](https://github.com/adap/flower/pull/1655))" -#: ../../source/ref-changelog.md:314 +#: ../../source/ref-changelog.md:380 msgid "" -"This new baseline replicates an experiment evaluating the performance of the " -"FedAvg algorithm on the FEMNIST dataset from the paper [LEAF: A Benchmark " -"for Federated Settings (Caldas et al., 2018)](https://arxiv.org/" -"abs/1812.01097)." 
+"This new baseline replicates an experiment evaluating the performance of " +"the FedAvg algorithm on the FEMNIST dataset from the paper [LEAF: A " +"Benchmark for Federated Settings (Caldas et al., " +"2018)](https://arxiv.org/abs/1812.01097)." msgstr "" -"这一新Baseline复现了论文[LEAF: A Benchmark for Federated Settings(Caldas 等" -"人,2018 年)](https://arxiv.org/abs/1812.01097)中评估 FedAvg 算法在 FEMNIST " -"数据集上性能的实验。" +"这一新Baseline复现了论文[LEAF: A Benchmark for Federated Settings(Caldas 等人,2018 " +"年)](https://arxiv.org/abs/1812.01097)中评估 FedAvg 算法在 FEMNIST 数据集上性能的实验。" -#: ../../source/ref-changelog.md:316 +#: ../../source/ref-changelog.md:382 msgid "" -"**Introduce (experimental) REST API** ([#1594](https://github.com/adap/" -"flower/pull/1594), [#1690](https://github.com/adap/flower/pull/1690), [#1695]" -"(https://github.com/adap/flower/pull/1695), [#1712](https://github.com/adap/" -"flower/pull/1712), [#1802](https://github.com/adap/flower/pull/1802), [#1770]" -"(https://github.com/adap/flower/pull/1770), [#1733](https://github.com/adap/" -"flower/pull/1733))" -msgstr "" -"**引入(试验性)REST API** ([#1594](https://github.com/adap/flower/" -"pull/1594), [#1690](https://github.com/adap/flower/pull/1690), [#1695]" -"(https://github.com/adap/flower/pull/1695), [#1712](https://github.com/adap/" -"flower/pull/1712), [#1802](https://github.com/adap/flower/pull/1802), [#1770]" -"(https://github.com/adap/flower/pull/1770), [#1733](https://github.com/adap/" -"flower/pull/1733))" +"**Introduce (experimental) REST API** " +"([#1594](https://github.com/adap/flower/pull/1594), " +"[#1690](https://github.com/adap/flower/pull/1690), " +"[#1695](https://github.com/adap/flower/pull/1695), " +"[#1712](https://github.com/adap/flower/pull/1712), " +"[#1802](https://github.com/adap/flower/pull/1802), " +"[#1770](https://github.com/adap/flower/pull/1770), " +"[#1733](https://github.com/adap/flower/pull/1733))" +msgstr "" +"**引入(试验性)REST API** ([#1594](https://github.com/adap/flower/pull/1594), " +"[#1690](https://github.com/adap/flower/pull/1690), " +"[#1695](https://github.com/adap/flower/pull/1695), " +"[#1712](https://github.com/adap/flower/pull/1712), " +"[#1802](https://github.com/adap/flower/pull/1802), " +"[#1770](https://github.com/adap/flower/pull/1770), " +"[#1733](https://github.com/adap/flower/pull/1733))" -#: ../../source/ref-changelog.md:318 +#: ../../source/ref-changelog.md:384 msgid "" "A new REST API has been introduced as an alternative to the gRPC-based " "communication stack. In this initial version, the REST API only supports " "anonymous clients." -msgstr "" -"作为基于 gRPC 的通信栈的替代方案,我们引入了新的 REST API。在初始版本中," -"REST API 仅支持匿名客户端。" +msgstr "作为基于 gRPC 的通信栈的替代方案,我们引入了新的 REST API。在初始版本中,REST API 仅支持匿名客户端。" -#: ../../source/ref-changelog.md:320 +#: ../../source/ref-changelog.md:386 msgid "" "Please note: The REST API is still experimental and will likely change " "significantly over time." 
msgstr "请注意:REST API 仍处于试验阶段,随着时间的推移可能会发生重大变化。" -#: ../../source/ref-changelog.md:322 +#: ../../source/ref-changelog.md:388 msgid "" -"**Improve the (experimental) Driver API** ([#1663](https://github.com/adap/" -"flower/pull/1663), [#1666](https://github.com/adap/flower/pull/1666), [#1667]" -"(https://github.com/adap/flower/pull/1667), [#1664](https://github.com/adap/" -"flower/pull/1664), [#1675](https://github.com/adap/flower/pull/1675), [#1676]" -"(https://github.com/adap/flower/pull/1676), [#1693](https://github.com/adap/" -"flower/pull/1693), [#1662](https://github.com/adap/flower/pull/1662), [#1794]" -"(https://github.com/adap/flower/pull/1794))" -msgstr "" -"**改进(试验性)驱动程序应用程序接口** ([#1663](https://github.com/adap/" -"flower/pull/1663), [#1666](https://github.com/adap/flower/pull/1666), [#1667]" -"(https://github.com/adap/flower/pull/1667), [#1664](https://github.com/adap/" -"flower/pull/1664), [#1675](https://github.com/adap/flower/pull/1675), [#1676]" -"(https://github.com/adap/flower/pull/1676), [#1693](https://github.com/adap/" -"flower/pull/1693), [#1662](https://github.com/adap/flower/pull/1662), [#1794]" -"(https://github.com/adap/flower/pull/1794))" +"**Improve the (experimental) Driver API** " +"([#1663](https://github.com/adap/flower/pull/1663), " +"[#1666](https://github.com/adap/flower/pull/1666), " +"[#1667](https://github.com/adap/flower/pull/1667), " +"[#1664](https://github.com/adap/flower/pull/1664), " +"[#1675](https://github.com/adap/flower/pull/1675), " +"[#1676](https://github.com/adap/flower/pull/1676), " +"[#1693](https://github.com/adap/flower/pull/1693), " +"[#1662](https://github.com/adap/flower/pull/1662), " +"[#1794](https://github.com/adap/flower/pull/1794))" +msgstr "" +"**改进(试验性)驱动程序应用程序接口** ([#1663](https://github.com/adap/flower/pull/1663)," +" [#1666](https://github.com/adap/flower/pull/1666), " +"[#1667](https://github.com/adap/flower/pull/1667), " +"[#1664](https://github.com/adap/flower/pull/1664), " +"[#1675](https://github.com/adap/flower/pull/1675), " +"[#1676](https://github.com/adap/flower/pull/1676), " +"[#1693](https://github.com/adap/flower/pull/1693), " +"[#1662](https://github.com/adap/flower/pull/1662), " +"[#1794](https://github.com/adap/flower/pull/1794))" -#: ../../source/ref-changelog.md:324 +#: ../../source/ref-changelog.md:390 msgid "" -"The Driver API is still an experimental feature, but this release introduces " -"some major upgrades. One of the main improvements is the introduction of an " -"SQLite database to store server state on disk (instead of in-memory). " -"Another improvement is that tasks (instructions or results) that have been " -"delivered will now be deleted. This greatly improves the memory efficiency " -"of a long-running Flower server." +"The Driver API is still an experimental feature, but this release " +"introduces some major upgrades. One of the main improvements is the " +"introduction of an SQLite database to store server state on disk (instead" +" of in-memory). Another improvement is that tasks (instructions or " +"results) that have been delivered will now be deleted. This greatly " +"improves the memory efficiency of a long-running Flower server." 
msgstr "" -"驱动程序应用程序接口(Driver API)仍是一项试验性功能,但这一版本引入了一些重" -"大升级。主要改进之一是引入了 SQLite 数据库,将服务器状态存储在磁盘上(而不是" -"内存中)。另一项改进是,已交付的任务(指令或结果)现在将被删除。这大大提高了" -"长期运行的 Flower 服务器的内存效率。" +"驱动程序应用程序接口(Driver API)仍是一项试验性功能,但这一版本引入了一些重大升级。主要改进之一是引入了 SQLite " +"数据库,将服务器状态存储在磁盘上(而不是内存中)。另一项改进是,已交付的任务(指令或结果)现在将被删除。这大大提高了长期运行的 Flower " +"服务器的内存效率。" -#: ../../source/ref-changelog.md:326 +#: ../../source/ref-changelog.md:392 msgid "" -"**Fix spilling issues related to Ray during simulations** ([#1698](https://" -"github.com/adap/flower/pull/1698))" -msgstr "" -"**修复模拟过程中与Ray有关的溢出问题** ([#1698](https://github.com/adap/" -"flower/pull/1698))" +"**Fix spilling issues related to Ray during simulations** " +"([#1698](https://github.com/adap/flower/pull/1698))" +msgstr "**修复模拟过程中与Ray有关的溢出问题** ([#1698](https://github.com/adap/flower/pull/1698))" -#: ../../source/ref-changelog.md:328 +#: ../../source/ref-changelog.md:394 msgid "" -"While running long simulations, `ray` was sometimes spilling huge amounts of " -"data that would make the training unable to continue. This is now fixed! 🎉" -msgstr "" -"在运行长时间模拟时,`ray` 有时会溢出大量数据,导致训练无法继续。现在这个问题" -"已经解决!🎉" +"While running long simulations, `ray` was sometimes spilling huge amounts" +" of data that would make the training unable to continue. This is now " +"fixed! 🎉" +msgstr "在运行长时间模拟时,`ray` 有时会溢出大量数据,导致训练无法继续。现在这个问题已经解决!🎉" -#: ../../source/ref-changelog.md:330 +#: ../../source/ref-changelog.md:396 msgid "" -"**Add new example using** `TabNet` **and Flower** ([#1725](https://github." -"com/adap/flower/pull/1725))" +"**Add new example using** `TabNet` **and Flower** " +"([#1725](https://github.com/adap/flower/pull/1725))" msgstr "" -"** 添加使用** `TabNet` ** 的新示例** ([#1725](https://github.com/adap/flower/" -"pull/1725))" +"** 添加使用** `TabNet` ** 的新示例** " +"([#1725](https://github.com/adap/flower/pull/1725))" -#: ../../source/ref-changelog.md:332 +#: ../../source/ref-changelog.md:398 msgid "" -"TabNet is a powerful and flexible framework for training machine learning " -"models on tabular data. We now have a federated example using Flower: " -"[quickstart-tabnet](https://github.com/adap/flower/tree/main/examples/" -"quickstart-tabnet)." +"TabNet is a powerful and flexible framework for training machine learning" +" models on tabular data. We now have a federated example using Flower: " +"[quickstart-tabnet](https://github.com/adap/flower/tree/main/examples" +"/quickstart-tabnet)." msgstr "" -"TabNet 是一个强大而灵活的框架,用于在表格数据上训练机器学习模型。我们现在有一" -"个使用 Flower 的联邦示例:[quickstart-tabnet](https://github.com/adap/flower/" -"tree/main/examples/quickstart-tabnet)。" +"TabNet 是一个强大而灵活的框架,用于在表格数据上训练机器学习模型。我们现在有一个使用 Flower 的联邦示例:[quickstart-" +"tabnet](https://github.com/adap/flower/tree/main/examples/quickstart-" +"tabnet)。" -#: ../../source/ref-changelog.md:334 +#: ../../source/ref-changelog.md:400 msgid "" -"**Add new how-to guide for monitoring simulations** ([#1649](https://github." -"com/adap/flower/pull/1649))" -msgstr "" -"** 添加新的模拟监控指南** ([#1649](https://github.com/adap/flower/pull/1649))" +"**Add new how-to guide for monitoring simulations** " +"([#1649](https://github.com/adap/flower/pull/1649))" +msgstr "** 添加新的模拟监控指南** ([#1649](https://github.com/adap/flower/pull/1649))" -#: ../../source/ref-changelog.md:336 +#: ../../source/ref-changelog.md:402 msgid "" -"We now have a documentation guide to help users monitor their performance " -"during simulations." +"We now have a documentation guide to help users monitor their performance" +" during simulations." 
msgstr "我们现在有一份文档指南,可帮助用户在模拟过程中监控其性能。" -#: ../../source/ref-changelog.md:338 +#: ../../source/ref-changelog.md:404 msgid "" -"**Add training metrics to** `History` **object during simulations** ([#1696]" -"(https://github.com/adap/flower/pull/1696))" +"**Add training metrics to** `History` **object during simulations** " +"([#1696](https://github.com/adap/flower/pull/1696))" msgstr "" -"**在模拟过程中为***`历史`***对象添加训练指标*** ([#1696](https://github.com/" -"adap/flower/pull/1696))" +"**在模拟过程中为***`历史`***对象添加训练指标*** " +"([#1696](https://github.com/adap/flower/pull/1696))" -#: ../../source/ref-changelog.md:340 +#: ../../source/ref-changelog.md:406 msgid "" -"The `fit_metrics_aggregation_fn` can be used to aggregate training metrics, " -"but previous releases did not save the results in the `History` object. This " -"is now the case!" +"The `fit_metrics_aggregation_fn` can be used to aggregate training " +"metrics, but previous releases did not save the results in the `History` " +"object. This is now the case!" msgstr "" -"`fit_metrics_aggregation_fn`可用于汇总训练指标,但以前的版本不会将结果保存在 " -"\"History \"对象中。现在可以了!" +"`fit_metrics_aggregation_fn`可用于汇总训练指标,但以前的版本不会将结果保存在 \"History " +"\"对象中。现在可以了!" -#: ../../source/ref-changelog.md:342 +#: ../../source/ref-changelog.md:408 msgid "" -"**General improvements** ([#1659](https://github.com/adap/flower/pull/1659), " -"[#1646](https://github.com/adap/flower/pull/1646), [#1647](https://github." -"com/adap/flower/pull/1647), [#1471](https://github.com/adap/flower/" -"pull/1471), [#1648](https://github.com/adap/flower/pull/1648), [#1651]" -"(https://github.com/adap/flower/pull/1651), [#1652](https://github.com/adap/" -"flower/pull/1652), [#1653](https://github.com/adap/flower/pull/1653), [#1659]" -"(https://github.com/adap/flower/pull/1659), [#1665](https://github.com/adap/" -"flower/pull/1665), [#1670](https://github.com/adap/flower/pull/1670), [#1672]" -"(https://github.com/adap/flower/pull/1672), [#1677](https://github.com/adap/" -"flower/pull/1677), [#1684](https://github.com/adap/flower/pull/1684), [#1683]" -"(https://github.com/adap/flower/pull/1683), [#1686](https://github.com/adap/" -"flower/pull/1686), [#1682](https://github.com/adap/flower/pull/1682), [#1685]" -"(https://github.com/adap/flower/pull/1685), [#1692](https://github.com/adap/" -"flower/pull/1692), [#1705](https://github.com/adap/flower/pull/1705), [#1708]" -"(https://github.com/adap/flower/pull/1708), [#1711](https://github.com/adap/" -"flower/pull/1711), [#1713](https://github.com/adap/flower/pull/1713), [#1714]" -"(https://github.com/adap/flower/pull/1714), [#1718](https://github.com/adap/" -"flower/pull/1718), [#1716](https://github.com/adap/flower/pull/1716), [#1723]" -"(https://github.com/adap/flower/pull/1723), [#1735](https://github.com/adap/" -"flower/pull/1735), [#1678](https://github.com/adap/flower/pull/1678), [#1750]" -"(https://github.com/adap/flower/pull/1750), [#1753](https://github.com/adap/" -"flower/pull/1753), [#1736](https://github.com/adap/flower/pull/1736), [#1766]" -"(https://github.com/adap/flower/pull/1766), [#1760](https://github.com/adap/" -"flower/pull/1760), [#1775](https://github.com/adap/flower/pull/1775), [#1776]" -"(https://github.com/adap/flower/pull/1776), [#1777](https://github.com/adap/" -"flower/pull/1777), [#1779](https://github.com/adap/flower/pull/1779), [#1784]" -"(https://github.com/adap/flower/pull/1784), [#1773](https://github.com/adap/" -"flower/pull/1773), [#1755](https://github.com/adap/flower/pull/1755), [#1789]" 
-"(https://github.com/adap/flower/pull/1789), [#1788](https://github.com/adap/" -"flower/pull/1788), [#1798](https://github.com/adap/flower/pull/1798), [#1799]" -"(https://github.com/adap/flower/pull/1799), [#1739](https://github.com/adap/" -"flower/pull/1739), [#1800](https://github.com/adap/flower/pull/1800), [#1804]" -"(https://github.com/adap/flower/pull/1804), [#1805](https://github.com/adap/" -"flower/pull/1805))" -msgstr "" -"**普通改进** ([#1659](https://github.com/adap/flower/pull/1659), [#1646]" -"(https://github.com/adap/flower/pull/1646), [#1647](https://github.com/adap/" -"flower/pull/1647), [#1471](https://github.com/adap/flower/pull/1471), [#1648]" -"(https://github.com/adap/flower/pull/1648), [#1651](https://github.com/adap/" -"flower/pull/1651), [#1652](https://github.com/adap/flower/pull/1652), [#1653]" -"(https://github.com/adap/flower/pull/1653), [#1659](https://github.com/adap/" -"flower/pull/1659), [#1665](https://github.com/adap/flower/pull/1665), [#1670]" -"(https://github.com/adap/flower/pull/1670), [#1672](https://github.com/adap/" -"flower/pull/1672), [#1677](https://github.com/adap/flower/pull/1677), [#1684]" -"(https://github.com/adap/flower/pull/1684), [#1683](https://github.com/adap/" -"flower/pull/1683), [#1686](https://github.com/adap/flower/pull/1686), [#1682]" -"(https://github.com/adap/flower/pull/1682), [#1685](https://github.com/adap/" -"flower/pull/1685), [#1692](https://github.com/adap/flower/pull/1692), [#1705]" -"(https://github.com/adap/flower/pull/1705), [#1708](https://github.com/adap/" -"flower/pull/1708), [#1711](https://github.com/adap/flower/pull/1711), [#1713]" -"(https://github.com/adap/flower/pull/1713), [#1714](https://github.com/adap/" -"flower/pull/1714), [#1718](https://github.com/adap/flower/pull/1718), [#1716]" -"(https://github.com/adap/flower/pull/1716), [#1723](https://github.com/adap/" -"flower/pull/1723), [#1735](https://github.com/adap/flower/pull/1735), [#1678]" -"(https://github.com/adap/flower/pull/1678), [#1750](https://github.com/adap/" -"flower/pull/1750), [#1753](https://github.com/adap/flower/pull/1753), [#1736]" -"(https://github.com/adap/flower/pull/1736), [#1766](https://github.com/adap/" -"flower/pull/1766), [#1760](https://github.com/adap/flower/pull/1760), [#1775]" -"(https://github.com/adap/flower/pull/1775), [#1776](https://github.com/adap/" -"flower/pull/1776), [#1777](https://github.com/adap/flower/pull/1777), [#1779]" -"(https://github.com/adap/flower/pull/1779), [#1784](https://github.com/adap/" -"flower/pull/1784), [#1773](https://github.com/adap/flower/pull/1773), [#1755]" -"(https://github.com/adap/flower/pull/1755), [#1789](https://github.com/adap/" -"flower/pull/1789), [#1788](https://github.com/adap/flower/pull/1788), [#1798]" -"(https://github.com/adap/flower/pull/1798), [#1799](https://github.com/adap/" -"flower/pull/1799), [#1739](https://github.com/adap/flower/pull/1739), [#1800]" -"(https://github.com/adap/flower/pull/1800), [#1804](https://github.com/adap/" -"flower/pull/1804), [#1805](https://github.com/adap/flower/pull/1805))" - -#: ../../source/ref-changelog.md:350 +"**General improvements** " +"([#1659](https://github.com/adap/flower/pull/1659), " +"[#1646](https://github.com/adap/flower/pull/1646), " +"[#1647](https://github.com/adap/flower/pull/1647), " +"[#1471](https://github.com/adap/flower/pull/1471), " +"[#1648](https://github.com/adap/flower/pull/1648), " +"[#1651](https://github.com/adap/flower/pull/1651), " +"[#1652](https://github.com/adap/flower/pull/1652), " 
+"[#1653](https://github.com/adap/flower/pull/1653), " +"[#1659](https://github.com/adap/flower/pull/1659), " +"[#1665](https://github.com/adap/flower/pull/1665), " +"[#1670](https://github.com/adap/flower/pull/1670), " +"[#1672](https://github.com/adap/flower/pull/1672), " +"[#1677](https://github.com/adap/flower/pull/1677), " +"[#1684](https://github.com/adap/flower/pull/1684), " +"[#1683](https://github.com/adap/flower/pull/1683), " +"[#1686](https://github.com/adap/flower/pull/1686), " +"[#1682](https://github.com/adap/flower/pull/1682), " +"[#1685](https://github.com/adap/flower/pull/1685), " +"[#1692](https://github.com/adap/flower/pull/1692), " +"[#1705](https://github.com/adap/flower/pull/1705), " +"[#1708](https://github.com/adap/flower/pull/1708), " +"[#1711](https://github.com/adap/flower/pull/1711), " +"[#1713](https://github.com/adap/flower/pull/1713), " +"[#1714](https://github.com/adap/flower/pull/1714), " +"[#1718](https://github.com/adap/flower/pull/1718), " +"[#1716](https://github.com/adap/flower/pull/1716), " +"[#1723](https://github.com/adap/flower/pull/1723), " +"[#1735](https://github.com/adap/flower/pull/1735), " +"[#1678](https://github.com/adap/flower/pull/1678), " +"[#1750](https://github.com/adap/flower/pull/1750), " +"[#1753](https://github.com/adap/flower/pull/1753), " +"[#1736](https://github.com/adap/flower/pull/1736), " +"[#1766](https://github.com/adap/flower/pull/1766), " +"[#1760](https://github.com/adap/flower/pull/1760), " +"[#1775](https://github.com/adap/flower/pull/1775), " +"[#1776](https://github.com/adap/flower/pull/1776), " +"[#1777](https://github.com/adap/flower/pull/1777), " +"[#1779](https://github.com/adap/flower/pull/1779), " +"[#1784](https://github.com/adap/flower/pull/1784), " +"[#1773](https://github.com/adap/flower/pull/1773), " +"[#1755](https://github.com/adap/flower/pull/1755), " +"[#1789](https://github.com/adap/flower/pull/1789), " +"[#1788](https://github.com/adap/flower/pull/1788), " +"[#1798](https://github.com/adap/flower/pull/1798), " +"[#1799](https://github.com/adap/flower/pull/1799), " +"[#1739](https://github.com/adap/flower/pull/1739), " +"[#1800](https://github.com/adap/flower/pull/1800), " +"[#1804](https://github.com/adap/flower/pull/1804), " +"[#1805](https://github.com/adap/flower/pull/1805))" +msgstr "" +"**普通改进** ([#1659](https://github.com/adap/flower/pull/1659), " +"[#1646](https://github.com/adap/flower/pull/1646), " +"[#1647](https://github.com/adap/flower/pull/1647), " +"[#1471](https://github.com/adap/flower/pull/1471), " +"[#1648](https://github.com/adap/flower/pull/1648), " +"[#1651](https://github.com/adap/flower/pull/1651), " +"[#1652](https://github.com/adap/flower/pull/1652), " +"[#1653](https://github.com/adap/flower/pull/1653), " +"[#1659](https://github.com/adap/flower/pull/1659), " +"[#1665](https://github.com/adap/flower/pull/1665), " +"[#1670](https://github.com/adap/flower/pull/1670), " +"[#1672](https://github.com/adap/flower/pull/1672), " +"[#1677](https://github.com/adap/flower/pull/1677), " +"[#1684](https://github.com/adap/flower/pull/1684), " +"[#1683](https://github.com/adap/flower/pull/1683), " +"[#1686](https://github.com/adap/flower/pull/1686), " +"[#1682](https://github.com/adap/flower/pull/1682), " +"[#1685](https://github.com/adap/flower/pull/1685), " +"[#1692](https://github.com/adap/flower/pull/1692), " +"[#1705](https://github.com/adap/flower/pull/1705), " +"[#1708](https://github.com/adap/flower/pull/1708), " +"[#1711](https://github.com/adap/flower/pull/1711), " 
+"[#1713](https://github.com/adap/flower/pull/1713), " +"[#1714](https://github.com/adap/flower/pull/1714), " +"[#1718](https://github.com/adap/flower/pull/1718), " +"[#1716](https://github.com/adap/flower/pull/1716), " +"[#1723](https://github.com/adap/flower/pull/1723), " +"[#1735](https://github.com/adap/flower/pull/1735), " +"[#1678](https://github.com/adap/flower/pull/1678), " +"[#1750](https://github.com/adap/flower/pull/1750), " +"[#1753](https://github.com/adap/flower/pull/1753), " +"[#1736](https://github.com/adap/flower/pull/1736), " +"[#1766](https://github.com/adap/flower/pull/1766), " +"[#1760](https://github.com/adap/flower/pull/1760), " +"[#1775](https://github.com/adap/flower/pull/1775), " +"[#1776](https://github.com/adap/flower/pull/1776), " +"[#1777](https://github.com/adap/flower/pull/1777), " +"[#1779](https://github.com/adap/flower/pull/1779), " +"[#1784](https://github.com/adap/flower/pull/1784), " +"[#1773](https://github.com/adap/flower/pull/1773), " +"[#1755](https://github.com/adap/flower/pull/1755), " +"[#1789](https://github.com/adap/flower/pull/1789), " +"[#1788](https://github.com/adap/flower/pull/1788), " +"[#1798](https://github.com/adap/flower/pull/1798), " +"[#1799](https://github.com/adap/flower/pull/1799), " +"[#1739](https://github.com/adap/flower/pull/1739), " +"[#1800](https://github.com/adap/flower/pull/1800), " +"[#1804](https://github.com/adap/flower/pull/1804), " +"[#1805](https://github.com/adap/flower/pull/1805))" + +#: ../../source/ref-changelog.md:416 msgid "v1.3.0 (2023-02-06)" msgstr "v1.3.0 (2023-02-06)" -#: ../../source/ref-changelog.md:356 +#: ../../source/ref-changelog.md:422 msgid "" -"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, `Daniel " -"J. Beutel`, `JDRanpariya`, `Lennart Behme`, `Taner Topal`" +"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " +"`Daniel J. Beutel`, `JDRanpariya`, `Lennart Behme`, `Taner Topal`" msgstr "" -"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, `Daniel " -"J. Beutel`, `JDRanpariya`, `Lennart Behme`, `Taner Topal`" +"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " +"`Daniel J. Beutel`, `JDRanpariya`, `Lennart Behme`, `Taner Topal`" -#: ../../source/ref-changelog.md:360 +#: ../../source/ref-changelog.md:426 msgid "" "**Add support for** `workload_id` **and** `group_id` **in Driver API** " "([#1595](https://github.com/adap/flower/pull/1595))" msgstr "" -"**在驱动程序应用程序接口中添加对** `workload_id` **和** `group_id` **的支持" -"** ([#1595](https://github.com/adap/flower/pull/1595))" +"**在驱动程序应用程序接口中添加对** `workload_id` **和** `group_id` **的支持** " +"([#1595](https://github.com/adap/flower/pull/1595))" -#: ../../source/ref-changelog.md:362 +#: ../../source/ref-changelog.md:428 msgid "" -"The (experimental) Driver API now supports a `workload_id` that can be used " -"to identify which workload a task belongs to. It also supports a new " -"`group_id` that can be used, for example, to indicate the current training " -"round. Both the `workload_id` and `group_id` enable client nodes to decide " -"whether they want to handle a task or not." +"The (experimental) Driver API now supports a `workload_id` that can be " +"used to identify which workload a task belongs to. It also supports a new" +" `group_id` that can be used, for example, to indicate the current " +"training round. Both the `workload_id` and `group_id` enable client nodes" +" to decide whether they want to handle a task or not." 
msgstr "" -"驱动程序 API(试验性)现在支持 `workload_id`,可用于识别任务所属的工作量。它" -"还支持新的 `group_id`,例如,可用于指示当前的训练轮次。通过 `workload_id` 和 " -"`group_id` 客户端节点可以决定是否要处理某个任务。" +"驱动程序 API(试验性)现在支持 `workload_id`,可用于识别任务所属的工作量。它还支持新的 " +"`group_id`,例如,可用于指示当前的训练轮次。通过 `workload_id` 和 `group_id` " +"客户端节点可以决定是否要处理某个任务。" -#: ../../source/ref-changelog.md:364 +#: ../../source/ref-changelog.md:430 msgid "" -"**Make Driver API and Fleet API address configurable** ([#1637](https://" -"github.com/adap/flower/pull/1637))" +"**Make Driver API and Fleet API address configurable** " +"([#1637](https://github.com/adap/flower/pull/1637))" msgstr "" -"**使Driver API 和Fleet API地址可配置**([#1637](https://github.com/adap/" -"flower/pull/1637))" +"**使Driver API 和Fleet " +"API地址可配置**([#1637](https://github.com/adap/flower/pull/1637))" -#: ../../source/ref-changelog.md:366 +#: ../../source/ref-changelog.md:432 msgid "" -"The (experimental) long-running Flower server (Driver API and Fleet API) can " -"now configure the server address of both Driver API (via `--driver-api-" -"address`) and Fleet API (via `--fleet-api-address`) when starting:" +"The (experimental) long-running Flower server (Driver API and Fleet API) " +"can now configure the server address of both Driver API (via `--driver-" +"api-address`) and Fleet API (via `--fleet-api-address`) when starting:" msgstr "" -"长期运行的 Flower 服务器(Driver API 和 Fleet API)现在可以在启动时配置 " -"Driver API(通过 `--driver-api-address`)和 Fleet API(通过 `-fleet-api-" -"address`)的服务器地址:" +"长期运行的 Flower 服务器(Driver API 和 Fleet API)现在可以在启动时配置 Driver API(通过 " +"`--driver-api-address`)和 Fleet API(通过 `-fleet-api-address`)的服务器地址:" -#: ../../source/ref-changelog.md:368 +#: ../../source/ref-changelog.md:434 #, fuzzy msgid "" "`flower-server --driver-api-address \"0.0.0.0:8081\" --fleet-api-address " @@ -16821,931 +18284,977 @@ msgstr "" "`flower-server --driver-api-address \"0.0.0.0:8081\" --fleet-api-address " "\"0.0.0.0:8086\"`" -#: ../../source/ref-changelog.md:370 +#: ../../source/ref-changelog.md:436 msgid "Both IPv4 and IPv6 addresses are supported." msgstr "支持 IPv4 和 IPv6 地址。" -#: ../../source/ref-changelog.md:372 +#: ../../source/ref-changelog.md:438 msgid "" -"**Add new example of Federated Learning using fastai and Flower** ([#1598]" -"(https://github.com/adap/flower/pull/1598))" +"**Add new example of Federated Learning using fastai and Flower** " +"([#1598](https://github.com/adap/flower/pull/1598))" msgstr "" -"** 添加使用 fastai 和 Flower 进行联邦学习的新示例** ([#1598](https://github." -"com/adap/flower/pull/1598))" +"** 添加使用 fastai 和 Flower 进行联邦学习的新示例** " +"([#1598](https://github.com/adap/flower/pull/1598))" -#: ../../source/ref-changelog.md:374 +#: ../../source/ref-changelog.md:440 msgid "" "A new code example (`quickstart-fastai`) demonstrates federated learning " "with [fastai](https://www.fast.ai/) and Flower. You can find it here: " -"[quickstart-fastai](https://github.com/adap/flower/tree/main/examples/" -"quickstart-fastai)." -msgstr "" -"一个新的代码示例(`quickstart-fastai`)演示了使用 [fastai](https://www.fast." 
-"ai/) 和 Flower 的联邦学习。您可以在这里找到它: [quickstart-fastai](https://" -"github.com/adap/flower/tree/main/examples/quickstart-fastai)。" - -#: ../../source/ref-changelog.md:376 -msgid "" -"**Make Android example compatible with** `flwr >= 1.0.0` **and the latest " -"versions of Android** ([#1603](https://github.com/adap/flower/pull/1603))" -msgstr "" -"**使安卓示例兼容** `flwr >= 1.0.0` **和最新版本的安卓** ([#1603](https://" -"github.com/adap/flower/pull/1603))" - -#: ../../source/ref-changelog.md:378 -msgid "" -"The Android code example has received a substantial update: the project is " -"compatible with Flower 1.0 (and later), the UI received a full refresh, and " -"the project is updated to be compatible with newer Android tooling." -msgstr "" -"Android 代码示例已进行了大幅更新:项目兼容 Flower 1.0(及更高版本),用户界面" -"已全面刷新,项目已更新为兼容较新的 Android 工具。" - -#: ../../source/ref-changelog.md:380 -msgid "" -"**Add new `FedProx` strategy** ([#1619](https://github.com/adap/flower/" -"pull/1619))" -msgstr "" -"**添加新的`FedProx`策略** ([#1619](https://github.com/adap/flower/" -"pull/1619))" - -#: ../../source/ref-changelog.md:382 -msgid "" -"This [strategy](https://github.com/adap/flower/blob/main/src/py/flwr/server/" -"strategy/fedprox.py) is almost identical to [`FedAvg`](https://github.com/" -"adap/flower/blob/main/src/py/flwr/server/strategy/fedavg.py), but helps " -"users replicate what is described in this [paper](https://arxiv.org/" -"abs/1812.06127). It essentially adds a parameter called `proximal_mu` to " -"regularize the local models with respect to the global models." -msgstr "" -"该[策略](https://github.com/adap/flower/blob/main/src/py/flwr/server/" -"strategy/fedprox.py)与[`FedAvg`](https://github.com/adap/flower/blob/main/" -"src/py/flwr/server/strategy/fedavg.py)几乎相同,但可以帮助用户复现本[论文]" -"(https://arxiv.org/abs/1812.06127)中的描述。它的本质是添加一个名为 " -"`proximal_mu`的参数,使局部模型与全局模型正则化。" - -#: ../../source/ref-changelog.md:384 -msgid "" -"**Add new metrics to telemetry events** ([#1640](https://github.com/adap/" -"flower/pull/1640))" +"[quickstart-fastai](https://github.com/adap/flower/tree/main/examples" +"/quickstart-fastai)." msgstr "" -"**为遥测事件添加新指标**([#1640](https://github.com/adap/flower/pull/1640))" - -#: ../../source/ref-changelog.md:386 -msgid "" -"An updated event structure allows, for example, the clustering of events " -"within the same workload." 
-msgstr "例如,更新后的事件结构可以将同一工作负载中的事件集中在一起。" +"一个新的代码示例(`quickstart-fastai`)演示了使用 [fastai](https://www.fast.ai/) 和 " +"Flower 的联邦学习。您可以在这里找到它: [quickstart-" +"fastai](https://github.com/adap/flower/tree/main/examples/quickstart-" +"fastai)。" -#: ../../source/ref-changelog.md:388 +#: ../../source/ref-changelog.md:442 msgid "" -"**Add new custom strategy tutorial section** [#1623](https://github.com/adap/" -"flower/pull/1623)" +"**Make Android example compatible with** `flwr >= 1.0.0` **and the latest" +" versions of Android** " +"([#1603](https://github.com/adap/flower/pull/1603))" msgstr "" -"**添加新的自定义策略教程部分** [#1623](https://github.com/adap/flower/" -"pull/1623)" +"**使安卓示例兼容** `flwr >= 1.0.0` **和最新版本的安卓** " +"([#1603](https://github.com/adap/flower/pull/1603))" -#: ../../source/ref-changelog.md:390 +#: ../../source/ref-changelog.md:444 msgid "" -"The Flower tutorial now has a new section that covers implementing a custom " -"strategy from scratch: [Open in Colab](https://colab.research.google.com/" -"github/adap/flower/blob/main/doc/source/tutorial-build-a-strategy-from-" -"scratch-pytorch.ipynb)" +"The Android code example has received a substantial update: the project " +"is compatible with Flower 1.0 (and later), the UI received a full " +"refresh, and the project is updated to be compatible with newer Android " +"tooling." msgstr "" -"Flower 教程新增了一个章节,介绍如何从零开始实施自定义策略: [在 Colab 中打开]" -"(https://colab.research.google.com/github/adap/flower/blob/main/doc/source/" -"tutorial-build-a-strategy-from-scratch-pytorch.ipynb)" +"Android 代码示例已进行了大幅更新:项目兼容 Flower 1.0(及更高版本),用户界面已全面刷新,项目已更新为兼容较新的 Android" +" 工具。" -#: ../../source/ref-changelog.md:392 +#: ../../source/ref-changelog.md:446 msgid "" -"**Add new custom serialization tutorial section** ([#1622](https://github." -"com/adap/flower/pull/1622))" -msgstr "" -"** 添加新的自定义序列化教程部分** ([#1622](https://github.com/adap/flower/" -"pull/1622))" +"**Add new `FedProx` strategy** " +"([#1619](https://github.com/adap/flower/pull/1619))" +msgstr "**添加新的`FedProx`策略** ([#1619](https://github.com/adap/flower/pull/1619))" -#: ../../source/ref-changelog.md:394 +#: ../../source/ref-changelog.md:448 msgid "" -"The Flower tutorial now has a new section that covers custom serialization: " -"[Open in Colab](https://colab.research.google.com/github/adap/flower/blob/" -"main/doc/source/tutorial-customize-the-client-pytorch.ipynb)" +"This " +"[strategy](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedprox.py)" +" is almost identical to " +"[`FedAvg`](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedavg.py)," +" but helps users replicate what is described in this " +"[paper](https://arxiv.org/abs/1812.06127). It essentially adds a " +"parameter called `proximal_mu` to regularize the local models with " +"respect to the global models." msgstr "" -"Flower 教程现在新增了一个章节,介绍自定义序列化: [在 Colab 中打开](https://" -"colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-" -"customize-the-client-pytorch.ipynb)" - -#: ../../source/ref-changelog.md:396 -msgid "" -"**General improvements** ([#1638](https://github.com/adap/flower/pull/1638), " -"[#1634](https://github.com/adap/flower/pull/1634), [#1636](https://github." 
-"com/adap/flower/pull/1636), [#1635](https://github.com/adap/flower/" -"pull/1635), [#1633](https://github.com/adap/flower/pull/1633), [#1632]" -"(https://github.com/adap/flower/pull/1632), [#1631](https://github.com/adap/" -"flower/pull/1631), [#1630](https://github.com/adap/flower/pull/1630), [#1627]" -"(https://github.com/adap/flower/pull/1627), [#1593](https://github.com/adap/" -"flower/pull/1593), [#1616](https://github.com/adap/flower/pull/1616), [#1615]" -"(https://github.com/adap/flower/pull/1615), [#1607](https://github.com/adap/" -"flower/pull/1607), [#1609](https://github.com/adap/flower/pull/1609), [#1608]" -"(https://github.com/adap/flower/pull/1608), [#1603](https://github.com/adap/" -"flower/pull/1603), [#1590](https://github.com/adap/flower/pull/1590), [#1580]" -"(https://github.com/adap/flower/pull/1580), [#1599](https://github.com/adap/" -"flower/pull/1599), [#1600](https://github.com/adap/flower/pull/1600), [#1601]" -"(https://github.com/adap/flower/pull/1601), [#1597](https://github.com/adap/" -"flower/pull/1597), [#1595](https://github.com/adap/flower/pull/1595), [#1591]" -"(https://github.com/adap/flower/pull/1591), [#1588](https://github.com/adap/" -"flower/pull/1588), [#1589](https://github.com/adap/flower/pull/1589), [#1587]" -"(https://github.com/adap/flower/pull/1587), [#1573](https://github.com/adap/" -"flower/pull/1573), [#1581](https://github.com/adap/flower/pull/1581), [#1578]" -"(https://github.com/adap/flower/pull/1578), [#1574](https://github.com/adap/" -"flower/pull/1574), [#1572](https://github.com/adap/flower/pull/1572), [#1586]" -"(https://github.com/adap/flower/pull/1586))" -msgstr "" -"**普通改进** ([#1638](https://github.com/adap/flower/pull/1638), [#1634]" -"(https://github.com/adap/flower/pull/1634), [#1636](https://github.com/adap/" -"flower/pull/1636), [#1635](https://github.com/adap/flower/pull/1635), [#1633]" -"(https://github.com/adap/flower/pull/1633), [#1632](https://github.com/adap/" -"flower/pull/1632), [#1631](https://github.com/adap/flower/pull/1631), [#1630]" -"(https://github.com/adap/flower/pull/1630), [#1627](https://github. com/adap/" -"flower/pull/1627), [#1593](https://github.com/adap/flower/pull/1593), [#1616]" -"(https://github.com/adap/flower/pull/1616), [#1615](https://github.com/adap/" -"flower/pull/1615), [#1607](https://github.com/adap/flower/pull/1607), [#1609]" -"(https://github.com/adap/flower/pull/1609), [#1608](https://github.com/adap/" -"flower/pull/1608), [#1603](https://github.com/adap/flower/pull/1603), [#1590]" -"(https://github. com/adap/flower/pull/1590), [#1580](https://github.com/adap/" -"flower/pull/1580), [#1599](https://github.com/adap/flower/pull/1599), [#1600]" -"(https://github.com/adap/flower/pull/1600), [#1601](https://github.com/adap/" -"flower/pull/1601), [#1597](https://github.com/adap/flower/pull/1597), [#1595]" -"(https://github.com/adap/flower/pull/1595), [#1591](https://github.com/adap/" -"flower/pull/1591), [#1588](https://github. com/adap/flower/pull/1588), " -"[#1589](https://github.com/adap/flower/pull/1589), [#1587](https://github." 
-"com/adap/flower/pull/1587), [#1573](https://github.com/adap/flower/" -"pull/1573), [#1581](https://github.com/adap/flower/pull/1581), [#1578]" -"(https://github.com/adap/flower/pull/1578), [#1574](https://github.com/adap/" -"flower/pull/1574), [#1572](https://github.com/adap/flower/pull/1572), [#1586]" -"(https://github.com/adap/flower/pull/1586))" +"该[策略](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedprox.py)与[`FedAvg`](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedavg.py)几乎相同,但可以帮助用户复现本[论文](https://arxiv.org/abs/1812.06127)中的描述。它的本质是添加一个名为" +" `proximal_mu`的参数,使局部模型与全局模型正则化。" -#: ../../source/ref-changelog.md:400 +#: ../../source/ref-changelog.md:450 msgid "" -"**Updated documentation** ([#1629](https://github.com/adap/flower/" -"pull/1629), [#1628](https://github.com/adap/flower/pull/1628), [#1620]" -"(https://github.com/adap/flower/pull/1620), [#1618](https://github.com/adap/" -"flower/pull/1618), [#1617](https://github.com/adap/flower/pull/1617), [#1613]" -"(https://github.com/adap/flower/pull/1613), [#1614](https://github.com/adap/" -"flower/pull/1614))" -msgstr "" -"** 更新文档** ([#1629](https://github.com/adap/flower/pull/1629), [#1628]" -"(https://github.com/adap/flower/pull/1628), [#1620](https://github.com/adap/" -"flower/pull/1620), [#1618](https://github.com/adap/flower/pull/1618), [#1617]" -"(https://github.com/adap/flower/pull/1617), [#1613](https://github.com/adap/" -"flower/pull/1613), [#1614](https://github.com/adap/flower/pull/1614)))" +"**Add new metrics to telemetry events** " +"([#1640](https://github.com/adap/flower/pull/1640))" +msgstr "**为遥测事件添加新指标**([#1640](https://github.com/adap/flower/pull/1640))" -#: ../../source/ref-changelog.md:402 ../../source/ref-changelog.md:469 +#: ../../source/ref-changelog.md:452 msgid "" -"As usual, the documentation has improved quite a bit. It is another step in " -"our effort to make the Flower documentation the best documentation of any " -"project. Stay tuned and as always, feel free to provide feedback!" -msgstr "" -"和往常一样,我们的文档有了很大的改进。这是我们努力使 Flower 文档成为所有项目" -"中最好文档的又一步骤。请继续关注,并随时提供反馈意见!" +"An updated event structure allows, for example, the clustering of events " +"within the same workload." 
+msgstr "例如,更新后的事件结构可以将同一工作负载中的事件集中在一起。" -#: ../../source/ref-changelog.md:408 +#: ../../source/ref-changelog.md:454 +msgid "" +"**Add new custom strategy tutorial section** " +"[#1623](https://github.com/adap/flower/pull/1623)" +msgstr "**添加新的自定义策略教程部分** [#1623](https://github.com/adap/flower/pull/1623)" + +#: ../../source/ref-changelog.md:456 +msgid "" +"The Flower tutorial now has a new section that covers implementing a " +"custom strategy from scratch: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" +"/tutorial-build-a-strategy-from-scratch-pytorch.ipynb)" +msgstr "" +"Flower 教程新增了一个章节,介绍如何从零开始实施自定义策略: [在 Colab " +"中打开](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" +"/tutorial-build-a-strategy-from-scratch-pytorch.ipynb)" + +#: ../../source/ref-changelog.md:458 +msgid "" +"**Add new custom serialization tutorial section** " +"([#1622](https://github.com/adap/flower/pull/1622))" +msgstr "** 添加新的自定义序列化教程部分** ([#1622](https://github.com/adap/flower/pull/1622))" + +#: ../../source/ref-changelog.md:460 +msgid "" +"The Flower tutorial now has a new section that covers custom " +"serialization: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" +"/tutorial-customize-the-client-pytorch.ipynb)" +msgstr "" +"Flower 教程现在新增了一个章节,介绍自定义序列化: [在 Colab " +"中打开](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" +"/tutorial-customize-the-client-pytorch.ipynb)" + +#: ../../source/ref-changelog.md:462 +msgid "" +"**General improvements** " +"([#1638](https://github.com/adap/flower/pull/1638), " +"[#1634](https://github.com/adap/flower/pull/1634), " +"[#1636](https://github.com/adap/flower/pull/1636), " +"[#1635](https://github.com/adap/flower/pull/1635), " +"[#1633](https://github.com/adap/flower/pull/1633), " +"[#1632](https://github.com/adap/flower/pull/1632), " +"[#1631](https://github.com/adap/flower/pull/1631), " +"[#1630](https://github.com/adap/flower/pull/1630), " +"[#1627](https://github.com/adap/flower/pull/1627), " +"[#1593](https://github.com/adap/flower/pull/1593), " +"[#1616](https://github.com/adap/flower/pull/1616), " +"[#1615](https://github.com/adap/flower/pull/1615), " +"[#1607](https://github.com/adap/flower/pull/1607), " +"[#1609](https://github.com/adap/flower/pull/1609), " +"[#1608](https://github.com/adap/flower/pull/1608), " +"[#1603](https://github.com/adap/flower/pull/1603), " +"[#1590](https://github.com/adap/flower/pull/1590), " +"[#1580](https://github.com/adap/flower/pull/1580), " +"[#1599](https://github.com/adap/flower/pull/1599), " +"[#1600](https://github.com/adap/flower/pull/1600), " +"[#1601](https://github.com/adap/flower/pull/1601), " +"[#1597](https://github.com/adap/flower/pull/1597), " +"[#1595](https://github.com/adap/flower/pull/1595), " +"[#1591](https://github.com/adap/flower/pull/1591), " +"[#1588](https://github.com/adap/flower/pull/1588), " +"[#1589](https://github.com/adap/flower/pull/1589), " +"[#1587](https://github.com/adap/flower/pull/1587), " +"[#1573](https://github.com/adap/flower/pull/1573), " +"[#1581](https://github.com/adap/flower/pull/1581), " +"[#1578](https://github.com/adap/flower/pull/1578), " +"[#1574](https://github.com/adap/flower/pull/1574), " +"[#1572](https://github.com/adap/flower/pull/1572), " +"[#1586](https://github.com/adap/flower/pull/1586))" +msgstr "" +"**普通改进** ([#1638](https://github.com/adap/flower/pull/1638), " +"[#1634](https://github.com/adap/flower/pull/1634), " 
+"[#1636](https://github.com/adap/flower/pull/1636), " +"[#1635](https://github.com/adap/flower/pull/1635), " +"[#1633](https://github.com/adap/flower/pull/1633), " +"[#1632](https://github.com/adap/flower/pull/1632), " +"[#1631](https://github.com/adap/flower/pull/1631), " +"[#1630](https://github.com/adap/flower/pull/1630), " +"[#1627](https://github. com/adap/flower/pull/1627), " +"[#1593](https://github.com/adap/flower/pull/1593), " +"[#1616](https://github.com/adap/flower/pull/1616), " +"[#1615](https://github.com/adap/flower/pull/1615), " +"[#1607](https://github.com/adap/flower/pull/1607), " +"[#1609](https://github.com/adap/flower/pull/1609), " +"[#1608](https://github.com/adap/flower/pull/1608), " +"[#1603](https://github.com/adap/flower/pull/1603), " +"[#1590](https://github. com/adap/flower/pull/1590), " +"[#1580](https://github.com/adap/flower/pull/1580), " +"[#1599](https://github.com/adap/flower/pull/1599), " +"[#1600](https://github.com/adap/flower/pull/1600), " +"[#1601](https://github.com/adap/flower/pull/1601), " +"[#1597](https://github.com/adap/flower/pull/1597), " +"[#1595](https://github.com/adap/flower/pull/1595), " +"[#1591](https://github.com/adap/flower/pull/1591), " +"[#1588](https://github. com/adap/flower/pull/1588), " +"[#1589](https://github.com/adap/flower/pull/1589), " +"[#1587](https://github.com/adap/flower/pull/1587), " +"[#1573](https://github.com/adap/flower/pull/1573), " +"[#1581](https://github.com/adap/flower/pull/1581), " +"[#1578](https://github.com/adap/flower/pull/1578), " +"[#1574](https://github.com/adap/flower/pull/1574), " +"[#1572](https://github.com/adap/flower/pull/1572), " +"[#1586](https://github.com/adap/flower/pull/1586))" + +#: ../../source/ref-changelog.md:466 +msgid "" +"**Updated documentation** " +"([#1629](https://github.com/adap/flower/pull/1629), " +"[#1628](https://github.com/adap/flower/pull/1628), " +"[#1620](https://github.com/adap/flower/pull/1620), " +"[#1618](https://github.com/adap/flower/pull/1618), " +"[#1617](https://github.com/adap/flower/pull/1617), " +"[#1613](https://github.com/adap/flower/pull/1613), " +"[#1614](https://github.com/adap/flower/pull/1614))" +msgstr "" +"** 更新文档** ([#1629](https://github.com/adap/flower/pull/1629), " +"[#1628](https://github.com/adap/flower/pull/1628), " +"[#1620](https://github.com/adap/flower/pull/1620), " +"[#1618](https://github.com/adap/flower/pull/1618), " +"[#1617](https://github.com/adap/flower/pull/1617), " +"[#1613](https://github.com/adap/flower/pull/1613), " +"[#1614](https://github.com/adap/flower/pull/1614)))" + +#: ../../source/ref-changelog.md:468 ../../source/ref-changelog.md:535 +msgid "" +"As usual, the documentation has improved quite a bit. It is another step " +"in our effort to make the Flower documentation the best documentation of " +"any project. Stay tuned and as always, feel free to provide feedback!" +msgstr "和往常一样,我们的文档有了很大的改进。这是我们努力使 Flower 文档成为所有项目中最好文档的又一步骤。请继续关注,并随时提供反馈意见!" + +#: ../../source/ref-changelog.md:474 msgid "v1.2.0 (2023-01-13)" msgstr "v1.2.0 (2023-01-13)" -#: ../../source/ref-changelog.md:414 +#: ../../source/ref-changelog.md:480 msgid "" -"`Adam Narozniak`, `Charles Beauville`, `Daniel J. Beutel`, `Edoardo`, `L. " -"Jiang`, `Ragy`, `Taner Topal`, `dannymcy`" +"`Adam Narozniak`, `Charles Beauville`, `Daniel J. Beutel`, `Edoardo`, `L." +" Jiang`, `Ragy`, `Taner Topal`, `dannymcy`" msgstr "" -"`Adam Narozniak`, `Charles Beauville`, `Daniel J. Beutel`, `Edoardo`, `L. 
" -"Jiang`, `Ragy`, `Taner Topal`, `dannymcy`" +"`Adam Narozniak`, `Charles Beauville`, `Daniel J. Beutel`, `Edoardo`, `L." +" Jiang`, `Ragy`, `Taner Topal`, `dannymcy`" -#: ../../source/ref-changelog.md:418 +#: ../../source/ref-changelog.md:484 msgid "" -"**Introduce new Flower Baseline: FedAvg MNIST** ([#1497](https://github.com/" -"adap/flower/pull/1497), [#1552](https://github.com/adap/flower/pull/1552))" +"**Introduce new Flower Baseline: FedAvg MNIST** " +"([#1497](https://github.com/adap/flower/pull/1497), " +"[#1552](https://github.com/adap/flower/pull/1552))" msgstr "" -"**引入新的 Flower Baseline: FedAvg MNIST** ([#1497](https://github.com/adap/" -"flower/pull/1497), [#1552](https://github.com/adap/flower/pull/1552))" +"**引入新的 Flower Baseline: FedAvg MNIST** " +"([#1497](https://github.com/adap/flower/pull/1497), " +"[#1552](https://github.com/adap/flower/pull/1552))" -#: ../../source/ref-changelog.md:420 +#: ../../source/ref-changelog.md:486 msgid "" "Over the coming weeks, we will be releasing a number of new reference " "implementations useful especially to FL newcomers. They will typically " "revisit well known papers from the literature, and be suitable for " "integration in your own application or for experimentation, in order to " -"deepen your knowledge of FL in general. Today's release is the first in this " -"series. [Read more.](https://flower.ai/blog/2023-01-12-fl-starter-pack-" -"fedavg-mnist-cnn/)" +"deepen your knowledge of FL in general. Today's release is the first in " +"this series. [Read more.](https://flower.ai/blog/2023-01-12-fl-starter-" +"pack-fedavg-mnist-cnn/)" msgstr "" -"在未来几周内,我们将发布一些新的参考,特别是对 FL 新手有用的方法。它们通常会" -"重温文献中的知名论文,适合集成到您自己的应用程序中或用于实验,以加深您对 FL " -"的总体了解。今天发布的是该系列中的第一篇。[阅读全文](https://flower.ai/" -"blog/2023-01-12-fl-starter-pack-fedavg-mnist-cnn/)" +"在未来几周内,我们将发布一些新的参考,特别是对 FL " +"新手有用的方法。它们通常会重温文献中的知名论文,适合集成到您自己的应用程序中或用于实验,以加深您对 FL " +"的总体了解。今天发布的是该系列中的第一篇。[阅读全文](https://flower.ai/blog/2023-01-12-fl-starter-" +"pack-fedavg-mnist-cnn/)" -#: ../../source/ref-changelog.md:422 +#: ../../source/ref-changelog.md:488 msgid "" -"**Improve GPU support in simulations** ([#1555](https://github.com/adap/" -"flower/pull/1555))" -msgstr "" -"**改进模拟中的 GPU 支持**([#1555](https://github.com/adap/flower/" -"pull/1555))" +"**Improve GPU support in simulations** " +"([#1555](https://github.com/adap/flower/pull/1555))" +msgstr "**改进模拟中的 GPU 支持**([#1555](https://github.com/adap/flower/pull/1555))" -#: ../../source/ref-changelog.md:424 +#: ../../source/ref-changelog.md:490 msgid "" -"The Ray-based Virtual Client Engine (`start_simulation`) has been updated to " -"improve GPU support. The update includes some of the hard-earned lessons " -"from scaling simulations in GPU cluster environments. New defaults make " -"running GPU-based simulations substantially more robust." +"The Ray-based Virtual Client Engine (`start_simulation`) has been updated" +" to improve GPU support. The update includes some of the hard-earned " +"lessons from scaling simulations in GPU cluster environments. New " +"defaults make running GPU-based simulations substantially more robust." 
msgstr "" -"基于 Ray 的虚拟客户端引擎 (`start_simulation`)已更新,以改进对 GPU 的支持。此" -"次更新包含了在 GPU 集群环境中扩展模拟的一些经验教训。新的默认设置使基于 GPU " -"的模拟运行更加稳健。" +"基于 Ray 的虚拟客户端引擎 (`start_simulation`)已更新,以改进对 GPU 的支持。此次更新包含了在 GPU " +"集群环境中扩展模拟的一些经验教训。新的默认设置使基于 GPU 的模拟运行更加稳健。" -#: ../../source/ref-changelog.md:426 +#: ../../source/ref-changelog.md:492 msgid "" -"**Improve GPU support in Jupyter Notebook tutorials** ([#1527](https://" -"github.com/adap/flower/pull/1527), [#1558](https://github.com/adap/flower/" -"pull/1558))" +"**Improve GPU support in Jupyter Notebook tutorials** " +"([#1527](https://github.com/adap/flower/pull/1527), " +"[#1558](https://github.com/adap/flower/pull/1558))" msgstr "" -"**改进 Jupyter Notebook 教程中的 GPU 支持** ([#1527](https://github.com/adap/" -"flower/pull/1527), [#1558](https://github.com/adap/flower/pull/1558))" +"**改进 Jupyter Notebook 教程中的 GPU 支持** " +"([#1527](https://github.com/adap/flower/pull/1527), " +"[#1558](https://github.com/adap/flower/pull/1558))" -#: ../../source/ref-changelog.md:428 +#: ../../source/ref-changelog.md:494 msgid "" -"Some users reported that Jupyter Notebooks have not always been easy to use " -"on GPU instances. We listened and made improvements to all of our Jupyter " -"notebooks! Check out the updated notebooks here:" +"Some users reported that Jupyter Notebooks have not always been easy to " +"use on GPU instances. We listened and made improvements to all of our " +"Jupyter notebooks! Check out the updated notebooks here:" msgstr "" -"一些用户报告说,在 GPU 实例上使用 Jupyter 笔记本并不是很方便。我们听取了他们" -"的意见,并对所有 Jupyter 笔记本进行了改进!点击这里查看更新后的笔记本:" +"一些用户报告说,在 GPU 实例上使用 Jupyter 笔记本并不是很方便。我们听取了他们的意见,并对所有 Jupyter " +"笔记本进行了改进!点击这里查看更新后的笔记本:" -#: ../../source/ref-changelog.md:430 +#: ../../source/ref-changelog.md:496 msgid "" -"[An Introduction to Federated Learning](https://flower.ai/docs/framework/" -"tutorial-get-started-with-flower-pytorch.html)" +"[An Introduction to Federated Learning](https://flower.ai/docs/framework" +"/tutorial-get-started-with-flower-pytorch.html)" msgstr "" "[联邦学习简介](https://flower.ai/docs/framework/tutorial-get-started-with-" "flower-pytorch.html)" -#: ../../source/ref-changelog.md:431 +#: ../../source/ref-changelog.md:497 msgid "" -"[Strategies in Federated Learning](https://flower.ai/docs/framework/tutorial-" -"use-a-federated-learning-strategy-pytorch.html)" +"[Strategies in Federated Learning](https://flower.ai/docs/framework" +"/tutorial-use-a-federated-learning-strategy-pytorch.html)" msgstr "" "[联邦学习策略](https://flower.ai/docs/framework/tutorial-use-a-federated-" "learning-strategy-pytorch.html)" -#: ../../source/ref-changelog.md:432 +#: ../../source/ref-changelog.md:498 msgid "" -"[Building a Strategy](https://flower.ai/docs/framework/tutorial-build-a-" -"strategy-from-scratch-pytorch.html)" +"[Building a Strategy](https://flower.ai/docs/framework/tutorial-build-a" +"-strategy-from-scratch-pytorch.html)" msgstr "" "[制定策略](https://flower.ai/docs/framework/tutorial-build-a-strategy-from-" "scratch-pytorch.html)" -#: ../../source/ref-changelog.md:433 +#: ../../source/ref-changelog.md:499 msgid "" -"[Client and NumPyClient](https://flower.ai/docs/framework/tutorial-customize-" -"the-client-pytorch.html)" +"[Client and NumPyClient](https://flower.ai/docs/framework/tutorial-" +"customize-the-client-pytorch.html)" msgstr "" "[客户端和 NumPyClient](https://flower.ai/docs/framework/tutorial-customize-" "the-client-pytorch.html)" -#: ../../source/ref-changelog.md:435 +#: ../../source/ref-changelog.md:501 msgid "" -"**Introduce optional telemetry** 
([#1533](https://github.com/adap/flower/" -"pull/1533), [#1544](https://github.com/adap/flower/pull/1544), [#1584]" -"(https://github.com/adap/flower/pull/1584))" +"**Introduce optional telemetry** " +"([#1533](https://github.com/adap/flower/pull/1533), " +"[#1544](https://github.com/adap/flower/pull/1544), " +"[#1584](https://github.com/adap/flower/pull/1584))" msgstr "" -"**引入可选遥测**([#1533](https://github.com/adap/flower/pull/1533), [#1544]" -"(https://github.com/adap/flower/pull/1544), [#1584](https://github.com/adap/" -"flower/pull/1584)" +"**引入可选遥测**([#1533](https://github.com/adap/flower/pull/1533), " +"[#1544](https://github.com/adap/flower/pull/1544), " +"[#1584](https://github.com/adap/flower/pull/1584)" -#: ../../source/ref-changelog.md:437 +#: ../../source/ref-changelog.md:503 msgid "" -"After a [request for feedback](https://github.com/adap/flower/issues/1534) " -"from the community, the Flower open-source project introduces optional " -"collection of *anonymous* usage metrics to make well-informed decisions to " -"improve Flower. Doing this enables the Flower team to understand how Flower " -"is used and what challenges users might face." +"After a [request for " +"feedback](https://github.com/adap/flower/issues/1534) from the community," +" the Flower open-source project introduces optional collection of " +"*anonymous* usage metrics to make well-informed decisions to improve " +"Flower. Doing this enables the Flower team to understand how Flower is " +"used and what challenges users might face." msgstr "" "在社区发出[反馈请求](https://github.com/adap/flower/issues/1534)之后,Flower " -"开放源码项目引入了可选的*匿名*使用指标收集,以便在充分知情的情况下做出改进 " -"Flower 的决定。这样做能让 Flower 团队了解 Flower 的使用情况以及用户可能面临的" -"挑战。" +"开放源码项目引入了可选的*匿名*使用指标收集,以便在充分知情的情况下做出改进 Flower 的决定。这样做能让 Flower 团队了解 " +"Flower 的使用情况以及用户可能面临的挑战。" -#: ../../source/ref-changelog.md:439 +#: ../../source/ref-changelog.md:505 msgid "" -"**Flower is a friendly framework for collaborative AI and data science.** " -"Staying true to this statement, Flower makes it easy to disable telemetry " -"for users who do not want to share anonymous usage metrics. [Read more.]" -"(https://flower.ai/docs/telemetry.html)." +"**Flower is a friendly framework for collaborative AI and data science.**" +" Staying true to this statement, Flower makes it easy to disable " +"telemetry for users who do not want to share anonymous usage metrics. " +"[Read more.](https://flower.ai/docs/telemetry.html)." 
msgstr "" -"**Flower 是一个用于协作式人工智能和数据科学的友好框架。** Flower 遵循这一声" -"明,让不想分享匿名使用指标的用户可以轻松禁用遥测技术。[阅读全文](https://" -"flower.ai/docs/telemetry.html)。" +"**Flower 是一个用于协作式人工智能和数据科学的友好框架。** Flower " +"遵循这一声明,让不想分享匿名使用指标的用户可以轻松禁用遥测技术。[阅读全文](https://flower.ai/docs/telemetry.html)。" -#: ../../source/ref-changelog.md:441 +#: ../../source/ref-changelog.md:507 msgid "" -"**Introduce (experimental) Driver API** ([#1520](https://github.com/adap/" -"flower/pull/1520), [#1525](https://github.com/adap/flower/pull/1525), [#1545]" -"(https://github.com/adap/flower/pull/1545), [#1546](https://github.com/adap/" -"flower/pull/1546), [#1550](https://github.com/adap/flower/pull/1550), [#1551]" -"(https://github.com/adap/flower/pull/1551), [#1567](https://github.com/adap/" -"flower/pull/1567))" -msgstr "" -"**引入(试验性)Driver API** ([#1520](https://github.com/adap/flower/" -"pull/1520), [#1525](https://github.com/adap/flower/pull/1525), [#1545]" -"(https://github.com/adap/flower/pull/1545), [#1546](https://github.com/adap/" -"flower/pull/1546), [#1550](https://github.com/adap/flower/pull/1550), [#1551]" -"(https://github.com/adap/flower/pull/1551), [#1567](https://github.com/adap/" -"flower/pull/1567))" +"**Introduce (experimental) Driver API** " +"([#1520](https://github.com/adap/flower/pull/1520), " +"[#1525](https://github.com/adap/flower/pull/1525), " +"[#1545](https://github.com/adap/flower/pull/1545), " +"[#1546](https://github.com/adap/flower/pull/1546), " +"[#1550](https://github.com/adap/flower/pull/1550), " +"[#1551](https://github.com/adap/flower/pull/1551), " +"[#1567](https://github.com/adap/flower/pull/1567))" +msgstr "" +"**引入(试验性)Driver API** ([#1520](https://github.com/adap/flower/pull/1520)," +" [#1525](https://github.com/adap/flower/pull/1525), " +"[#1545](https://github.com/adap/flower/pull/1545), " +"[#1546](https://github.com/adap/flower/pull/1546), " +"[#1550](https://github.com/adap/flower/pull/1550), " +"[#1551](https://github.com/adap/flower/pull/1551), " +"[#1567](https://github.com/adap/flower/pull/1567))" -#: ../../source/ref-changelog.md:443 +#: ../../source/ref-changelog.md:509 msgid "" "Flower now has a new (experimental) Driver API which will enable fully " "programmable, async, and multi-tenant Federated Learning and Federated " -"Analytics applications. Phew, that's a lot! Going forward, the Driver API " -"will be the abstraction that many upcoming features will be built on - and " -"you can start building those things now, too." +"Analytics applications. Phew, that's a lot! Going forward, the Driver API" +" will be the abstraction that many upcoming features will be built on - " +"and you can start building those things now, too." msgstr "" -"Flower 现在有了一个新的(试验性的)驱动程序应用程序接口(Driver API),它将支" -"持完全可编程、异步和多租户的联邦学习(Federated Learning)和联邦分析" -"(Federated Analytics)应用程序。展望未来,Driver API 将成为许多即将推出的功" -"能的抽象基础,您现在就可以开始构建这些功能。" +"Flower 现在有了一个新的(试验性的)驱动程序应用程序接口(Driver " +"API),它将支持完全可编程、异步和多租户的联邦学习(Federated Learning)和联邦分析(Federated " +"Analytics)应用程序。展望未来,Driver API 将成为许多即将推出的功能的抽象基础,您现在就可以开始构建这些功能。" -#: ../../source/ref-changelog.md:445 +#: ../../source/ref-changelog.md:511 msgid "" -"The Driver API also enables a new execution mode in which the server runs " -"indefinitely. Multiple individual workloads can run concurrently and start " -"and stop their execution independent of the server. This is especially " -"useful for users who want to deploy Flower in production." +"The Driver API also enables a new execution mode in which the server runs" +" indefinitely. 
Multiple individual workloads can run concurrently and " +"start and stop their execution independent of the server. This is " +"especially useful for users who want to deploy Flower in production." msgstr "" -"驱动程序应用程序接口还支持一种新的执行模式,在这种模式下,服务器可无限期运" -"行。多个单独的工作负载可以同时运行,并独立于服务器启动和停止执行。这对于希望" -"在生产中部署 Flower 的用户来说尤其有用。" +"驱动程序应用程序接口还支持一种新的执行模式,在这种模式下,服务器可无限期运行。多个单独的工作负载可以同时运行,并独立于服务器启动和停止执行。这对于希望在生产中部署" +" Flower 的用户来说尤其有用。" -#: ../../source/ref-changelog.md:447 +#: ../../source/ref-changelog.md:513 msgid "" -"To learn more, check out the `mt-pytorch` code example. We look forward to " -"you feedback!" +"To learn more, check out the `mt-pytorch` code example. We look forward " +"to you feedback!" msgstr "要了解更多信息,请查看 `mt-pytorch` 代码示例。我们期待您的反馈!" -#: ../../source/ref-changelog.md:449 +#: ../../source/ref-changelog.md:515 msgid "" -"Please note: *The Driver API is still experimental and will likely change " -"significantly over time.*" +"Please note: *The Driver API is still experimental and will likely change" +" significantly over time.*" msgstr "请注意:Driver API仍处于试验阶段,随着时间的推移可能会发生重大变化。*" -#: ../../source/ref-changelog.md:451 +#: ../../source/ref-changelog.md:517 msgid "" -"**Add new Federated Analytics with Pandas example** ([#1469](https://github." -"com/adap/flower/pull/1469), [#1535](https://github.com/adap/flower/" -"pull/1535))" +"**Add new Federated Analytics with Pandas example** " +"([#1469](https://github.com/adap/flower/pull/1469), " +"[#1535](https://github.com/adap/flower/pull/1535))" msgstr "" -"** 添加新的使用 Pandas 的联邦分析示例**([#1469](https://github.com/adap/" -"flower/pull/1469), [#1535](https://github.com/adap/flower/pull/1535)" +"** 添加新的使用 Pandas " +"的联邦分析示例**([#1469](https://github.com/adap/flower/pull/1469), " +"[#1535](https://github.com/adap/flower/pull/1535)" -#: ../../source/ref-changelog.md:453 +#: ../../source/ref-changelog.md:519 msgid "" -"A new code example (`quickstart-pandas`) demonstrates federated analytics " -"with Pandas and Flower. You can find it here: [quickstart-pandas](https://" -"github.com/adap/flower/tree/main/examples/quickstart-pandas)." +"A new code example (`quickstart-pandas`) demonstrates federated analytics" +" with Pandas and Flower. You can find it here: [quickstart-" +"pandas](https://github.com/adap/flower/tree/main/examples/quickstart-" +"pandas)." msgstr "" -"新代码示例(`quickstart-pandas`)演示了使用 Pandas 和 Flower 进行联邦分析。您" -"可以在此处找到它: [quickstart-pandas](https://github.com/adap/flower/tree/" -"main/examples/quickstart-pandas)。" +"新代码示例(`quickstart-pandas`)演示了使用 Pandas 和 Flower 进行联邦分析。您可以在此处找到它: " +"[quickstart-pandas](https://github.com/adap/flower/tree/main/examples" +"/quickstart-pandas)。" -#: ../../source/ref-changelog.md:455 +#: ../../source/ref-changelog.md:521 msgid "" -"**Add new strategies: Krum and MultiKrum** ([#1481](https://github.com/adap/" -"flower/pull/1481))" +"**Add new strategies: Krum and MultiKrum** " +"([#1481](https://github.com/adap/flower/pull/1481))" msgstr "" -"**添加新策略: Krum 和 MultiKrum** ([#1481](https://github.com/adap/flower/" -"pull/1481))" +"**添加新策略: Krum 和 MultiKrum** " +"([#1481](https://github.com/adap/flower/pull/1481))" -#: ../../source/ref-changelog.md:457 +#: ../../source/ref-changelog.md:523 msgid "" "Edoardo, a computer science student at the Sapienza University of Rome, " -"contributed a new `Krum` strategy that enables users to easily use Krum and " -"MultiKrum in their workloads." +"contributed a new `Krum` strategy that enables users to easily use Krum " +"and MultiKrum in their workloads." 
msgstr "" -"罗马萨皮恩扎大学(Sapienza University)计算机科学专业的学生埃多尔多" -"(Edoardo)提出了一种新的 \"Krum \"策略,使用户能够在其工作负载中轻松使用 " -"Krum 和 MultiKrum。" +"罗马萨皮恩扎大学(Sapienza University)计算机科学专业的学生埃多尔多(Edoardo)提出了一种新的 \"Krum " +"\"策略,使用户能够在其工作负载中轻松使用 Krum 和 MultiKrum。" -#: ../../source/ref-changelog.md:459 +#: ../../source/ref-changelog.md:525 msgid "" -"**Update C++ example to be compatible with Flower v1.2.0** ([#1495](https://" -"github.com/adap/flower/pull/1495))" +"**Update C++ example to be compatible with Flower v1.2.0** " +"([#1495](https://github.com/adap/flower/pull/1495))" msgstr "" -"** 更新 C++ 示例,与 Flower v1.2.0 兼容** ([#1495](https://github.com/adap/" -"flower/pull/1495))" +"** 更新 C++ 示例,与 Flower v1.2.0 兼容** " +"([#1495](https://github.com/adap/flower/pull/1495))" -#: ../../source/ref-changelog.md:461 +#: ../../source/ref-changelog.md:527 msgid "" -"The C++ code example has received a substantial update to make it compatible " -"with the latest version of Flower." +"The C++ code example has received a substantial update to make it " +"compatible with the latest version of Flower." msgstr "为了与最新版本的 Flower 兼容,C++ 示例代码进行了大幅更新。" -#: ../../source/ref-changelog.md:463 -msgid "" -"**General improvements** ([#1491](https://github.com/adap/flower/pull/1491), " -"[#1504](https://github.com/adap/flower/pull/1504), [#1506](https://github." -"com/adap/flower/pull/1506), [#1514](https://github.com/adap/flower/" -"pull/1514), [#1522](https://github.com/adap/flower/pull/1522), [#1523]" -"(https://github.com/adap/flower/pull/1523), [#1526](https://github.com/adap/" -"flower/pull/1526), [#1528](https://github.com/adap/flower/pull/1528), [#1547]" -"(https://github.com/adap/flower/pull/1547), [#1549](https://github.com/adap/" -"flower/pull/1549), [#1560](https://github.com/adap/flower/pull/1560), [#1564]" -"(https://github.com/adap/flower/pull/1564), [#1566](https://github.com/adap/" -"flower/pull/1566))" -msgstr "" -"**普通改进** ([#1491](https://github.com/adap/flower/pull/1491), [#1504]" -"(https://github.com/adap/flower/pull/1504), [#1506](https://github.com/adap/" -"flower/pull/1506), [#1514](https://github.com/adap/flower/pull/1514), [#1522]" -"(https://github.com/adap/flower/pull/1522), [#1523](https://github.com/adap/" -"flower/pull/1523), [#1526](https://github. com/adap/flower/pull/1526), " -"[#1528](https://github.com/adap/flower/pull/1528), [#1547](https://github." 
-"com/adap/flower/pull/1547), [#1549](https://github.com/adap/flower/" -"pull/1549), [#1560](https://github.com/adap/flower/pull/1560), [#1564]" -"(https://github.com/adap/flower/pull/1564), [#1566](https://github.com/adap/" -"flower/pull/1566))" - -#: ../../source/ref-changelog.md:467 -msgid "" -"**Updated documentation** ([#1494](https://github.com/adap/flower/" -"pull/1494), [#1496](https://github.com/adap/flower/pull/1496), [#1500]" -"(https://github.com/adap/flower/pull/1500), [#1503](https://github.com/adap/" -"flower/pull/1503), [#1505](https://github.com/adap/flower/pull/1505), [#1524]" -"(https://github.com/adap/flower/pull/1524), [#1518](https://github.com/adap/" -"flower/pull/1518), [#1519](https://github.com/adap/flower/pull/1519), [#1515]" -"(https://github.com/adap/flower/pull/1515))" -msgstr "" -"** 更新文档** ([#1494](https://github.com/adap/flower/pull/1494), [#1496]" -"(https://github.com/adap/flower/pull/1496), [#1500](https://github.com/adap/" -"flower/pull/1500), [#1503](https://github.com/adap/flower/pull/1503), [#1505]" -"(https://github.com/adap/flower/pull/1505), [#1524](https://github.com/adap/" -"flower/pull/1524), [#1518](https://github.com/adap/flower/pull/1518), [#1519]" -"(https://github.com/adap/flower/pull/1519), [#1515](https://github.com/adap/" -"flower/pull/1515))" - -#: ../../source/ref-changelog.md:471 -msgid "" -"One highlight is the new [first time contributor guide](https://flower.ai/" -"docs/first-time-contributors.html): if you've never contributed on GitHub " -"before, this is the perfect place to start!" +#: ../../source/ref-changelog.md:529 +msgid "" +"**General improvements** " +"([#1491](https://github.com/adap/flower/pull/1491), " +"[#1504](https://github.com/adap/flower/pull/1504), " +"[#1506](https://github.com/adap/flower/pull/1506), " +"[#1514](https://github.com/adap/flower/pull/1514), " +"[#1522](https://github.com/adap/flower/pull/1522), " +"[#1523](https://github.com/adap/flower/pull/1523), " +"[#1526](https://github.com/adap/flower/pull/1526), " +"[#1528](https://github.com/adap/flower/pull/1528), " +"[#1547](https://github.com/adap/flower/pull/1547), " +"[#1549](https://github.com/adap/flower/pull/1549), " +"[#1560](https://github.com/adap/flower/pull/1560), " +"[#1564](https://github.com/adap/flower/pull/1564), " +"[#1566](https://github.com/adap/flower/pull/1566))" +msgstr "" +"**普通改进** ([#1491](https://github.com/adap/flower/pull/1491), " +"[#1504](https://github.com/adap/flower/pull/1504), " +"[#1506](https://github.com/adap/flower/pull/1506), " +"[#1514](https://github.com/adap/flower/pull/1514), " +"[#1522](https://github.com/adap/flower/pull/1522), " +"[#1523](https://github.com/adap/flower/pull/1523), " +"[#1526](https://github. 
com/adap/flower/pull/1526), " +"[#1528](https://github.com/adap/flower/pull/1528), " +"[#1547](https://github.com/adap/flower/pull/1547), " +"[#1549](https://github.com/adap/flower/pull/1549), " +"[#1560](https://github.com/adap/flower/pull/1560), " +"[#1564](https://github.com/adap/flower/pull/1564), " +"[#1566](https://github.com/adap/flower/pull/1566))" + +#: ../../source/ref-changelog.md:533 +msgid "" +"**Updated documentation** " +"([#1494](https://github.com/adap/flower/pull/1494), " +"[#1496](https://github.com/adap/flower/pull/1496), " +"[#1500](https://github.com/adap/flower/pull/1500), " +"[#1503](https://github.com/adap/flower/pull/1503), " +"[#1505](https://github.com/adap/flower/pull/1505), " +"[#1524](https://github.com/adap/flower/pull/1524), " +"[#1518](https://github.com/adap/flower/pull/1518), " +"[#1519](https://github.com/adap/flower/pull/1519), " +"[#1515](https://github.com/adap/flower/pull/1515))" +msgstr "" +"** 更新文档** ([#1494](https://github.com/adap/flower/pull/1494), " +"[#1496](https://github.com/adap/flower/pull/1496), " +"[#1500](https://github.com/adap/flower/pull/1500), " +"[#1503](https://github.com/adap/flower/pull/1503), " +"[#1505](https://github.com/adap/flower/pull/1505), " +"[#1524](https://github.com/adap/flower/pull/1524), " +"[#1518](https://github.com/adap/flower/pull/1518), " +"[#1519](https://github.com/adap/flower/pull/1519), " +"[#1515](https://github.com/adap/flower/pull/1515))" + +#: ../../source/ref-changelog.md:537 +msgid "" +"One highlight is the new [first time contributor " +"guide](https://flower.ai/docs/first-time-contributors.html): if you've " +"never contributed on GitHub before, this is the perfect place to start!" msgstr "" "其中一个亮点是新的[首次贡献者指南](https://flower.ai/docs/first-time-" -"contributors.html):如果你以前从未在 GitHub 上做过贡献,这将是一个完美的开" -"始!" +"contributors.html):如果你以前从未在 GitHub 上做过贡献,这将是一个完美的开始!" -#: ../../source/ref-changelog.md:477 +#: ../../source/ref-changelog.md:543 msgid "v1.1.0 (2022-10-31)" msgstr "v1.1.0 (2022-10-31)" -#: ../../source/ref-changelog.md:481 +#: ../../source/ref-changelog.md:547 msgid "" "We would like to give our **special thanks** to all the contributors who " "made the new version of Flower possible (in `git shortlog` order):" -msgstr "" -"在此,我们向所有促成 Flower 新版本的贡献者致以**特别的谢意(按 \"git " -"shortlog \"顺序排列):" +msgstr "在此,我们向所有促成 Flower 新版本的贡献者致以**特别的谢意(按 \"git shortlog \"顺序排列):" -#: ../../source/ref-changelog.md:483 +#: ../../source/ref-changelog.md:549 msgid "" "`Akis Linardos`, `Christopher S`, `Daniel J. Beutel`, `George`, `Jan " "Schlicht`, `Mohammad Fares`, `Pedro Porto Buarque de Gusmão`, `Philipp " -"Wiesner`, `Rob Luke`, `Taner Topal`, `VasundharaAgarwal`, `danielnugraha`, " -"`edogab33`" +"Wiesner`, `Rob Luke`, `Taner Topal`, `VasundharaAgarwal`, " +"`danielnugraha`, `edogab33`" msgstr "" "`Akis Linardos`, `Christopher S`, `Daniel J. 
Beutel`, `George`, `Jan " "Schlicht`, `Mohammad Fares`, `Pedro Porto Buarque de Gusmão`, `Philipp " -"Wiesner`, `Rob Luke`, `Taner Topal`, `VasundharaAgarwal`, `danielnugraha`, " -"`edogab33`" +"Wiesner`, `Rob Luke`, `Taner Topal`, `VasundharaAgarwal`, " +"`danielnugraha`, `edogab33`" -#: ../../source/ref-changelog.md:487 +#: ../../source/ref-changelog.md:553 msgid "" -"**Introduce Differential Privacy wrappers (preview)** ([#1357](https://" -"github.com/adap/flower/pull/1357), [#1460](https://github.com/adap/flower/" -"pull/1460))" +"**Introduce Differential Privacy wrappers (preview)** " +"([#1357](https://github.com/adap/flower/pull/1357), " +"[#1460](https://github.com/adap/flower/pull/1460))" msgstr "" -"**引入差分隐私包装器(预览)** ([#1357](https://github.com/adap/flower/" -"pull/1357), [#1460](https://github.com/adap/flower/pull/1460))" +"**引入差分隐私包装器(预览)** ([#1357](https://github.com/adap/flower/pull/1357), " +"[#1460](https://github.com/adap/flower/pull/1460))" -#: ../../source/ref-changelog.md:489 +#: ../../source/ref-changelog.md:555 msgid "" -"The first (experimental) preview of pluggable Differential Privacy wrappers " -"enables easy configuration and usage of differential privacy (DP). The " -"pluggable DP wrappers enable framework-agnostic **and** strategy-agnostic " -"usage of both client-side DP and server-side DP. Head over to the Flower " -"docs, a new explainer goes into more detail." +"The first (experimental) preview of pluggable Differential Privacy " +"wrappers enables easy configuration and usage of differential privacy " +"(DP). The pluggable DP wrappers enable framework-agnostic **and** " +"strategy-agnostic usage of both client-side DP and server-side DP. Head " +"over to the Flower docs, a new explainer goes into more detail." msgstr "" -"可插拔差分隐私封装器的首个(实验性)预览版可轻松配置和使用差分隐私(DP)。可" -"插拔的差分隐私封装器可实现客户端差分隐私和服务器端差分隐私的框架无关**以及**" -"策略无关的使用。请访问 Flower 文档,新的解释器会提供更多细节。" +"可插拔差分隐私封装器的首个(实验性)预览版可轻松配置和使用差分隐私(DP)。可插拔的差分隐私封装器可实现客户端差分隐私和服务器端差分隐私的框架无关**以及**策略无关的使用。请访问" +" Flower 文档,新的解释器会提供更多细节。" -#: ../../source/ref-changelog.md:491 +#: ../../source/ref-changelog.md:557 msgid "" -"**New iOS CoreML code example** ([#1289](https://github.com/adap/flower/" -"pull/1289))" -msgstr "" -"**新的 iOS CoreML 代码示例**([#1289](https://github.com/adap/flower/" -"pull/1289))" +"**New iOS CoreML code example** " +"([#1289](https://github.com/adap/flower/pull/1289))" +msgstr "**新的 iOS CoreML 代码示例**([#1289](https://github.com/adap/flower/pull/1289))" -#: ../../source/ref-changelog.md:493 +#: ../../source/ref-changelog.md:559 msgid "" -"Flower goes iOS! A massive new code example shows how Flower clients can be " -"built for iOS. The code example contains both Flower iOS SDK components that " -"can be used for many tasks, and one task example running on CoreML." +"Flower goes iOS! A massive new code example shows how Flower clients can " +"be built for iOS. The code example contains both Flower iOS SDK " +"components that can be used for many tasks, and one task example running " +"on CoreML." 
msgstr "" -"Flower 进入 iOS!大量新代码示例展示了如何为 iOS 构建 Flower 客户端。该代码示" -"例包含可用于多种任务的 Flower iOS SDK 组件,以及在 CoreML 上运行的一个任务示" -"例。" +"Flower 进入 iOS!大量新代码示例展示了如何为 iOS 构建 Flower 客户端。该代码示例包含可用于多种任务的 Flower iOS " +"SDK 组件,以及在 CoreML 上运行的一个任务示例。" -#: ../../source/ref-changelog.md:495 +#: ../../source/ref-changelog.md:561 msgid "" -"**New FedMedian strategy** ([#1461](https://github.com/adap/flower/" -"pull/1461))" -msgstr "" -"**新的联邦医疗策略** ([#1461](https://github.com/adap/flower/pull/1461))" +"**New FedMedian strategy** " +"([#1461](https://github.com/adap/flower/pull/1461))" +msgstr "**新的联邦医疗策略** ([#1461](https://github.com/adap/flower/pull/1461))" -#: ../../source/ref-changelog.md:497 +#: ../../source/ref-changelog.md:563 msgid "" -"The new `FedMedian` strategy implements Federated Median (FedMedian) by [Yin " -"et al., 2018](https://arxiv.org/pdf/1803.01498v1.pdf)." +"The new `FedMedian` strategy implements Federated Median (FedMedian) by " +"[Yin et al., 2018](https://arxiv.org/pdf/1803.01498v1.pdf)." msgstr "" -"新的 \"FedMedian \"战略实现了[Yin 等人,2018]的联邦中值(FedMedian)(https://" -"arxiv.org/pdf/1803.01498v1.pdf)。" +"新的 \"FedMedian \"战略实现了[Yin " +"等人,2018]的联邦中值(FedMedian)(https://arxiv.org/pdf/1803.01498v1.pdf)。" -#: ../../source/ref-changelog.md:499 +#: ../../source/ref-changelog.md:565 msgid "" -"**Log** `Client` **exceptions in Virtual Client Engine** ([#1493](https://" -"github.com/adap/flower/pull/1493))" -msgstr "" -"**虚拟客户端引擎中的**日志**`客户端`**异常([#1493](https://github.com/adap/" -"flower/pull/1493))" +"**Log** `Client` **exceptions in Virtual Client Engine** " +"([#1493](https://github.com/adap/flower/pull/1493))" +msgstr "**虚拟客户端引擎中的**日志**`客户端`**异常([#1493](https://github.com/adap/flower/pull/1493))" -#: ../../source/ref-changelog.md:501 +#: ../../source/ref-changelog.md:567 msgid "" -"All `Client` exceptions happening in the VCE are now logged by default and " -"not just exposed to the configured `Strategy` (via the `failures` argument)." -msgstr "" -"VCE 中发生的所有 \"客户端 \"异常现在都会被默认记录下来,而不只是暴露给配置的 " -"`Strategy`(通过 `failures`参数)。" +"All `Client` exceptions happening in the VCE are now logged by default " +"and not just exposed to the configured `Strategy` (via the `failures` " +"argument)." +msgstr "VCE 中发生的所有 \"客户端 \"异常现在都会被默认记录下来,而不只是暴露给配置的 `Strategy`(通过 `failures`参数)。" -#: ../../source/ref-changelog.md:503 +#: ../../source/ref-changelog.md:569 msgid "" -"**Improve Virtual Client Engine internals** ([#1401](https://github.com/adap/" -"flower/pull/1401), [#1453](https://github.com/adap/flower/pull/1453))" -msgstr "" -"**改进虚拟客户端引擎内部**([#1401](https://github.com/adap/flower/" -"pull/1401)、[#1453](https://github.com/adap/flower/pull/1453))" +"**Improve Virtual Client Engine internals** " +"([#1401](https://github.com/adap/flower/pull/1401), " +"[#1453](https://github.com/adap/flower/pull/1453))" +msgstr "**改进虚拟客户端引擎内部**([#1401](https://github.com/adap/flower/pull/1401)、[#1453](https://github.com/adap/flower/pull/1453))" -#: ../../source/ref-changelog.md:505 +#: ../../source/ref-changelog.md:571 msgid "" -"Some internals of the Virtual Client Engine have been revamped. The VCE now " -"uses Ray 2.0 under the hood, the value type of the `client_resources` " -"dictionary changed to `float` to allow fractions of resources to be " +"Some internals of the Virtual Client Engine have been revamped. The VCE " +"now uses Ray 2.0 under the hood, the value type of the `client_resources`" +" dictionary changed to `float` to allow fractions of resources to be " "allocated." 
msgstr "" -"虚拟客户端引擎的部分内部结构已进行了修改。VCE 现在使用 Ray " -"2.0,\"client_resources \"字典的值类型改为 \"float\",以允许分配分数资源。" +"虚拟客户端引擎的部分内部结构已进行了修改。VCE 现在使用 Ray 2.0,\"client_resources \"字典的值类型改为 " +"\"float\",以允许分配分数资源。" -#: ../../source/ref-changelog.md:507 +#: ../../source/ref-changelog.md:573 msgid "" -"**Support optional** `Client`**/**`NumPyClient` **methods in Virtual Client " -"Engine**" +"**Support optional** `Client`**/**`NumPyClient` **methods in Virtual " +"Client Engine**" msgstr "**支持虚拟客户端引擎中的可选** `Client`**/**`NumPyClient` **方法**" -#: ../../source/ref-changelog.md:509 +#: ../../source/ref-changelog.md:575 msgid "" -"The Virtual Client Engine now has full support for optional `Client` (and " -"`NumPyClient`) methods." +"The Virtual Client Engine now has full support for optional `Client` (and" +" `NumPyClient`) methods." msgstr "虚拟客户端引擎现在完全支持可选的 `Client`(和 `NumPyClient`)方法。" -#: ../../source/ref-changelog.md:511 +#: ../../source/ref-changelog.md:577 msgid "" -"**Provide type information to packages using** `flwr` ([#1377](https://" -"github.com/adap/flower/pull/1377))" +"**Provide type information to packages using** `flwr` " +"([#1377](https://github.com/adap/flower/pull/1377))" msgstr "" -"**使用** `flwr`向软件包提供类型信息 ([#1377](https://github.com/adap/flower/" -"pull/1377))" +"**使用** `flwr`向软件包提供类型信息 " +"([#1377](https://github.com/adap/flower/pull/1377))" -#: ../../source/ref-changelog.md:513 +#: ../../source/ref-changelog.md:579 msgid "" -"The package `flwr` is now bundled with a `py.typed` file indicating that the " -"package is typed. This enables typing support for projects or packages that " -"use `flwr` by enabling them to improve their code using static type checkers " -"like `mypy`." +"The package `flwr` is now bundled with a `py.typed` file indicating that " +"the package is typed. This enables typing support for projects or " +"packages that use `flwr` by enabling them to improve their code using " +"static type checkers like `mypy`." msgstr "" -"软件包 `flwr` 现在捆绑了一个 `py.typed` 文件,表明该软件包是类型化的。这样," -"使用 `flwr` 的项目或软件包就可以使用 `mypy` 等静态类型检查器改进代码,从而获" -"得类型支持。" +"软件包 `flwr` 现在捆绑了一个 `py.typed` 文件,表明该软件包是类型化的。这样,使用 `flwr` 的项目或软件包就可以使用 " +"`mypy` 等静态类型检查器改进代码,从而获得类型支持。" -#: ../../source/ref-changelog.md:515 +#: ../../source/ref-changelog.md:581 msgid "" -"**Updated code example** ([#1344](https://github.com/adap/flower/pull/1344), " +"**Updated code example** " +"([#1344](https://github.com/adap/flower/pull/1344), " "[#1347](https://github.com/adap/flower/pull/1347))" msgstr "" -"** 更新代码示例** ([#1344](https://github.com/adap/flower/pull/1344), [#1347]" -"(https://github.com/adap/flower/pull/1347))" +"** 更新代码示例** ([#1344](https://github.com/adap/flower/pull/1344), " +"[#1347](https://github.com/adap/flower/pull/1347))" -#: ../../source/ref-changelog.md:517 +#: ../../source/ref-changelog.md:583 msgid "" "The code examples covering scikit-learn and PyTorch Lightning have been " "updated to work with the latest version of Flower." 
-msgstr "" -"涵盖 scikit-learn 和 PyTorch Lightning 的代码示例已更新,以便与最新版本的 " -"Flower 配合使用。" - -#: ../../source/ref-changelog.md:519 -msgid "" -"**Updated documentation** ([#1355](https://github.com/adap/flower/" -"pull/1355), [#1558](https://github.com/adap/flower/pull/1558), [#1379]" -"(https://github.com/adap/flower/pull/1379), [#1380](https://github.com/adap/" -"flower/pull/1380), [#1381](https://github.com/adap/flower/pull/1381), [#1332]" -"(https://github.com/adap/flower/pull/1332), [#1391](https://github.com/adap/" -"flower/pull/1391), [#1403](https://github.com/adap/flower/pull/1403), [#1364]" -"(https://github.com/adap/flower/pull/1364), [#1409](https://github.com/adap/" -"flower/pull/1409), [#1419](https://github.com/adap/flower/pull/1419), [#1444]" -"(https://github.com/adap/flower/pull/1444), [#1448](https://github.com/adap/" -"flower/pull/1448), [#1417](https://github.com/adap/flower/pull/1417), [#1449]" -"(https://github.com/adap/flower/pull/1449), [#1465](https://github.com/adap/" -"flower/pull/1465), [#1467](https://github.com/adap/flower/pull/1467))" -msgstr "" -"**更新文档** ([#1355](https://github.com/adap/flower/pull/1355), [#1558]" -"(https://github.com/adap/flower/pull/1558), [#1379](https://github.com/adap/" -"flower/pull/1379), [#1380](https://github.com/adap/flower/pull/1380), [#1381]" -"(https://github.com/adap/flower/pull/1381), [#1332](https://github.com/adap/" -"flower/pull/1332), [#1391](https://github.com/adap/flower/pull/1391), [#1403]" -"(https://github.com/adap/flower/pull/1403), [#1364](https://github. com/adap/" -"flower/pull/1364), [#1409](https://github.com/adap/flower/pull/1409), [#1419]" -"(https://github.com/adap/flower/pull/1419), [#1444](https://github.com/adap/" -"flower/pull/1444), [#1448](https://github.com/adap/flower/pull/1448), [#1417]" -"(https://github.com/adap/flower/pull/1417), [#1449](https://github.com/adap/" -"flower/pull/1449), [#1465](https://github.com/adap/flower/pull/1465), [#1467]" -"(https://github.com/adap/flower/pull/1467))" - -#: ../../source/ref-changelog.md:521 +msgstr "涵盖 scikit-learn 和 PyTorch Lightning 的代码示例已更新,以便与最新版本的 Flower 配合使用。" + +#: ../../source/ref-changelog.md:585 +msgid "" +"**Updated documentation** " +"([#1355](https://github.com/adap/flower/pull/1355), " +"[#1558](https://github.com/adap/flower/pull/1558), " +"[#1379](https://github.com/adap/flower/pull/1379), " +"[#1380](https://github.com/adap/flower/pull/1380), " +"[#1381](https://github.com/adap/flower/pull/1381), " +"[#1332](https://github.com/adap/flower/pull/1332), " +"[#1391](https://github.com/adap/flower/pull/1391), " +"[#1403](https://github.com/adap/flower/pull/1403), " +"[#1364](https://github.com/adap/flower/pull/1364), " +"[#1409](https://github.com/adap/flower/pull/1409), " +"[#1419](https://github.com/adap/flower/pull/1419), " +"[#1444](https://github.com/adap/flower/pull/1444), " +"[#1448](https://github.com/adap/flower/pull/1448), " +"[#1417](https://github.com/adap/flower/pull/1417), " +"[#1449](https://github.com/adap/flower/pull/1449), " +"[#1465](https://github.com/adap/flower/pull/1465), " +"[#1467](https://github.com/adap/flower/pull/1467))" +msgstr "" +"**更新文档** ([#1355](https://github.com/adap/flower/pull/1355), " +"[#1558](https://github.com/adap/flower/pull/1558), " +"[#1379](https://github.com/adap/flower/pull/1379), " +"[#1380](https://github.com/adap/flower/pull/1380), " +"[#1381](https://github.com/adap/flower/pull/1381), " +"[#1332](https://github.com/adap/flower/pull/1332), " +"[#1391](https://github.com/adap/flower/pull/1391), " 
+"[#1403](https://github.com/adap/flower/pull/1403), " +"[#1364](https://github. com/adap/flower/pull/1364), " +"[#1409](https://github.com/adap/flower/pull/1409), " +"[#1419](https://github.com/adap/flower/pull/1419), " +"[#1444](https://github.com/adap/flower/pull/1444), " +"[#1448](https://github.com/adap/flower/pull/1448), " +"[#1417](https://github.com/adap/flower/pull/1417), " +"[#1449](https://github.com/adap/flower/pull/1449), " +"[#1465](https://github.com/adap/flower/pull/1465), " +"[#1467](https://github.com/adap/flower/pull/1467))" + +#: ../../source/ref-changelog.md:587 msgid "" "There have been so many documentation updates that it doesn't even make " "sense to list them individually." msgstr "文档更新的数量之多,甚至没有必要逐一列出。" -#: ../../source/ref-changelog.md:523 +#: ../../source/ref-changelog.md:589 msgid "" -"**Restructured documentation** ([#1387](https://github.com/adap/flower/" -"pull/1387))" +"**Restructured documentation** " +"([#1387](https://github.com/adap/flower/pull/1387))" msgstr "**重构文档**([#1387](https://github.com/adap/flower/pull/1387))" -#: ../../source/ref-changelog.md:525 +#: ../../source/ref-changelog.md:591 msgid "" -"The documentation has been restructured to make it easier to navigate. This " -"is just the first step in a larger effort to make the Flower documentation " -"the best documentation of any project ever. Stay tuned!" -msgstr "" -"我们对文档进行了重组,使其更易于浏览。这只是让 Flower 文档成为所有项目中最好" -"文档的第一步。敬请期待!" +"The documentation has been restructured to make it easier to navigate. " +"This is just the first step in a larger effort to make the Flower " +"documentation the best documentation of any project ever. Stay tuned!" +msgstr "我们对文档进行了重组,使其更易于浏览。这只是让 Flower 文档成为所有项目中最好文档的第一步。敬请期待!" -#: ../../source/ref-changelog.md:527 +#: ../../source/ref-changelog.md:593 msgid "" -"**Open in Colab button** ([#1389](https://github.com/adap/flower/pull/1389))" -msgstr "" -"**在 Colab 中打开按钮** ([#1389](https://github.com/adap/flower/pull/1389))" +"**Open in Colab button** " +"([#1389](https://github.com/adap/flower/pull/1389))" +msgstr "**在 Colab 中打开按钮** ([#1389](https://github.com/adap/flower/pull/1389))" -#: ../../source/ref-changelog.md:529 +#: ../../source/ref-changelog.md:595 msgid "" -"The four parts of the Flower Federated Learning Tutorial now come with a new " -"`Open in Colab` button. No need to install anything on your local machine, " -"you can now use and learn about Flower in your browser, it's only a single " -"click away." +"The four parts of the Flower Federated Learning Tutorial now come with a " +"new `Open in Colab` button. No need to install anything on your local " +"machine, you can now use and learn about Flower in your browser, it's " +"only a single click away." msgstr "" -"Flower 联邦学习教程的四个部分现在都带有一个新的 \"在 Colab 中打开 \"按钮。现" -"在,您无需在本地计算机上安装任何软件,只需点击一下,就可以在浏览器中使用和学" -"习 Flower。" +"Flower 联邦学习教程的四个部分现在都带有一个新的 \"在 Colab 中打开 " +"\"按钮。现在,您无需在本地计算机上安装任何软件,只需点击一下,就可以在浏览器中使用和学习 Flower。" -#: ../../source/ref-changelog.md:531 +#: ../../source/ref-changelog.md:597 msgid "" -"**Improved tutorial** ([#1468](https://github.com/adap/flower/pull/1468), " -"[#1470](https://github.com/adap/flower/pull/1470), [#1472](https://github." 
-"com/adap/flower/pull/1472), [#1473](https://github.com/adap/flower/" -"pull/1473), [#1474](https://github.com/adap/flower/pull/1474), [#1475]" -"(https://github.com/adap/flower/pull/1475))" +"**Improved tutorial** ([#1468](https://github.com/adap/flower/pull/1468)," +" [#1470](https://github.com/adap/flower/pull/1470), " +"[#1472](https://github.com/adap/flower/pull/1472), " +"[#1473](https://github.com/adap/flower/pull/1473), " +"[#1474](https://github.com/adap/flower/pull/1474), " +"[#1475](https://github.com/adap/flower/pull/1475))" msgstr "" -"**改进教程** ([#1468](https://github.com/adap/flower/pull/1468), [#1470]" -"(https://github.com/adap/flower/pull/1470), [#1472](https://github.com/adap/" -"flower/pull/1472), [#1473](https://github.com/adap/flower/pull/1473), [#1474]" -"(https://github.com/adap/flower/pull/1474), [#1475](https://github.com/adap/" -"flower/pull/1475)))" +"**改进教程** ([#1468](https://github.com/adap/flower/pull/1468), " +"[#1470](https://github.com/adap/flower/pull/1470), " +"[#1472](https://github.com/adap/flower/pull/1472), " +"[#1473](https://github.com/adap/flower/pull/1473), " +"[#1474](https://github.com/adap/flower/pull/1474), " +"[#1475](https://github.com/adap/flower/pull/1475)))" -#: ../../source/ref-changelog.md:533 +#: ../../source/ref-changelog.md:599 msgid "" "The Flower Federated Learning Tutorial has two brand-new parts covering " "custom strategies (still WIP) and the distinction between `Client` and " -"`NumPyClient`. The existing parts one and two have also been improved (many " -"small changes and fixes)." +"`NumPyClient`. The existing parts one and two have also been improved " +"(many small changes and fixes)." msgstr "" -"Flower 联邦学习教程有两个全新的部分,涉及自定义策略(仍处于 WIP 阶段)和 " -"`Client` 与 `NumPyClient` 之间的区别。现有的第一和第二部分也得到了改进(许多" -"小改动和修正)。" +"Flower 联邦学习教程有两个全新的部分,涉及自定义策略(仍处于 WIP 阶段)和 `Client` 与 `NumPyClient` " +"之间的区别。现有的第一和第二部分也得到了改进(许多小改动和修正)。" -#: ../../source/ref-changelog.md:539 +#: ../../source/ref-changelog.md:605 msgid "v1.0.0 (2022-07-28)" msgstr "v1.0.0 (2022-07-28)" -#: ../../source/ref-changelog.md:541 +#: ../../source/ref-changelog.md:607 msgid "Highlights" msgstr "亮点" -#: ../../source/ref-changelog.md:543 +#: ../../source/ref-changelog.md:609 msgid "Stable **Virtual Client Engine** (accessible via `start_simulation`)" msgstr "稳定的**虚拟客户端引擎**(可通过`start_simulation`访问)" -#: ../../source/ref-changelog.md:544 +#: ../../source/ref-changelog.md:610 msgid "All `Client`/`NumPyClient` methods are now optional" msgstr "所有 `Client`/`NumPyClient` 方法现在都是可选的了" -#: ../../source/ref-changelog.md:545 +#: ../../source/ref-changelog.md:611 msgid "Configurable `get_parameters`" msgstr "可配置的`get_parameters`" -#: ../../source/ref-changelog.md:546 +#: ../../source/ref-changelog.md:612 msgid "" -"Tons of small API cleanups resulting in a more coherent developer experience" +"Tons of small API cleanups resulting in a more coherent developer " +"experience" msgstr "对大量小型应用程序接口进行了清理,使开发人员的体验更加一致" -#: ../../source/ref-changelog.md:550 +#: ../../source/ref-changelog.md:616 msgid "" "We would like to give our **special thanks** to all the contributors who " -"made Flower 1.0 possible (in reverse [GitHub Contributors](https://github." 
-"com/adap/flower/graphs/contributors) order):" -msgstr "" -"在此,我们谨向所有促成 Flower 1.0 的贡献者致以**特别的谢意(按[GitHub 贡献者]" -"(https://github.com/adap/flower/graphs/contributors) 倒序排列):" - -#: ../../source/ref-changelog.md:552 -msgid "" -"[@rtaiello](https://github.com/rtaiello), [@g-pichler](https://github.com/g-" -"pichler), [@rob-luke](https://github.com/rob-luke), [@andreea-zaharia]" -"(https://github.com/andreea-zaharia), [@kinshukdua](https://github.com/" -"kinshukdua), [@nfnt](https://github.com/nfnt), [@tatiana-s](https://github." -"com/tatiana-s), [@TParcollet](https://github.com/TParcollet), [@vballoli]" -"(https://github.com/vballoli), [@negedng](https://github.com/negedng), " -"[@RISHIKESHAVAN](https://github.com/RISHIKESHAVAN), [@hei411](https://github." -"com/hei411), [@SebastianSpeitel](https://github.com/SebastianSpeitel), " -"[@AmitChaulwar](https://github.com/AmitChaulwar), [@Rubiel1](https://github." -"com/Rubiel1), [@FANTOME-PAN](https://github.com/FANTOME-PAN), [@Rono-BC]" -"(https://github.com/Rono-BC), [@lbhm](https://github.com/lbhm), [@sishtiaq]" -"(https://github.com/sishtiaq), [@remde](https://github.com/remde), [@Jueun-" -"Park](https://github.com/Jueun-Park), [@architjen](https://github.com/" -"architjen), [@PratikGarai](https://github.com/PratikGarai), [@mrinaald]" -"(https://github.com/mrinaald), [@zliel](https://github.com/zliel), " -"[@MeiruiJiang](https://github.com/MeiruiJiang), [@sancarlim](https://github." -"com/sancarlim), [@gubertoli](https://github.com/gubertoli), [@Vingt100]" -"(https://github.com/Vingt100), [@MakGulati](https://github.com/MakGulati), " -"[@cozek](https://github.com/cozek), [@jafermarq](https://github.com/" -"jafermarq), [@sisco0](https://github.com/sisco0), [@akhilmathurs](https://" -"github.com/akhilmathurs), [@CanTuerk](https://github.com/CanTuerk), " -"[@mariaboerner1987](https://github.com/mariaboerner1987), [@pedropgusmao]" -"(https://github.com/pedropgusmao), [@tanertopal](https://github.com/" -"tanertopal), [@danieljanes](https://github.com/danieljanes)." -msgstr "" -"[@rtaiello](https://github.com/rtaiello), [@g-pichler](https://github.com/g-" -"pichler), [@rob-luke](https://github.com/rob-luke), [@andreea-zaharia]" -"(https://github.com/andreea-zaharia), [@kinshukdua](https://github.com/" -"kinshukdua), [@nfnt](https://github.com/nfnt), [@tatiana-s](https://github." -"com/tatiana-s), [@TParcollet](https://github.com/TParcollet), [@vballoli]" -"(https://github.com/vballoli), [@negedng](https://github.com/negedng), " -"[@RISHIKESHAVAN](https://github.com/RISHIKESHAVAN), [@hei411](https://github." -"com/hei411), [@SebastianSpeitel](https://github.com/SebastianSpeitel), " -"[@AmitChaulwar](https://github.com/AmitChaulwar), [@Rubiel1](https://github." -"com/Rubiel1), [@FANTOME-PAN](https://github.com/FANTOME-PAN), [@Rono-BC]" -"(https://github.com/Rono-BC), [@lbhm](https://github.com/lbhm), [@sishtiaq]" -"(https://github.com/sishtiaq), [@remde](https://github.com/remde), [@Jueun-" -"Park](https://github.com/Jueun-Park), [@architjen](https://github.com/" -"architjen), [@PratikGarai](https://github.com/PratikGarai), [@mrinaald]" -"(https://github.com/mrinaald), [@zliel](https://github.com/zliel), " -"[@MeiruiJiang](https://github.com/MeiruiJiang), [@sancarlim](https://github." 
-"com/sancarlim), [@gubertoli](https://github.com/gubertoli), [@Vingt100]" -"(https://github.com/Vingt100), [@MakGulati](https://github.com/MakGulati), " -"[@cozek](https://github.com/cozek), [@jafermarq](https://github.com/" -"jafermarq), [@sisco0](https://github.com/sisco0), [@akhilmathurs](https://" -"github.com/akhilmathurs), [@CanTuerk](https://github.com/CanTuerk), " -"[@mariaboerner1987](https://github.com/mariaboerner1987), [@pedropgusmao]" -"(https://github.com/pedropgusmao), [@tanertopal](https://github.com/" -"tanertopal), [@danieljanes](https://github.com/danieljanes)." - -#: ../../source/ref-changelog.md:556 -msgid "" -"**All arguments must be passed as keyword arguments** ([#1338](https://" -"github.com/adap/flower/pull/1338))" -msgstr "" -"** 所有参数必须作为关键字参数传递** ([#1338](https://github.com/adap/flower/" -"pull/1338))" - -#: ../../source/ref-changelog.md:558 -msgid "" -"Pass all arguments as keyword arguments, positional arguments are not longer " -"supported. Code that uses positional arguments (e.g., " -"`start_client(\"127.0.0.1:8080\", FlowerClient())`) must add the keyword for " -"each positional argument (e.g., " -"`start_client(server_address=\"127.0.0.1:8080\", client=FlowerClient())`)." -msgstr "" -"以关键字参数传递所有参数,不再支持位置参数。使用位置参数的代码(例如," -"`start_client(\"127.0.0.1:8080\", FlowerClient())`)必须为每个位置参数添加关" -"键字(例如,`start_client(server_address=\"127.0.0.1:8080\", " -"client=FlowerClient())`)。" - -#: ../../source/ref-changelog.md:560 +"made Flower 1.0 possible (in reverse [GitHub " +"Contributors](https://github.com/adap/flower/graphs/contributors) order):" +msgstr "" +"在此,我们谨向所有促成 Flower 1.0 的贡献者致以**特别的谢意(按[GitHub " +"贡献者](https://github.com/adap/flower/graphs/contributors) 倒序排列):" + +#: ../../source/ref-changelog.md:618 +msgid "" +"[@rtaiello](https://github.com/rtaiello), " +"[@g-pichler](https://github.com/g-pichler), [@rob-" +"luke](https://github.com/rob-luke), [@andreea-zaharia](https://github.com" +"/andreea-zaharia), [@kinshukdua](https://github.com/kinshukdua), " +"[@nfnt](https://github.com/nfnt), " +"[@tatiana-s](https://github.com/tatiana-s), " +"[@TParcollet](https://github.com/TParcollet), " +"[@vballoli](https://github.com/vballoli), " +"[@negedng](https://github.com/negedng), " +"[@RISHIKESHAVAN](https://github.com/RISHIKESHAVAN), " +"[@hei411](https://github.com/hei411), " +"[@SebastianSpeitel](https://github.com/SebastianSpeitel), " +"[@AmitChaulwar](https://github.com/AmitChaulwar), " +"[@Rubiel1](https://github.com/Rubiel1), [@FANTOME-PAN](https://github.com" +"/FANTOME-PAN), [@Rono-BC](https://github.com/Rono-BC), " +"[@lbhm](https://github.com/lbhm), " +"[@sishtiaq](https://github.com/sishtiaq), " +"[@remde](https://github.com/remde), [@Jueun-Park](https://github.com" +"/Jueun-Park), [@architjen](https://github.com/architjen), " +"[@PratikGarai](https://github.com/PratikGarai), " +"[@mrinaald](https://github.com/mrinaald), " +"[@zliel](https://github.com/zliel), " +"[@MeiruiJiang](https://github.com/MeiruiJiang), " +"[@sancarlim](https://github.com/sancarlim), " +"[@gubertoli](https://github.com/gubertoli), " +"[@Vingt100](https://github.com/Vingt100), " +"[@MakGulati](https://github.com/MakGulati), " +"[@cozek](https://github.com/cozek), " +"[@jafermarq](https://github.com/jafermarq), " +"[@sisco0](https://github.com/sisco0), " +"[@akhilmathurs](https://github.com/akhilmathurs), " +"[@CanTuerk](https://github.com/CanTuerk), " +"[@mariaboerner1987](https://github.com/mariaboerner1987), " +"[@pedropgusmao](https://github.com/pedropgusmao), " 
+"[@tanertopal](https://github.com/tanertopal), " +"[@danieljanes](https://github.com/danieljanes)." +msgstr "" +"[@rtaiello](https://github.com/rtaiello), " +"[@g-pichler](https://github.com/g-pichler), [@rob-" +"luke](https://github.com/rob-luke), [@andreea-zaharia](https://github.com" +"/andreea-zaharia), [@kinshukdua](https://github.com/kinshukdua), " +"[@nfnt](https://github.com/nfnt), " +"[@tatiana-s](https://github.com/tatiana-s), " +"[@TParcollet](https://github.com/TParcollet), " +"[@vballoli](https://github.com/vballoli), " +"[@negedng](https://github.com/negedng), " +"[@RISHIKESHAVAN](https://github.com/RISHIKESHAVAN), " +"[@hei411](https://github.com/hei411), " +"[@SebastianSpeitel](https://github.com/SebastianSpeitel), " +"[@AmitChaulwar](https://github.com/AmitChaulwar), " +"[@Rubiel1](https://github.com/Rubiel1), [@FANTOME-PAN](https://github.com" +"/FANTOME-PAN), [@Rono-BC](https://github.com/Rono-BC), " +"[@lbhm](https://github.com/lbhm), " +"[@sishtiaq](https://github.com/sishtiaq), " +"[@remde](https://github.com/remde), [@Jueun-Park](https://github.com" +"/Jueun-Park), [@architjen](https://github.com/architjen), " +"[@PratikGarai](https://github.com/PratikGarai), " +"[@mrinaald](https://github.com/mrinaald), " +"[@zliel](https://github.com/zliel), " +"[@MeiruiJiang](https://github.com/MeiruiJiang), " +"[@sancarlim](https://github.com/sancarlim), " +"[@gubertoli](https://github.com/gubertoli), " +"[@Vingt100](https://github.com/Vingt100), " +"[@MakGulati](https://github.com/MakGulati), " +"[@cozek](https://github.com/cozek), " +"[@jafermarq](https://github.com/jafermarq), " +"[@sisco0](https://github.com/sisco0), " +"[@akhilmathurs](https://github.com/akhilmathurs), " +"[@CanTuerk](https://github.com/CanTuerk), " +"[@mariaboerner1987](https://github.com/mariaboerner1987), " +"[@pedropgusmao](https://github.com/pedropgusmao), " +"[@tanertopal](https://github.com/tanertopal), " +"[@danieljanes](https://github.com/danieljanes)." + +#: ../../source/ref-changelog.md:622 +msgid "" +"**All arguments must be passed as keyword arguments** " +"([#1338](https://github.com/adap/flower/pull/1338))" +msgstr "** 所有参数必须作为关键字参数传递** ([#1338](https://github.com/adap/flower/pull/1338))" + +#: ../../source/ref-changelog.md:624 +msgid "" +"Pass all arguments as keyword arguments, positional arguments are not " +"longer supported. Code that uses positional arguments (e.g., " +"`start_client(\"127.0.0.1:8080\", FlowerClient())`) must add the keyword " +"for each positional argument (e.g., " +"`start_client(server_address=\"127.0.0.1:8080\", " +"client=FlowerClient())`)." +msgstr "" +"以关键字参数传递所有参数,不再支持位置参数。使用位置参数的代码(例如,`start_client(\"127.0.0.1:8080\", " +"FlowerClient())`)必须为每个位置参数添加关键字(例如,`start_client(server_address=\"127.0.0.1:8080\"," +" client=FlowerClient())`)。" + +#: ../../source/ref-changelog.md:626 msgid "" "**Introduce configuration object** `ServerConfig` **in** `start_server` " -"**and** `start_simulation` ([#1317](https://github.com/adap/flower/" -"pull/1317))" +"**and** `start_simulation` " +"([#1317](https://github.com/adap/flower/pull/1317))" msgstr "" "**在*** `start_server` ***和*** `start_simulation` 中引入配置对象*** " "`ServerConfig` ([#1317](https://github.com/adap/flower/pull/1317))" -#: ../../source/ref-changelog.md:562 +#: ../../source/ref-changelog.md:628 msgid "" -"Instead of a config dictionary `{\"num_rounds\": 3, \"round_timeout\": 600.0}" -"`, `start_server` and `start_simulation` now expect a configuration object " -"of type `flwr.server.ServerConfig`. 
`ServerConfig` takes the same arguments " -"that as the previous config dict, but it makes writing type-safe code easier " -"and the default parameters values more transparent." +"Instead of a config dictionary `{\"num_rounds\": 3, \"round_timeout\": " +"600.0}`, `start_server` and `start_simulation` now expect a configuration" +" object of type `flwr.server.ServerConfig`. `ServerConfig` takes the same" +" arguments that as the previous config dict, but it makes writing type-" +"safe code easier and the default parameters values more transparent." msgstr "" -"并非配置字典`{\"num_rounds\": 3, \"round_timeout\": 600.0}`, `start_server`" -"和 `start_simulation`现在用一个类型为 `flwr.server.ServerConfig`的配置对象。" -"`ServerConfig`接收的参数与之前的 config dict 相同,但它使编写类型安全代码变得" -"更容易,默认参数值也更加透明。" +"并非配置字典`{\"num_rounds\": 3, \"round_timeout\": 600.0}`, `start_server`和 " +"`start_simulation`现在用一个类型为 " +"`flwr.server.ServerConfig`的配置对象。`ServerConfig`接收的参数与之前的 config dict " +"相同,但它使编写类型安全代码变得更容易,默认参数值也更加透明。" -#: ../../source/ref-changelog.md:564 +#: ../../source/ref-changelog.md:630 msgid "" -"**Rename built-in strategy parameters for clarity** ([#1334](https://github." -"com/adap/flower/pull/1334))" -msgstr "" -"**重新命名内置策略参数,使其更加清晰** ([#1334](https://github.com/adap/" -"flower/pull/1334))" +"**Rename built-in strategy parameters for clarity** " +"([#1334](https://github.com/adap/flower/pull/1334))" +msgstr "**重新命名内置策略参数,使其更加清晰** ([#1334](https://github.com/adap/flower/pull/1334))" -#: ../../source/ref-changelog.md:566 +#: ../../source/ref-changelog.md:632 msgid "" "The following built-in strategy parameters were renamed to improve " "readability and consistency with other API's:" msgstr "以下内置策略参数已重新命名,以提高可读性并与其他 API 保持一致:" -#: ../../source/ref-changelog.md:568 +#: ../../source/ref-changelog.md:634 msgid "`fraction_eval` --> `fraction_evaluate`" msgstr "`fraction_eval` --> `fraction_evaluate`" -#: ../../source/ref-changelog.md:569 +#: ../../source/ref-changelog.md:635 msgid "`min_eval_clients` --> `min_evaluate_clients`" msgstr "`min_eval_clients` --> `min_evaluate_clients`" -#: ../../source/ref-changelog.md:570 +#: ../../source/ref-changelog.md:636 msgid "`eval_fn` --> `evaluate_fn`" msgstr "`eval_fn` --> `evaluate_fn`" -#: ../../source/ref-changelog.md:572 +#: ../../source/ref-changelog.md:638 msgid "" -"**Update default arguments of built-in strategies** ([#1278](https://github." -"com/adap/flower/pull/1278))" -msgstr "" -"**更新内置策略的默认参数** ([#1278](https://github.com/adap/flower/" -"pull/1278))" +"**Update default arguments of built-in strategies** " +"([#1278](https://github.com/adap/flower/pull/1278))" +msgstr "**更新内置策略的默认参数** ([#1278](https://github.com/adap/flower/pull/1278))" -#: ../../source/ref-changelog.md:574 +#: ../../source/ref-changelog.md:640 msgid "" "All built-in strategies now use `fraction_fit=1.0` and " -"`fraction_evaluate=1.0`, which means they select *all* currently available " -"clients for training and evaluation. Projects that relied on the previous " -"default values can get the previous behaviour by initializing the strategy " -"in the following way:" +"`fraction_evaluate=1.0`, which means they select *all* currently " +"available clients for training and evaluation. 
Projects that relied on " +"the previous default values can get the previous behaviour by " +"initializing the strategy in the following way:" msgstr "" -"所有内置策略现在都使用 \"fraction_fit=1.0 \"和 \"fraction_evaluate=1.0\",这" -"意味着它们会选择*所有*当前可用的客户端进行训练和评估。依赖以前默认值的项目可" -"以通过以下方式初始化策略,获得以前的行为:" +"所有内置策略现在都使用 \"fraction_fit=1.0 \"和 " +"\"fraction_evaluate=1.0\",这意味着它们会选择*所有*当前可用的客户端进行训练和评估。依赖以前默认值的项目可以通过以下方式初始化策略,获得以前的行为:" -#: ../../source/ref-changelog.md:576 +#: ../../source/ref-changelog.md:642 msgid "`strategy = FedAvg(fraction_fit=0.1, fraction_evaluate=0.1)`" msgstr "`strategy = FedAvg(fraction_fit=0.1, fraction_evaluate=0.1)`" -#: ../../source/ref-changelog.md:578 +#: ../../source/ref-changelog.md:644 msgid "" -"**Add** `server_round` **to** `Strategy.evaluate` ([#1334](https://github." -"com/adap/flower/pull/1334))" +"**Add** `server_round` **to** `Strategy.evaluate` " +"([#1334](https://github.com/adap/flower/pull/1334))" msgstr "" -"**添加*** `server_round` ***到*** `Strategy.evaluate` ([#1334](https://" -"github.com/adap/flower/pull/1334))" +"**添加*** `server_round` ***到*** `Strategy.evaluate` " +"([#1334](https://github.com/adap/flower/pull/1334))" -#: ../../source/ref-changelog.md:580 +#: ../../source/ref-changelog.md:646 msgid "" -"The `Strategy` method `evaluate` now receives the current round of federated " -"learning/evaluation as the first parameter." -msgstr "" -"`Strategy`的`evaluate` 方法现在会接收当前一轮联邦学习/评估作为第一个参数。" +"The `Strategy` method `evaluate` now receives the current round of " +"federated learning/evaluation as the first parameter." +msgstr "`Strategy`的`evaluate` 方法现在会接收当前一轮联邦学习/评估作为第一个参数。" -#: ../../source/ref-changelog.md:582 +#: ../../source/ref-changelog.md:648 msgid "" "**Add** `server_round` **and** `config` **parameters to** `evaluate_fn` " "([#1334](https://github.com/adap/flower/pull/1334))" @@ -17753,106 +19262,103 @@ msgstr "" "**将*** `server_round` **和*** `config` **参数添加到*** `evaluate_fn` " "([#1334](https://github.com/adap/flower/pull/1334))" -#: ../../source/ref-changelog.md:584 +#: ../../source/ref-changelog.md:650 msgid "" "The `evaluate_fn` passed to built-in strategies like `FedAvg` now takes " "three parameters: (1) The current round of federated learning/evaluation " -"(`server_round`), (2) the model parameters to evaluate (`parameters`), and " -"(3) a config dictionary (`config`)." +"(`server_round`), (2) the model parameters to evaluate (`parameters`), " +"and (3) a config dictionary (`config`)." msgstr "" -"传递给内置策略(如 `FedAvg`)的 `evaluate_fn` 现在需要三个参数:(1) 当前一轮" -"联邦学习/评估 (`server_round`),(2) 要评估的模型参数 (`parameters`),(3) 配置" -"字典 (`config`)。" +"传递给内置策略(如 `FedAvg`)的 `evaluate_fn` 现在需要三个参数:(1) 当前一轮联邦学习/评估 " +"(`server_round`),(2) 要评估的模型参数 (`parameters`),(3) 配置字典 (`config`)。" -#: ../../source/ref-changelog.md:586 +#: ../../source/ref-changelog.md:652 msgid "" -"**Rename** `rnd` **to** `server_round` ([#1321](https://github.com/adap/" -"flower/pull/1321))" +"**Rename** `rnd` **to** `server_round` " +"([#1321](https://github.com/adap/flower/pull/1321))" msgstr "" -"**重新命名** `rnd` ** to** `server_round` ([#1321](https://github.com/adap/" -"flower/pull/1321))" +"**重新命名** `rnd` ** to** `server_round` " +"([#1321](https://github.com/adap/flower/pull/1321))" -#: ../../source/ref-changelog.md:588 +#: ../../source/ref-changelog.md:654 msgid "" "Several Flower methods and functions (`evaluate_fn`, `configure_fit`, " "`aggregate_fit`, `configure_evaluate`, `aggregate_evaluate`) receive the " -"current round of federated learning/evaluation as their first parameter. 
To " -"improve reaability and avoid confusion with *random*, this parameter has " -"been renamed from `rnd` to `server_round`." +"current round of federated learning/evaluation as their first parameter. " +"To improve reaability and avoid confusion with *random*, this parameter " +"has been renamed from `rnd` to `server_round`." msgstr "" -"几个 Flower 方法和函数(`evaluate_fn`、`configure_fit`、`aggregate_fit`、" -"`configure_evaluate`、`aggregate_evaluate`)的第一个参数是当前一轮的联邦学习/" -"评估。为提高可重复性并避免与 *random* 混淆,该参数已从 `rnd` 更名为 " -"`server_round`。" +"几个 Flower " +"方法和函数(`evaluate_fn`、`configure_fit`、`aggregate_fit`、`configure_evaluate`、`aggregate_evaluate`)的第一个参数是当前一轮的联邦学习/评估。为提高可重复性并避免与" +" *random* 混淆,该参数已从 `rnd` 更名为 `server_round`。" -#: ../../source/ref-changelog.md:590 +#: ../../source/ref-changelog.md:656 msgid "" -"**Move** `flwr.dataset` **to** `flwr_baselines` ([#1273](https://github.com/" -"adap/flower/pull/1273))" +"**Move** `flwr.dataset` **to** `flwr_baselines` " +"([#1273](https://github.com/adap/flower/pull/1273))" msgstr "" -"**移动*** `flwr.dataset` **到*** `flwr_baselines` ([#1273](https://github." -"com/adap/flower/pull/1273))" +"**移动*** `flwr.dataset` **到*** `flwr_baselines` " +"([#1273](https://github.com/adap/flower/pull/1273))" -#: ../../source/ref-changelog.md:592 -msgid "" -"The experimental package `flwr.dataset` was migrated to Flower Baselines." +#: ../../source/ref-changelog.md:658 +msgid "The experimental package `flwr.dataset` was migrated to Flower Baselines." msgstr "实验软件包 `flwr.dataset` 已迁移至 Flower Baselines。" -#: ../../source/ref-changelog.md:594 +#: ../../source/ref-changelog.md:660 msgid "" -"**Remove experimental strategies** ([#1280](https://github.com/adap/flower/" -"pull/1280))" +"**Remove experimental strategies** " +"([#1280](https://github.com/adap/flower/pull/1280))" msgstr "**删除实验策略** ([#1280](https://github.com/adap/flower/pull/1280))" -#: ../../source/ref-changelog.md:596 +#: ../../source/ref-changelog.md:662 msgid "" "Remove unmaintained experimental strategies (`FastAndSlow`, `FedFSv0`, " "`FedFSv1`)." msgstr "移除未维护的试验性策略(`FastAndSlow`、`FedFSv0`、`FedFSv1`)。" -#: ../../source/ref-changelog.md:598 +#: ../../source/ref-changelog.md:664 msgid "" -"**Rename** `Weights` **to** `NDArrays` ([#1258](https://github.com/adap/" -"flower/pull/1258), [#1259](https://github.com/adap/flower/pull/1259))" +"**Rename** `Weights` **to** `NDArrays` " +"([#1258](https://github.com/adap/flower/pull/1258), " +"[#1259](https://github.com/adap/flower/pull/1259))" msgstr "" -"**重新命名** `Weights` **到** `NDArrays` ([#1258](https://github.com/adap/" -"flower/pull/1258), [#1259](https://github.com/adap/flower/pull/1259))" +"**重新命名** `Weights` **到** `NDArrays` " +"([#1258](https://github.com/adap/flower/pull/1258), " +"[#1259](https://github.com/adap/flower/pull/1259))" -#: ../../source/ref-changelog.md:600 +#: ../../source/ref-changelog.md:666 msgid "" "`flwr.common.Weights` was renamed to `flwr.common.NDArrays` to better " "capture what this type is all about." -msgstr "" -"flwr.common.Weights \"更名为 \"flwr.common.NDArrays\",以更好地反映该类型的含" -"义。" +msgstr "flwr.common.Weights \"更名为 \"flwr.common.NDArrays\",以更好地反映该类型的含义。" -#: ../../source/ref-changelog.md:602 +#: ../../source/ref-changelog.md:668 msgid "" -"**Remove antiquated** `force_final_distributed_eval` **from** `start_server` " -"([#1258](https://github.com/adap/flower/pull/1258), [#1259](https://github." 
-"com/adap/flower/pull/1259))" +"**Remove antiquated** `force_final_distributed_eval` **from** " +"`start_server` ([#1258](https://github.com/adap/flower/pull/1258), " +"[#1259](https://github.com/adap/flower/pull/1259))" msgstr "" -"**从** `start_server` 中移除过时的** `force_final_distributed_eval` ([#1258]" -"(https://github.com/adap/flower/pull/1258), [#1259](https://github.com/adap/" -"flower/pull/1259))" +"**从** `start_server` 中移除过时的** `force_final_distributed_eval` " +"([#1258](https://github.com/adap/flower/pull/1258), " +"[#1259](https://github.com/adap/flower/pull/1259))" -#: ../../source/ref-changelog.md:604 +#: ../../source/ref-changelog.md:670 msgid "" -"The `start_server` parameter `force_final_distributed_eval` has long been a " -"historic artefact, in this release it is finally gone for good." +"The `start_server` parameter `force_final_distributed_eval` has long been" +" a historic artefact, in this release it is finally gone for good." msgstr "" -"start_server \"参数 \"force_final_distributed_eval \"长期以来一直是个历史遗留" -"问题,在此版本中终于永远消失了。" +"start_server \"参数 \"force_final_distributed_eval " +"\"长期以来一直是个历史遗留问题,在此版本中终于永远消失了。" -#: ../../source/ref-changelog.md:606 +#: ../../source/ref-changelog.md:672 msgid "" -"**Make** `get_parameters` **configurable** ([#1242](https://github.com/adap/" -"flower/pull/1242))" +"**Make** `get_parameters` **configurable** " +"([#1242](https://github.com/adap/flower/pull/1242))" msgstr "" -"**使** `get_parameters` **可配置** ([#1242](https://github.com/adap/flower/" -"pull/1242))" +"**使** `get_parameters` **可配置** " +"([#1242](https://github.com/adap/flower/pull/1242))" -#: ../../source/ref-changelog.md:608 +#: ../../source/ref-changelog.md:674 msgid "" "The `get_parameters` method now accepts a configuration dictionary, just " "like `get_properties`, `fit`, and `evaluate`." @@ -17860,7 +19366,7 @@ msgstr "" "现在,\"get_parameters \"方法与 \"get_properties\"、\"fit \"和 \"evaluate " "\"一样,都接受配置字典。" -#: ../../source/ref-changelog.md:610 +#: ../../source/ref-changelog.md:676 msgid "" "**Replace** `num_rounds` **in** `start_simulation` **with new** `config` " "**parameter** ([#1281](https://github.com/adap/flower/pull/1281))" @@ -17868,119 +19374,115 @@ msgstr "" "**用新的** `config` 参数** 替换** `num_rounds` ** in** `start_simulation` ** " "([#1281](https://github.com/adap/flower/pull/1281))" -#: ../../source/ref-changelog.md:612 +#: ../../source/ref-changelog.md:678 msgid "" "The `start_simulation` function now accepts a configuration dictionary " -"`config` instead of the `num_rounds` integer. This improves the consistency " -"between `start_simulation` and `start_server` and makes transitioning " -"between the two easier." +"`config` instead of the `num_rounds` integer. This improves the " +"consistency between `start_simulation` and `start_server` and makes " +"transitioning between the two easier." 
msgstr "" -"现在,`start_simulation`(开始模拟)` 函数接受配置字典 `config` 而不是 " -"`num_rounds` 整数。这改进了 `start_simulation` 和 `start_server` 之间的一致" -"性,并使两者之间的转换更容易。" +"现在,`start_simulation`(开始模拟)` 函数接受配置字典 `config` 而不是 `num_rounds` 整数。这改进了 " +"`start_simulation` 和 `start_server` 之间的一致性,并使两者之间的转换更容易。" -#: ../../source/ref-changelog.md:616 +#: ../../source/ref-changelog.md:682 msgid "" -"**Support Python 3.10** ([#1320](https://github.com/adap/flower/pull/1320))" -msgstr "" -"** 支持 Python 3.10** ([#1320](https://github.com/adap/flower/pull/1320))" +"**Support Python 3.10** " +"([#1320](https://github.com/adap/flower/pull/1320))" +msgstr "** 支持 Python 3.10** ([#1320](https://github.com/adap/flower/pull/1320))" -#: ../../source/ref-changelog.md:618 +#: ../../source/ref-changelog.md:684 msgid "" -"The previous Flower release introduced experimental support for Python 3.10, " -"this release declares Python 3.10 support as stable." -msgstr "" -"上一个 Flower 版本引入了对 Python 3.10 的实验支持,而本版本则宣布对 Python " -"3.10 的支持为稳定支持。" +"The previous Flower release introduced experimental support for Python " +"3.10, this release declares Python 3.10 support as stable." +msgstr "上一个 Flower 版本引入了对 Python 3.10 的实验支持,而本版本则宣布对 Python 3.10 的支持为稳定支持。" -#: ../../source/ref-changelog.md:620 +#: ../../source/ref-changelog.md:686 msgid "" -"**Make all** `Client` **and** `NumPyClient` **methods optional** ([#1260]" -"(https://github.com/adap/flower/pull/1260), [#1277](https://github.com/adap/" -"flower/pull/1277))" +"**Make all** `Client` **and** `NumPyClient` **methods optional** " +"([#1260](https://github.com/adap/flower/pull/1260), " +"[#1277](https://github.com/adap/flower/pull/1277))" msgstr "" -"**使所有** `Client` **和** `NumPyClient` **方法成为可选** ([#1260](https://" -"github.com/adap/flower/pull/1260), [#1277](https://github.com/adap/flower/" -"pull/1277))" +"**使所有** `Client` **和** `NumPyClient` **方法成为可选** " +"([#1260](https://github.com/adap/flower/pull/1260), " +"[#1277](https://github.com/adap/flower/pull/1277))" -#: ../../source/ref-changelog.md:622 +#: ../../source/ref-changelog.md:688 msgid "" "The `Client`/`NumPyClient` methods `get_properties`, `get_parameters`, " -"`fit`, and `evaluate` are all optional. This enables writing clients that " -"implement, for example, only `fit`, but no other method. No need to " +"`fit`, and `evaluate` are all optional. This enables writing clients that" +" implement, for example, only `fit`, but no other method. No need to " "implement `evaluate` when using centralized evaluation!" msgstr "" "`Client`/`NumPyClient`的 \"get_properties\"、\"get_parameters\"、\"fit \"和 " -"\"evaluate \"方法都是可选的。这样就可以编写只实现 `fit` 而不实现其他方法的客" -"户端。使用集中评估时,无需实现 `evaluate`!" +"\"evaluate \"方法都是可选的。这样就可以编写只实现 `fit` 而不实现其他方法的客户端。使用集中评估时,无需实现 " +"`evaluate`!" -#: ../../source/ref-changelog.md:624 +#: ../../source/ref-changelog.md:690 msgid "" -"**Enable passing a** `Server` **instance to** `start_simulation` ([#1281]" -"(https://github.com/adap/flower/pull/1281))" +"**Enable passing a** `Server` **instance to** `start_simulation` " +"([#1281](https://github.com/adap/flower/pull/1281))" msgstr "" -"**启用向** `start_simulation` 传递** `Server` 实例 ([#1281](https://github." -"com/adap/flower/pull/1281))" +"**启用向** `start_simulation` 传递** `Server` 实例 " +"([#1281](https://github.com/adap/flower/pull/1281))" -#: ../../source/ref-changelog.md:626 +#: ../../source/ref-changelog.md:692 msgid "" -"Similar to `start_server`, `start_simulation` now accepts a full `Server` " -"instance. 
This enables users to heavily customize the execution of " -"eperiments and opens the door to running, for example, async FL using the " -"Virtual Client Engine." +"Similar to `start_server`, `start_simulation` now accepts a full `Server`" +" instance. This enables users to heavily customize the execution of " +"eperiments and opens the door to running, for example, async FL using the" +" Virtual Client Engine." msgstr "" -"与 `start_server` 类似,`start_simulation` 现在也接受一个完整的 `Server` 实" -"例。这使得用户可以对实验的执行进行大量自定义,并为使用虚拟客户端引擎运行异步 " -"FL 等打开了大门。" +"与 `start_server` 类似,`start_simulation` 现在也接受一个完整的 `Server` " +"实例。这使得用户可以对实验的执行进行大量自定义,并为使用虚拟客户端引擎运行异步 FL 等打开了大门。" -#: ../../source/ref-changelog.md:628 +#: ../../source/ref-changelog.md:694 msgid "" -"**Update code examples** ([#1291](https://github.com/adap/flower/pull/1291), " -"[#1286](https://github.com/adap/flower/pull/1286), [#1282](https://github." -"com/adap/flower/pull/1282))" +"**Update code examples** " +"([#1291](https://github.com/adap/flower/pull/1291), " +"[#1286](https://github.com/adap/flower/pull/1286), " +"[#1282](https://github.com/adap/flower/pull/1282))" msgstr "" -"**更新代码示例** ([#1291](https://github.com/adap/flower/pull/1291), [#1286]" -"(https://github.com/adap/flower/pull/1286), [#1282](https://github.com/adap/" -"flower/pull/1282))" +"**更新代码示例** ([#1291](https://github.com/adap/flower/pull/1291), " +"[#1286](https://github.com/adap/flower/pull/1286), " +"[#1282](https://github.com/adap/flower/pull/1282))" -#: ../../source/ref-changelog.md:630 +#: ../../source/ref-changelog.md:696 msgid "" -"Many code examples received small or even large maintenance updates, among " -"them are" +"Many code examples received small or even large maintenance updates, " +"among them are" msgstr "许多代码示例都进行了小规模甚至大规模的维护更新,其中包括" -#: ../../source/ref-changelog.md:632 +#: ../../source/ref-changelog.md:698 msgid "`scikit-learn`" msgstr "`scikit-learn`" -#: ../../source/ref-changelog.md:633 +#: ../../source/ref-changelog.md:699 msgid "`simulation_pytorch`" msgstr "`simulation_pytorch`" -#: ../../source/ref-changelog.md:634 +#: ../../source/ref-changelog.md:700 msgid "`quickstart_pytorch`" msgstr "`quickstart_pytorch`" -#: ../../source/ref-changelog.md:635 +#: ../../source/ref-changelog.md:701 msgid "`quickstart_simulation`" msgstr "`quickstart_simulation`" -#: ../../source/ref-changelog.md:636 +#: ../../source/ref-changelog.md:702 msgid "`quickstart_tensorflow`" msgstr "`quickstart_tensorflow`" -#: ../../source/ref-changelog.md:637 +#: ../../source/ref-changelog.md:703 msgid "`advanced_tensorflow`" msgstr "`advanced_tensorflow`" -#: ../../source/ref-changelog.md:639 +#: ../../source/ref-changelog.md:705 msgid "" -"**Remove the obsolete simulation example** ([#1328](https://github.com/adap/" -"flower/pull/1328))" -msgstr "" -"**删除过时的模拟示例** ([#1328](https://github.com/adap/flower/pull/1328))" +"**Remove the obsolete simulation example** " +"([#1328](https://github.com/adap/flower/pull/1328))" +msgstr "**删除过时的模拟示例** ([#1328](https://github.com/adap/flower/pull/1328))" -#: ../../source/ref-changelog.md:641 +#: ../../source/ref-changelog.md:707 msgid "" "Removes the obsolete `simulation` example and renames " "`quickstart_simulation` to `simulation_tensorflow` so it fits withs the " @@ -17989,589 +19491,579 @@ msgstr "" "删除过时的 \"simulation \"示例,并将 \"quickstart_simulation \"重命名为 " "\"simulation_tensorflow\",使其与 \"simulation_pytorch \"的命名一致" -#: ../../source/ref-changelog.md:643 -msgid "" -"**Update documentation** 
([#1223](https://github.com/adap/flower/pull/1223), " -"[#1209](https://github.com/adap/flower/pull/1209), [#1251](https://github." -"com/adap/flower/pull/1251), [#1257](https://github.com/adap/flower/" -"pull/1257), [#1267](https://github.com/adap/flower/pull/1267), [#1268]" -"(https://github.com/adap/flower/pull/1268), [#1300](https://github.com/adap/" -"flower/pull/1300), [#1304](https://github.com/adap/flower/pull/1304), [#1305]" -"(https://github.com/adap/flower/pull/1305), [#1307](https://github.com/adap/" -"flower/pull/1307))" -msgstr "" -"**更新文档** ([#1223](https://github.com/adap/flower/pull/1223), [#1209]" -"(https://github.com/adap/flower/pull/1209), [#1251](https://github.com/adap/" -"flower/pull/1251), [#1257](https://github.com/adap/flower/pull/1257), [#1267]" -"(https://github.com/adap/flower/pull/1267), [#1268](https://github.com/adap/" -"flower/pull/1268), [#1300](https://github.com/adap/flower/pull/1300), [#1304]" -"(https://github.com/adap/flower/pull/1304), [#1305](https://github.com/adap/" -"flower/pull/1305), [#1307](https://github.com/adap/flower/pull/1307))" - -#: ../../source/ref-changelog.md:645 +#: ../../source/ref-changelog.md:709 +msgid "" +"**Update documentation** " +"([#1223](https://github.com/adap/flower/pull/1223), " +"[#1209](https://github.com/adap/flower/pull/1209), " +"[#1251](https://github.com/adap/flower/pull/1251), " +"[#1257](https://github.com/adap/flower/pull/1257), " +"[#1267](https://github.com/adap/flower/pull/1267), " +"[#1268](https://github.com/adap/flower/pull/1268), " +"[#1300](https://github.com/adap/flower/pull/1300), " +"[#1304](https://github.com/adap/flower/pull/1304), " +"[#1305](https://github.com/adap/flower/pull/1305), " +"[#1307](https://github.com/adap/flower/pull/1307))" +msgstr "" +"**更新文档** ([#1223](https://github.com/adap/flower/pull/1223), " +"[#1209](https://github.com/adap/flower/pull/1209), " +"[#1251](https://github.com/adap/flower/pull/1251), " +"[#1257](https://github.com/adap/flower/pull/1257), " +"[#1267](https://github.com/adap/flower/pull/1267), " +"[#1268](https://github.com/adap/flower/pull/1268), " +"[#1300](https://github.com/adap/flower/pull/1300), " +"[#1304](https://github.com/adap/flower/pull/1304), " +"[#1305](https://github.com/adap/flower/pull/1305), " +"[#1307](https://github.com/adap/flower/pull/1307))" + +#: ../../source/ref-changelog.md:711 msgid "" "One substantial documentation update fixes multiple smaller rendering " "issues, makes titles more succinct to improve navigation, removes a " -"deprecated library, updates documentation dependencies, includes the `flwr." -"common` module in the API reference, includes support for markdown-based " -"documentation, migrates the changelog from `.rst` to `.md`, and fixes a " -"number of smaller details!" -msgstr "" -"其中一个实质性的文档更新修复了多个较小的渲染问题,使标题更加简洁以改善导航," -"删除了一个已废弃的库,更新了文档依赖关系,在 API 参考中包含了 `flwr.common` " -"模块,包含了对基于 markdown 的文档的支持,将更新日志从 `.rst` 移植到了 `." -"md`,并修复了一些较小的细节!" - -#: ../../source/ref-changelog.md:647 ../../source/ref-changelog.md:702 -#: ../../source/ref-changelog.md:771 ../../source/ref-changelog.md:810 +"deprecated library, updates documentation dependencies, includes the " +"`flwr.common` module in the API reference, includes support for markdown-" +"based documentation, migrates the changelog from `.rst` to `.md`, and " +"fixes a number of smaller details!" +msgstr "" +"其中一个实质性的文档更新修复了多个较小的渲染问题,使标题更加简洁以改善导航,删除了一个已废弃的库,更新了文档依赖关系,在 API 参考中包含了 " +"`flwr.common` 模块,包含了对基于 markdown 的文档的支持,将更新日志从 `.rst` 移植到了 " +"`.md`,并修复了一些较小的细节!" 
+ +#: ../../source/ref-changelog.md:713 ../../source/ref-changelog.md:768 +#: ../../source/ref-changelog.md:837 ../../source/ref-changelog.md:876 msgid "**Minor updates**" msgstr "**小规模更新**" -#: ../../source/ref-changelog.md:649 +#: ../../source/ref-changelog.md:715 msgid "" -"Add round number to fit and evaluate log messages ([#1266](https://github." -"com/adap/flower/pull/1266))" -msgstr "" -"添加四舍五入数字,以适应和评估日志信息([#1266](https://github.com/adap/" -"flower/pull/1266))" +"Add round number to fit and evaluate log messages " +"([#1266](https://github.com/adap/flower/pull/1266))" +msgstr "添加四舍五入数字,以适应和评估日志信息([#1266](https://github.com/adap/flower/pull/1266))" -#: ../../source/ref-changelog.md:650 +#: ../../source/ref-changelog.md:716 msgid "" -"Add secure gRPC connection to the `advanced_tensorflow` code example ([#847]" -"(https://github.com/adap/flower/pull/847))" +"Add secure gRPC connection to the `advanced_tensorflow` code example " +"([#847](https://github.com/adap/flower/pull/847))" msgstr "" -"为 `advanced_tensorflow` 代码示例添加安全 gRPC 连接 ([#847](https://github." -"com/adap/flower/pull/847))" +"为 `advanced_tensorflow` 代码示例添加安全 gRPC 连接 " +"([#847](https://github.com/adap/flower/pull/847))" -#: ../../source/ref-changelog.md:651 +#: ../../source/ref-changelog.md:717 msgid "" -"Update developer tooling ([#1231](https://github.com/adap/flower/pull/1231), " -"[#1276](https://github.com/adap/flower/pull/1276), [#1301](https://github." -"com/adap/flower/pull/1301), [#1310](https://github.com/adap/flower/" -"pull/1310))" +"Update developer tooling " +"([#1231](https://github.com/adap/flower/pull/1231), " +"[#1276](https://github.com/adap/flower/pull/1276), " +"[#1301](https://github.com/adap/flower/pull/1301), " +"[#1310](https://github.com/adap/flower/pull/1310))" msgstr "" -"更新开发人员工具([#1231](https://github.com/adap/flower/pull/1231), [#1276]" -"(https://github.com/adap/flower/pull/1276), [#1301](https://github.com/adap/" -"flower/pull/1301), [#1310](https://github.com/adap/flower/pull/1310)" +"更新开发人员工具([#1231](https://github.com/adap/flower/pull/1231), " +"[#1276](https://github.com/adap/flower/pull/1276), " +"[#1301](https://github.com/adap/flower/pull/1301), " +"[#1310](https://github.com/adap/flower/pull/1310)" -#: ../../source/ref-changelog.md:652 +#: ../../source/ref-changelog.md:718 msgid "" -"Rename ProtoBuf messages to improve consistency ([#1214](https://github.com/" -"adap/flower/pull/1214), [#1258](https://github.com/adap/flower/pull/1258), " +"Rename ProtoBuf messages to improve consistency " +"([#1214](https://github.com/adap/flower/pull/1214), " +"[#1258](https://github.com/adap/flower/pull/1258), " "[#1259](https://github.com/adap/flower/pull/1259))" msgstr "" -"重命名 ProtoBuf 消息以提高一致性([#1214](https://github.com/adap/flower/" -"pull/1214), [#1258](https://github.com/adap/flower/pull/1258), [#1259]" -"(https://github.com/adap/flower/pull/1259)" +"重命名 ProtoBuf 消息以提高一致性([#1214](https://github.com/adap/flower/pull/1214), " +"[#1258](https://github.com/adap/flower/pull/1258), " +"[#1259](https://github.com/adap/flower/pull/1259)" -#: ../../source/ref-changelog.md:654 +#: ../../source/ref-changelog.md:720 msgid "v0.19.0 (2022-05-18)" msgstr "v0.19.0 (2022-05-18)" -#: ../../source/ref-changelog.md:658 +#: ../../source/ref-changelog.md:724 msgid "" -"**Flower Baselines (preview): FedOpt, FedBN, FedAvgM** ([#919](https://" -"github.com/adap/flower/pull/919), [#1127](https://github.com/adap/flower/" -"pull/1127), [#914](https://github.com/adap/flower/pull/914))" +"**Flower Baselines (preview): 
FedOpt, FedBN, FedAvgM** " +"([#919](https://github.com/adap/flower/pull/919), " +"[#1127](https://github.com/adap/flower/pull/1127), " +"[#914](https://github.com/adap/flower/pull/914))" msgstr "" -"**Flower Baselines(预览): FedOpt、FedBN、FedAvgM** ([#919](https://github." -"com/adap/flower/pull/919), [#1127](https://github.com/adap/flower/" -"pull/1127), [#914](https://github.com/adap/flower/pull/914))" +"**Flower Baselines(预览): FedOpt、FedBN、FedAvgM** " +"([#919](https://github.com/adap/flower/pull/919), " +"[#1127](https://github.com/adap/flower/pull/1127), " +"[#914](https://github.com/adap/flower/pull/914))" -#: ../../source/ref-changelog.md:660 +#: ../../source/ref-changelog.md:726 #, fuzzy msgid "" "The first preview release of Flower Baselines has arrived! We're " "kickstarting Flower Baselines with implementations of FedOpt (FedYogi, " -"FedAdam, FedAdagrad), FedBN, and FedAvgM. Check the documentation on how to " -"use [Flower Baselines](https://flower.ai/docs/using-baselines.html). With " -"this first preview release we're also inviting the community to [contribute " -"their own baselines](https://flower.ai/docs/baselines/how-to-contribute-" -"baselines.html)." -msgstr "" -"Flower Baselines 的第一个预览版已经发布!我们通过实现 FedOpt(FedYogi、" -"FedAdam、FedAdagrad)、FedBN 和 FedAvgM 来启动 Flower Baselines。请查阅文档了" -"解如何使用 [Flower Baselines](https://flower.ai/docs/using-baselines.html)。" -"在首次发布预览版时,我们还邀请社区成员[贡献自己的Baselines](https://flower." -"ai/docs/contributing-baselines.html)。" +"FedAdam, FedAdagrad), FedBN, and FedAvgM. Check the documentation on how " +"to use [Flower Baselines](https://flower.ai/docs/using-baselines.html). " +"With this first preview release we're also inviting the community to " +"[contribute their own baselines](https://flower.ai/docs/baselines/how-to-" +"contribute-baselines.html)." +msgstr "" +"Flower Baselines 的第一个预览版已经发布!我们通过实现 " +"FedOpt(FedYogi、FedAdam、FedAdagrad)、FedBN 和 FedAvgM 来启动 Flower " +"Baselines。请查阅文档了解如何使用 [Flower Baselines](https://flower.ai/docs/using-" +"baselines.html)。在首次发布预览版时,我们还邀请社区成员[贡献自己的Baselines](https://flower.ai/docs" +"/contributing-baselines.html)。" -#: ../../source/ref-changelog.md:662 +#: ../../source/ref-changelog.md:728 msgid "" -"**C++ client SDK (preview) and code example** ([#1111](https://github.com/" -"adap/flower/pull/1111))" -msgstr "" -"**C++客户端SDK(预览版)和代码示例**([#1111](https://github.com/adap/flower/" -"pull/1111))" +"**C++ client SDK (preview) and code example** " +"([#1111](https://github.com/adap/flower/pull/1111))" +msgstr "**C++客户端SDK(预览版)和代码示例**([#1111](https://github.com/adap/flower/pull/1111))" -#: ../../source/ref-changelog.md:664 +#: ../../source/ref-changelog.md:730 msgid "" -"Preview support for Flower clients written in C++. The C++ preview includes " -"a Flower client SDK and a quickstart code example that demonstrates a simple " -"C++ client using the SDK." +"Preview support for Flower clients written in C++. The C++ preview " +"includes a Flower client SDK and a quickstart code example that " +"demonstrates a simple C++ client using the SDK." 
msgstr "" -"预览版支持用 C++ 编写的 Flower 客户端。C++ 预览版包括一个 Flower 客户端 SDK " -"和一个快速入门代码示例,使用 SDK 演示了一个简单的 C++ 客户端。" +"预览版支持用 C++ 编写的 Flower 客户端。C++ 预览版包括一个 Flower 客户端 SDK 和一个快速入门代码示例,使用 SDK " +"演示了一个简单的 C++ 客户端。" -#: ../../source/ref-changelog.md:666 +#: ../../source/ref-changelog.md:732 msgid "" -"**Add experimental support for Python 3.10 and Python 3.11** ([#1135]" -"(https://github.com/adap/flower/pull/1135))" +"**Add experimental support for Python 3.10 and Python 3.11** " +"([#1135](https://github.com/adap/flower/pull/1135))" msgstr "" -"** 增加对 Python 3.10 和 Python 3.11 的实验支持** ([#1135](https://github." -"com/adap/flower/pull/1135))" +"** 增加对 Python 3.10 和 Python 3.11 的实验支持** " +"([#1135](https://github.com/adap/flower/pull/1135))" -#: ../../source/ref-changelog.md:668 +#: ../../source/ref-changelog.md:734 msgid "" -"Python 3.10 is the latest stable release of Python and Python 3.11 is due to " -"be released in October. This Flower release adds experimental support for " -"both Python versions." +"Python 3.10 is the latest stable release of Python and Python 3.11 is due" +" to be released in October. This Flower release adds experimental support" +" for both Python versions." msgstr "" -"Python 3.10 是 Python 的最新稳定版本,Python 3.11 将于 10 月份发布。Flower 版" -"本增加了对这两个 Python 版本的实验支持。" +"Python 3.10 是 Python 的最新稳定版本,Python 3.11 将于 10 月份发布。Flower 版本增加了对这两个 " +"Python 版本的实验支持。" -#: ../../source/ref-changelog.md:670 +#: ../../source/ref-changelog.md:736 msgid "" -"**Aggregate custom metrics through user-provided functions** ([#1144]" -"(https://github.com/adap/flower/pull/1144))" -msgstr "" -"**通过用户提供的函数聚合自定义指标**([#1144](https://github.com/adap/flower/" -"pull/1144))" +"**Aggregate custom metrics through user-provided functions** " +"([#1144](https://github.com/adap/flower/pull/1144))" +msgstr "**通过用户提供的函数聚合自定义指标**([#1144](https://github.com/adap/flower/pull/1144))" -#: ../../source/ref-changelog.md:672 +#: ../../source/ref-changelog.md:738 msgid "" -"Custom metrics (e.g., `accuracy`) can now be aggregated without having to " -"customize the strategy. Built-in strategies support two new arguments, " +"Custom metrics (e.g., `accuracy`) can now be aggregated without having to" +" customize the strategy. Built-in strategies support two new arguments, " "`fit_metrics_aggregation_fn` and `evaluate_metrics_aggregation_fn`, that " "allow passing custom metric aggregation functions." msgstr "" -"现在无需定制策略即可聚合自定义度量(如`准确度`)。内置策略支持两个新参数:" -"`fit_metrics_aggregation_fn` 和`evaluate_metrics_aggregation_fn`,允许传递自" -"定义度量聚合函数。" +"现在无需定制策略即可聚合自定义度量(如`准确度`)。内置策略支持两个新参数:`fit_metrics_aggregation_fn` " +"和`evaluate_metrics_aggregation_fn`,允许传递自定义度量聚合函数。" -#: ../../source/ref-changelog.md:674 +#: ../../source/ref-changelog.md:740 msgid "" -"**User-configurable round timeout** ([#1162](https://github.com/adap/flower/" -"pull/1162))" -msgstr "" -"**用户可配置的回合超时**([#1162](https://github.com/adap/flower/pull/1162))" +"**User-configurable round timeout** " +"([#1162](https://github.com/adap/flower/pull/1162))" +msgstr "**用户可配置的回合超时**([#1162](https://github.com/adap/flower/pull/1162))" -#: ../../source/ref-changelog.md:676 +#: ../../source/ref-changelog.md:742 msgid "" "A new configuration value allows the round timeout to be set for " -"`start_server` and `start_simulation`. If the `config` dictionary contains a " -"`round_timeout` key (with a `float` value in seconds), the server will wait " -"*at least* `round_timeout` seconds before it closes the connection." +"`start_server` and `start_simulation`. 
If the `config` dictionary " +"contains a `round_timeout` key (with a `float` value in seconds), the " +"server will wait *at least* `round_timeout` seconds before it closes the " +"connection." msgstr "" -"新的配置值允许为 `start_server` 和 `start_simulation` 设置回合超时。如果 " -"`config` 字典中包含一个 `round_timeout` 键(以秒为单位的 `float`值),服务器" -"将至少等待 ** `round_timeout` 秒后才关闭连接。" +"新的配置值允许为 `start_server` 和 `start_simulation` 设置回合超时。如果 `config` 字典中包含一个 " +"`round_timeout` 键(以秒为单位的 `float`值),服务器将至少等待 ** `round_timeout` 秒后才关闭连接。" -#: ../../source/ref-changelog.md:678 +#: ../../source/ref-changelog.md:744 msgid "" -"**Enable both federated evaluation and centralized evaluation to be used at " -"the same time in all built-in strategies** ([#1091](https://github.com/adap/" -"flower/pull/1091))" +"**Enable both federated evaluation and centralized evaluation to be used " +"at the same time in all built-in strategies** " +"([#1091](https://github.com/adap/flower/pull/1091))" msgstr "" -"**允许在所有内置策略中同时使用联邦评价和集中评估** ([#1091](https://github." -"com/adap/flower/pull/1091))" +"**允许在所有内置策略中同时使用联邦评价和集中评估** " +"([#1091](https://github.com/adap/flower/pull/1091))" -#: ../../source/ref-changelog.md:680 +#: ../../source/ref-changelog.md:746 msgid "" -"Built-in strategies can now perform both federated evaluation (i.e., client-" -"side) and centralized evaluation (i.e., server-side) in the same round. " -"Federated evaluation can be disabled by setting `fraction_eval` to `0.0`." +"Built-in strategies can now perform both federated evaluation (i.e., " +"client-side) and centralized evaluation (i.e., server-side) in the same " +"round. Federated evaluation can be disabled by setting `fraction_eval` to" +" `0.0`." msgstr "" -"内置策略现在可以在同一轮中同时执行联邦评估(即客户端)和集中评估(即服务器" -"端)。可以通过将 `fraction_eval` 设置为 `0.0`来禁用联邦评估。" +"内置策略现在可以在同一轮中同时执行联邦评估(即客户端)和集中评估(即服务器端)。可以通过将 `fraction_eval` 设置为 " +"`0.0`来禁用联邦评估。" -#: ../../source/ref-changelog.md:682 +#: ../../source/ref-changelog.md:748 msgid "" -"**Two new Jupyter Notebook tutorials** ([#1141](https://github.com/adap/" -"flower/pull/1141))" +"**Two new Jupyter Notebook tutorials** " +"([#1141](https://github.com/adap/flower/pull/1141))" msgstr "" -"**两本新的 Jupyter Notebook 教程** ([#1141](https://github.com/adap/flower/" -"pull/1141))" +"**两本新的 Jupyter Notebook 教程** " +"([#1141](https://github.com/adap/flower/pull/1141))" -#: ../../source/ref-changelog.md:684 +#: ../../source/ref-changelog.md:750 msgid "" -"Two Jupyter Notebook tutorials (compatible with Google Colab) explain basic " -"and intermediate Flower features:" -msgstr "" -"两本 Jupyter Notebook 教程(与 Google Colab 兼容)介绍了 Flower 的基本和中级" -"功能:" +"Two Jupyter Notebook tutorials (compatible with Google Colab) explain " +"basic and intermediate Flower features:" +msgstr "两本 Jupyter Notebook 教程(与 Google Colab 兼容)介绍了 Flower 的基本和中级功能:" -#: ../../source/ref-changelog.md:686 +#: ../../source/ref-changelog.md:752 msgid "" -"*An Introduction to Federated Learning*: [Open in Colab](https://colab." 
-"research.google.com/github/adap/flower/blob/main/tutorials/Flower-1-Intro-to-" -"FL-PyTorch.ipynb)" +"*An Introduction to Federated Learning*: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-1" +"-Intro-to-FL-PyTorch.ipynb)" msgstr "" -"*联邦学习简介*: [在 Colab 中打开](https://colab.research.google.com/github/" -"adap/flower/blob/main/tutorials/Flower-1-Intro-to-FL-PyTorch.ipynb)" +"*联邦学习简介*: [在 Colab " +"中打开](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-1" +"-Intro-to-FL-PyTorch.ipynb)" -#: ../../source/ref-changelog.md:688 +#: ../../source/ref-changelog.md:754 msgid "" -"*Using Strategies in Federated Learning*: [Open in Colab](https://colab." -"research.google.com/github/adap/flower/blob/main/tutorials/Flower-2-" -"Strategies-in-FL-PyTorch.ipynb)" +"*Using Strategies in Federated Learning*: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-2" +"-Strategies-in-FL-PyTorch.ipynb)" msgstr "" -"*在联邦学习中使用策略*: [在 Colab 中打开](https://colab.research.google.com/" -"github/adap/flower/blob/main/tutorials/Flower-2-Strategies-in-FL-PyTorch." -"ipynb)" +"*在联邦学习中使用策略*: [在 Colab " +"中打开](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-2" +"-Strategies-in-FL-PyTorch.ipynb)" -#: ../../source/ref-changelog.md:690 +#: ../../source/ref-changelog.md:756 msgid "" -"**New FedAvgM strategy (Federated Averaging with Server Momentum)** ([#1076]" -"(https://github.com/adap/flower/pull/1076))" +"**New FedAvgM strategy (Federated Averaging with Server Momentum)** " +"([#1076](https://github.com/adap/flower/pull/1076))" msgstr "" -"**新的 FedAvgM 策略(带服务器动量的联邦平均)** ([#1076](https://github.com/" -"adap/flower/pull/1076))" +"**新的 FedAvgM 策略(带服务器动量的联邦平均)** " +"([#1076](https://github.com/adap/flower/pull/1076))" -#: ../../source/ref-changelog.md:692 +#: ../../source/ref-changelog.md:758 msgid "" "The new `FedAvgM` strategy implements Federated Averaging with Server " "Momentum \\[Hsu et al., 2019\\]." msgstr "新的 \"FedAvgM \"策略实现了带服务器动量的联邦平均[Hsu et al., 2019\\]." -#: ../../source/ref-changelog.md:694 +#: ../../source/ref-changelog.md:760 msgid "" -"**New advanced PyTorch code example** ([#1007](https://github.com/adap/" -"flower/pull/1007))" -msgstr "" -"**新的 PyTorch 高级代码示例** ([#1007](https://github.com/adap/flower/" -"pull/1007))" +"**New advanced PyTorch code example** " +"([#1007](https://github.com/adap/flower/pull/1007))" +msgstr "**新的 PyTorch 高级代码示例** ([#1007](https://github.com/adap/flower/pull/1007))" -#: ../../source/ref-changelog.md:696 +#: ../../source/ref-changelog.md:762 msgid "" "A new code example (`advanced_pytorch`) demonstrates advanced Flower " "concepts with PyTorch." msgstr "新代码示例 (`advanced_pytorch`) 演示了 PyTorch 的高级 Flower 概念。" -#: ../../source/ref-changelog.md:698 +#: ../../source/ref-changelog.md:764 msgid "" -"**New JAX code example** ([#906](https://github.com/adap/flower/pull/906), " +"**New JAX code example** " +"([#906](https://github.com/adap/flower/pull/906), " "[#1143](https://github.com/adap/flower/pull/1143))" msgstr "" "**新的 JAX 代码示例**([#906](https://github.com/adap/flower/pull/906), " "[#1143](https://github.com/adap/flower/pull/1143)" -#: ../../source/ref-changelog.md:700 +#: ../../source/ref-changelog.md:766 msgid "" "A new code example (`jax_from_centralized_to_federated`) shows federated " "learning with JAX and Flower." 
-msgstr "" -"新代码示例(`jax_from_centralized_to_federated`)展示了使用 JAX 和 Flower 的" -"联邦学习。" +msgstr "新代码示例(`jax_from_centralized_to_federated`)展示了使用 JAX 和 Flower 的联邦学习。" -#: ../../source/ref-changelog.md:704 +#: ../../source/ref-changelog.md:770 msgid "" "New option to keep Ray running if Ray was already initialized in " "`start_simulation` ([#1177](https://github.com/adap/flower/pull/1177))" msgstr "" -"新增选项,用于在 \"start_simulation\"(开始模拟)中已初始化 Ray 的情况下保持 " -"Ray 运行([#1177](https://github.com/adap/flower/pull/1177))" +"新增选项,用于在 \"start_simulation\"(开始模拟)中已初始化 Ray 的情况下保持 Ray " +"运行([#1177](https://github.com/adap/flower/pull/1177))" -#: ../../source/ref-changelog.md:705 +#: ../../source/ref-changelog.md:771 msgid "" "Add support for custom `ClientManager` as a `start_simulation` parameter " "([#1171](https://github.com/adap/flower/pull/1171))" msgstr "" -"添加对自定义 \"客户端管理器 \"作为 \"start_simulation \"参数的支持([#1171]" -"(https://github.com/adap/flower/pull/1171))" +"添加对自定义 \"客户端管理器 \"作为 \"start_simulation " +"\"参数的支持([#1171](https://github.com/adap/flower/pull/1171))" -#: ../../source/ref-changelog.md:706 +#: ../../source/ref-changelog.md:772 msgid "" -"New documentation for [implementing strategies](https://flower.ai/docs/" -"framework/how-to-implement-strategies.html) ([#1097](https://github.com/adap/" -"flower/pull/1097), [#1175](https://github.com/adap/flower/pull/1175))" +"New documentation for [implementing " +"strategies](https://flower.ai/docs/framework/how-to-implement-" +"strategies.html) ([#1097](https://github.com/adap/flower/pull/1097), " +"[#1175](https://github.com/adap/flower/pull/1175))" msgstr "" -"[实施战略](https://flower.ai/docs/framework/how-to-implement-strategies." -"html) 的新文件([#1097](https://github.com/adap/flower/pull/1097), [#1175]" -"(https://github.com/adap/flower/pull/1175)" +"[实施战略](https://flower.ai/docs/framework/how-to-implement-strategies.html)" +" 的新文件([#1097](https://github.com/adap/flower/pull/1097), " +"[#1175](https://github.com/adap/flower/pull/1175)" -#: ../../source/ref-changelog.md:707 +#: ../../source/ref-changelog.md:773 msgid "" -"New mobile-friendly documentation theme ([#1174](https://github.com/adap/" -"flower/pull/1174))" -msgstr "" -"新的移动友好型文档主题 ([#1174](https://github.com/adap/flower/pull/1174))" +"New mobile-friendly documentation theme " +"([#1174](https://github.com/adap/flower/pull/1174))" +msgstr "新的移动友好型文档主题 ([#1174](https://github.com/adap/flower/pull/1174))" -#: ../../source/ref-changelog.md:708 +#: ../../source/ref-changelog.md:774 msgid "" "Limit version range for (optional) `ray` dependency to include only " -"compatible releases (`>=1.9.2,<1.12.0`) ([#1205](https://github.com/adap/" -"flower/pull/1205))" +"compatible releases (`>=1.9.2,<1.12.0`) " +"([#1205](https://github.com/adap/flower/pull/1205))" msgstr "" "限制(可选)`ray`依赖的版本范围,使其仅包含兼容版本(`>=1.9.2,<1.12.0`) " "([#1205](https://github.com/adap/flower/pull/1205))" -#: ../../source/ref-changelog.md:712 +#: ../../source/ref-changelog.md:778 msgid "" -"**Remove deprecated support for Python 3.6** ([#871](https://github.com/adap/" -"flower/pull/871))" -msgstr "" -"**删除对 Python 3.6 的过时支持** ([#871](https://github.com/adap/flower/" -"pull/871))" +"**Remove deprecated support for Python 3.6** " +"([#871](https://github.com/adap/flower/pull/871))" +msgstr "**删除对 Python 3.6 的过时支持** ([#871](https://github.com/adap/flower/pull/871))" -#: ../../source/ref-changelog.md:713 +#: ../../source/ref-changelog.md:779 msgid "" -"**Remove deprecated KerasClient** ([#857](https://github.com/adap/flower/" 
-"pull/857))" -msgstr "" -"**移除过时的 KerasClient**([#857](https://github.com/adap/flower/pull/857))" +"**Remove deprecated KerasClient** " +"([#857](https://github.com/adap/flower/pull/857))" +msgstr "**移除过时的 KerasClient**([#857](https://github.com/adap/flower/pull/857))" -#: ../../source/ref-changelog.md:714 +#: ../../source/ref-changelog.md:780 msgid "" -"**Remove deprecated no-op extra installs** ([#973](https://github.com/adap/" -"flower/pull/973))" -msgstr "" -"**移除过时的不操作额外安装** ([#973](https://github.com/adap/flower/" -"pull/973))" +"**Remove deprecated no-op extra installs** " +"([#973](https://github.com/adap/flower/pull/973))" +msgstr "**移除过时的不操作额外安装** ([#973](https://github.com/adap/flower/pull/973))" -#: ../../source/ref-changelog.md:715 +#: ../../source/ref-changelog.md:781 msgid "" "**Remove deprecated proto fields from** `FitRes` **and** `EvaluateRes` " "([#869](https://github.com/adap/flower/pull/869))" msgstr "" -"**从** `FitRes` **和** `EvaluateRes` 中移除已废弃的 proto 字段 ([#869]" -"(https://github.com/adap/flower/pull/869))" +"**从** `FitRes` **和** `EvaluateRes` 中移除已废弃的 proto 字段 " +"([#869](https://github.com/adap/flower/pull/869))" -#: ../../source/ref-changelog.md:716 +#: ../../source/ref-changelog.md:782 msgid "" -"**Remove deprecated QffedAvg strategy (replaced by QFedAvg)** ([#1107]" -"(https://github.com/adap/flower/pull/1107))" +"**Remove deprecated QffedAvg strategy (replaced by QFedAvg)** " +"([#1107](https://github.com/adap/flower/pull/1107))" msgstr "" -"**移除过时的 QffedAvg 策略(由 QFedAvg 取代)** ([#1107](https://github.com/" -"adap/flower/pull/1107))" +"**移除过时的 QffedAvg 策略(由 QFedAvg 取代)** " +"([#1107](https://github.com/adap/flower/pull/1107))" -#: ../../source/ref-changelog.md:717 +#: ../../source/ref-changelog.md:783 msgid "" -"**Remove deprecated DefaultStrategy strategy** ([#1142](https://github.com/" -"adap/flower/pull/1142))" +"**Remove deprecated DefaultStrategy strategy** " +"([#1142](https://github.com/adap/flower/pull/1142))" msgstr "" -"**删除过时的 DefaultStrategy 策略** ([#1142](https://github.com/adap/flower/" -"pull/1142))" +"**删除过时的 DefaultStrategy 策略** " +"([#1142](https://github.com/adap/flower/pull/1142))" -#: ../../source/ref-changelog.md:718 +#: ../../source/ref-changelog.md:784 msgid "" -"**Remove deprecated support for eval_fn accuracy return value** ([#1142]" -"(https://github.com/adap/flower/pull/1142))" +"**Remove deprecated support for eval_fn accuracy return value** " +"([#1142](https://github.com/adap/flower/pull/1142))" msgstr "" -"**删除已过时的对 eval_fn 返回值准确性的支持** ([#1142](https://github.com/" -"adap/flower/pull/1142))" +"**删除已过时的对 eval_fn 返回值准确性的支持** " +"([#1142](https://github.com/adap/flower/pull/1142))" -#: ../../source/ref-changelog.md:719 +#: ../../source/ref-changelog.md:785 msgid "" "**Remove deprecated support for passing initial parameters as NumPy " "ndarrays** ([#1142](https://github.com/adap/flower/pull/1142))" msgstr "" -"**移除对以 NumPy ndarrays 传递初始参数的过时支持** ([#1142](https://github." 
-"com/adap/flower/pull/1142))" +"**移除对以 NumPy ndarrays 传递初始参数的过时支持** " +"([#1142](https://github.com/adap/flower/pull/1142))" -#: ../../source/ref-changelog.md:721 +#: ../../source/ref-changelog.md:787 msgid "v0.18.0 (2022-02-28)" msgstr "v0.18.0 (2022-02-28)" -#: ../../source/ref-changelog.md:725 +#: ../../source/ref-changelog.md:791 msgid "" "**Improved Virtual Client Engine compatibility with Jupyter Notebook / " -"Google Colab** ([#866](https://github.com/adap/flower/pull/866), [#872]" -"(https://github.com/adap/flower/pull/872), [#833](https://github.com/adap/" -"flower/pull/833), [#1036](https://github.com/adap/flower/pull/1036))" +"Google Colab** ([#866](https://github.com/adap/flower/pull/866), " +"[#872](https://github.com/adap/flower/pull/872), " +"[#833](https://github.com/adap/flower/pull/833), " +"[#1036](https://github.com/adap/flower/pull/1036))" msgstr "" -"**改进了虚拟客户端引擎与 Jupyter Notebook / Google Colab 的兼容性** ([#866]" -"(https://github.com/adap/flower/pull/866), [#872](https://github.com/adap/" -"flower/pull/872), [#833](https://github.com/adap/flower/pull/833), [#1036]" -"(https://github.com/adap/flower/pull/1036))" +"**改进了虚拟客户端引擎与 Jupyter Notebook / Google Colab 的兼容性** " +"([#866](https://github.com/adap/flower/pull/866), " +"[#872](https://github.com/adap/flower/pull/872), " +"[#833](https://github.com/adap/flower/pull/833), " +"[#1036](https://github.com/adap/flower/pull/1036))" -#: ../../source/ref-changelog.md:727 +#: ../../source/ref-changelog.md:793 msgid "" -"Simulations (using the Virtual Client Engine through `start_simulation`) now " -"work more smoothly on Jupyter Notebooks (incl. Google Colab) after " +"Simulations (using the Virtual Client Engine through `start_simulation`) " +"now work more smoothly on Jupyter Notebooks (incl. Google Colab) after " "installing Flower with the `simulation` extra (`pip install " "flwr[simulation]`)." msgstr "" -"通过 `start_simulation` 在 Jupyter 笔记本(包括 Google Colab)上安装 Flower " -"并附加 `simulation` (`pip install flwr[simulation]`)后,模拟(通过 " -"`start_simulation` 使用虚拟客户端引擎)现在可以更流畅地运行。" +"通过 `start_simulation` 在 Jupyter 笔记本(包括 Google Colab)上安装 Flower 并附加 " +"`simulation` (`pip install flwr[simulation]`)后,模拟(通过 `start_simulation` " +"使用虚拟客户端引擎)现在可以更流畅地运行。" -#: ../../source/ref-changelog.md:729 +#: ../../source/ref-changelog.md:795 msgid "" -"**New Jupyter Notebook code example** ([#833](https://github.com/adap/flower/" -"pull/833))" +"**New Jupyter Notebook code example** " +"([#833](https://github.com/adap/flower/pull/833))" msgstr "" -"**新的 Jupyter Notebook 代码示例** ([#833](https://github.com/adap/flower/" -"pull/833))" +"**新的 Jupyter Notebook 代码示例** " +"([#833](https://github.com/adap/flower/pull/833))" -#: ../../source/ref-changelog.md:731 +#: ../../source/ref-changelog.md:797 msgid "" -"A new code example (`quickstart_simulation`) demonstrates Flower simulations " -"using the Virtual Client Engine through Jupyter Notebook (incl. Google " -"Colab)." +"A new code example (`quickstart_simulation`) demonstrates Flower " +"simulations using the Virtual Client Engine through Jupyter Notebook " +"(incl. Google Colab)." 
msgstr "" "新代码示例(`quickstart_simulation`)通过 Jupyter Notebook(包括 Google " "Colab)演示了使用虚拟客户端引擎进行 Flower 模拟。" -#: ../../source/ref-changelog.md:733 +#: ../../source/ref-changelog.md:799 msgid "" -"**Client properties (feature preview)** ([#795](https://github.com/adap/" -"flower/pull/795))" -msgstr "" -"**客户端属性(功能预览)** ([#795](https://github.com/adap/flower/pull/795))" +"**Client properties (feature preview)** " +"([#795](https://github.com/adap/flower/pull/795))" +msgstr "**客户端属性(功能预览)** ([#795](https://github.com/adap/flower/pull/795))" -#: ../../source/ref-changelog.md:735 +#: ../../source/ref-changelog.md:801 msgid "" -"Clients can implement a new method `get_properties` to enable server-side " -"strategies to query client properties." -msgstr "" -"客户端可以实现一个新方法 `get_properties`,以启用服务器端策略来查询客户端属" -"性。" +"Clients can implement a new method `get_properties` to enable server-side" +" strategies to query client properties." +msgstr "客户端可以实现一个新方法 `get_properties`,以启用服务器端策略来查询客户端属性。" -#: ../../source/ref-changelog.md:737 +#: ../../source/ref-changelog.md:803 msgid "" -"**Experimental Android support with TFLite** ([#865](https://github.com/adap/" -"flower/pull/865))" -msgstr "" -"** 使用 TFLite 实验性支持安卓系统** ([#865](https://github.com/adap/flower/" -"pull/865))" +"**Experimental Android support with TFLite** " +"([#865](https://github.com/adap/flower/pull/865))" +msgstr "** 使用 TFLite 实验性支持安卓系统** ([#865](https://github.com/adap/flower/pull/865))" -#: ../../source/ref-changelog.md:739 +#: ../../source/ref-changelog.md:805 msgid "" "Android support has finally arrived in `main`! Flower is both client-" "agnostic and framework-agnostic by design. One can integrate arbitrary " -"client platforms and with this release, using Flower on Android has become a " -"lot easier." +"client platforms and with this release, using Flower on Android has " +"become a lot easier." msgstr "" -"`main`终于支持 Android 了!Flower 的设计与客户端和框架无关。我们可以集成任意" -"客户端平台,有了这个版本,在安卓系统上使用 Flower 就变得更容易了。" +"`main`终于支持 Android 了!Flower 的设计与客户端和框架无关。我们可以集成任意客户端平台,有了这个版本,在安卓系统上使用 " +"Flower 就变得更容易了。" -#: ../../source/ref-changelog.md:741 +#: ../../source/ref-changelog.md:807 msgid "" -"The example uses TFLite on the client side, along with a new `FedAvgAndroid` " -"strategy. The Android client and `FedAvgAndroid` are still experimental, but " -"they are a first step towards a fully-fledged Android SDK and a unified " -"`FedAvg` implementation that integrated the new functionality from " -"`FedAvgAndroid`." +"The example uses TFLite on the client side, along with a new " +"`FedAvgAndroid` strategy. The Android client and `FedAvgAndroid` are " +"still experimental, but they are a first step towards a fully-fledged " +"Android SDK and a unified `FedAvg` implementation that integrated the new" +" functionality from `FedAvgAndroid`." 
msgstr "" "该示例在客户端使用了 TFLite 以及新的 `FedAvgAndroid`策略。Android 客户端和 " -"`FedAvgAndroid`仍处于试验阶段,但这是向成熟的 Android SDK 和集成了 " -"`FedAvgAndroid`新功能的统一 `FedAvg`实现迈出的第一步。" +"`FedAvgAndroid`仍处于试验阶段,但这是向成熟的 Android SDK 和集成了 `FedAvgAndroid`新功能的统一 " +"`FedAvg`实现迈出的第一步。" -#: ../../source/ref-changelog.md:743 +#: ../../source/ref-changelog.md:809 msgid "" -"**Make gRPC keepalive time user-configurable and decrease default keepalive " -"time** ([#1069](https://github.com/adap/flower/pull/1069))" +"**Make gRPC keepalive time user-configurable and decrease default " +"keepalive time** ([#1069](https://github.com/adap/flower/pull/1069))" msgstr "" -"**使 gRPC 保持连接时间可由用户配置,并缩短默认保持连接时间** ([#1069]" -"(https://github.com/adap/flower/pull/1069))" +"**使 gRPC 保持连接时间可由用户配置,并缩短默认保持连接时间** " +"([#1069](https://github.com/adap/flower/pull/1069))" -#: ../../source/ref-changelog.md:745 +#: ../../source/ref-changelog.md:811 msgid "" "The default gRPC keepalive time has been reduced to increase the " -"compatibility of Flower with more cloud environments (for example, Microsoft " -"Azure). Users can configure the keepalive time to customize the gRPC stack " -"based on specific requirements." +"compatibility of Flower with more cloud environments (for example, " +"Microsoft Azure). Users can configure the keepalive time to customize the" +" gRPC stack based on specific requirements." msgstr "" -"为提高 Flower 与更多云环境(如 Microsoft Azure)的兼容性,缩短了默认 gRPC 保" -"持时间。用户可以根据具体要求配置 keepalive 时间,自定义 gRPC 堆栈。" +"为提高 Flower 与更多云环境(如 Microsoft Azure)的兼容性,缩短了默认 gRPC 保持时间。用户可以根据具体要求配置 " +"keepalive 时间,自定义 gRPC 堆栈。" -#: ../../source/ref-changelog.md:747 +#: ../../source/ref-changelog.md:813 msgid "" -"**New differential privacy example using Opacus and PyTorch** ([#805]" -"(https://github.com/adap/flower/pull/805))" +"**New differential privacy example using Opacus and PyTorch** " +"([#805](https://github.com/adap/flower/pull/805))" msgstr "" -"**使用 Opacus 和 PyTorch 的新差分隐私示例** ([#805](https://github.com/adap/" -"flower/pull/805))" +"**使用 Opacus 和 PyTorch 的新差分隐私示例** " +"([#805](https://github.com/adap/flower/pull/805))" -#: ../../source/ref-changelog.md:749 +#: ../../source/ref-changelog.md:815 msgid "" -"A new code example (`opacus`) demonstrates differentially-private federated " -"learning with Opacus, PyTorch, and Flower." -msgstr "" -"一个新的代码示例(\"opacus\")演示了使用 Opacus、PyTorch 和 Flower 进行差分隐" -"私的联邦学习。" +"A new code example (`opacus`) demonstrates differentially-private " +"federated learning with Opacus, PyTorch, and Flower." +msgstr "一个新的代码示例(\"opacus\")演示了使用 Opacus、PyTorch 和 Flower 进行差分隐私的联邦学习。" -#: ../../source/ref-changelog.md:751 +#: ../../source/ref-changelog.md:817 msgid "" -"**New Hugging Face Transformers code example** ([#863](https://github.com/" -"adap/flower/pull/863))" +"**New Hugging Face Transformers code example** " +"([#863](https://github.com/adap/flower/pull/863))" msgstr "" -"**新的Hugging Face Transformers代码示例** ([#863](https://github.com/adap/" -"flower/pull/863))" +"**新的Hugging Face Transformers代码示例** " +"([#863](https://github.com/adap/flower/pull/863))" -#: ../../source/ref-changelog.md:753 +#: ../../source/ref-changelog.md:819 msgid "" -"A new code example (`quickstart_huggingface`) demonstrates usage of Hugging " -"Face Transformers with Flower." -msgstr "" -"新的代码示例(`quickstart_huggingface`)证明了结合Flower和Hugging Face " -"Transformers的实用性。" +"A new code example (`quickstart_huggingface`) demonstrates usage of " +"Hugging Face Transformers with Flower." 
+msgstr "新的代码示例(`quickstart_huggingface`)证明了结合Flower和Hugging Face Transformers的实用性。" -#: ../../source/ref-changelog.md:755 +#: ../../source/ref-changelog.md:821 msgid "" -"**New MLCube code example** ([#779](https://github.com/adap/flower/" -"pull/779), [#1034](https://github.com/adap/flower/pull/1034), [#1065]" -"(https://github.com/adap/flower/pull/1065), [#1090](https://github.com/adap/" -"flower/pull/1090))" +"**New MLCube code example** " +"([#779](https://github.com/adap/flower/pull/779), " +"[#1034](https://github.com/adap/flower/pull/1034), " +"[#1065](https://github.com/adap/flower/pull/1065), " +"[#1090](https://github.com/adap/flower/pull/1090))" msgstr "" "**新的 MLCube 代码示例** ([#779](https://github.com/adap/flower/pull/779), " -"[#1034](https://github.com/adap/flower/pull/1034), [#1065](https://github." -"com/adap/flower/pull/1065), [#1090](https://github.com/adap/flower/" -"pull/1090))" +"[#1034](https://github.com/adap/flower/pull/1034), " +"[#1065](https://github.com/adap/flower/pull/1065), " +"[#1090](https://github.com/adap/flower/pull/1090))" -#: ../../source/ref-changelog.md:757 +#: ../../source/ref-changelog.md:823 msgid "" -"A new code example (`quickstart_mlcube`) demonstrates usage of MLCube with " -"Flower." +"A new code example (`quickstart_mlcube`) demonstrates usage of MLCube " +"with Flower." msgstr "新代码示例(\"quickstart_mlcube\")演示了 MLCube 与 Flower 的用法。" -#: ../../source/ref-changelog.md:759 +#: ../../source/ref-changelog.md:825 msgid "" -"**SSL-enabled server and client** ([#842](https://github.com/adap/flower/" -"pull/842), [#844](https://github.com/adap/flower/pull/844), [#845](https://" -"github.com/adap/flower/pull/845), [#847](https://github.com/adap/flower/" -"pull/847), [#993](https://github.com/adap/flower/pull/993), [#994](https://" -"github.com/adap/flower/pull/994))" +"**SSL-enabled server and client** " +"([#842](https://github.com/adap/flower/pull/842), " +"[#844](https://github.com/adap/flower/pull/844), " +"[#845](https://github.com/adap/flower/pull/845), " +"[#847](https://github.com/adap/flower/pull/847), " +"[#993](https://github.com/adap/flower/pull/993), " +"[#994](https://github.com/adap/flower/pull/994))" msgstr "" -"** 支持 SSL 的服务器和客户端** ([#842](https://github.com/adap/flower/" -"pull/842), [#844](https://github.com/adap/flower/pull/844), [#845](https://" -"github.com/adap/flower/pull/845), [#847](https://github.com/adap/flower/" -"pull/847), [#993](https://github.com/adap/flower/pull/993), [#994](https://" -"github.com/adap/flower/pull/994))" +"** 支持 SSL 的服务器和客户端** ([#842](https://github.com/adap/flower/pull/842), " +"[#844](https://github.com/adap/flower/pull/844), " +"[#845](https://github.com/adap/flower/pull/845), " +"[#847](https://github.com/adap/flower/pull/847), " +"[#993](https://github.com/adap/flower/pull/993), " +"[#994](https://github.com/adap/flower/pull/994))" -#: ../../source/ref-changelog.md:761 +#: ../../source/ref-changelog.md:827 msgid "" -"SSL enables secure encrypted connections between clients and servers. This " -"release open-sources the Flower secure gRPC implementation to make encrypted " -"communication channels accessible to all Flower users." -msgstr "" -"SSL 可实现客户端与服务器之间的安全加密连接。该版本开源了 Flower 安全 gRPC 实" -"现,使所有 Flower 用户都能访问加密通信通道。" +"SSL enables secure encrypted connections between clients and servers. " +"This release open-sources the Flower secure gRPC implementation to make " +"encrypted communication channels accessible to all Flower users." 
+msgstr "SSL 可实现客户端与服务器之间的安全加密连接。该版本开源了 Flower 安全 gRPC 实现,使所有 Flower 用户都能访问加密通信通道。" -#: ../../source/ref-changelog.md:763 +#: ../../source/ref-changelog.md:829 msgid "" -"**Updated** `FedAdam` **and** `FedYogi` **strategies** ([#885](https://" -"github.com/adap/flower/pull/885), [#895](https://github.com/adap/flower/" -"pull/895))" +"**Updated** `FedAdam` **and** `FedYogi` **strategies** " +"([#885](https://github.com/adap/flower/pull/885), " +"[#895](https://github.com/adap/flower/pull/895))" msgstr "" -"**更新**`FedAdam`**和**`FedYogi`**战略** ([#885](https://github.com/adap/" -"flower/pull/885), [#895](https://github.com/adap/flower/pull/895))" +"**更新**`FedAdam`**和**`FedYogi`**战略** " +"([#885](https://github.com/adap/flower/pull/885), " +"[#895](https://github.com/adap/flower/pull/895))" -#: ../../source/ref-changelog.md:765 +#: ../../source/ref-changelog.md:831 msgid "" -"`FedAdam` and `FedAdam` match the latest version of the Adaptive Federated " -"Optimization paper." +"`FedAdam` and `FedAdam` match the latest version of the Adaptive " +"Federated Optimization paper." msgstr "FedAdam \"和 \"FedAdam \"与最新版本的 \"自适应联邦优化 \"论文相匹配。" -#: ../../source/ref-changelog.md:767 +#: ../../source/ref-changelog.md:833 msgid "" -"**Initialize** `start_simulation` **with a list of client IDs** ([#860]" -"(https://github.com/adap/flower/pull/860))" +"**Initialize** `start_simulation` **with a list of client IDs** " +"([#860](https://github.com/adap/flower/pull/860))" msgstr "" -"**初始化** `start_simulation` **使用客户端 ID 列表** ([#860](https://github." -"com/adap/flower/pull/860))" +"**初始化** `start_simulation` **使用客户端 ID 列表** " +"([#860](https://github.com/adap/flower/pull/860))" -#: ../../source/ref-changelog.md:769 +#: ../../source/ref-changelog.md:835 msgid "" "`start_simulation` can now be called with a list of client IDs " "(`clients_ids`, type: `List[str]`). Those IDs will be passed to the " @@ -18580,297 +20072,270 @@ msgid "" "identifiers." 
msgstr "" "现在可以使用客户端 ID 列表(`clients_ids`,类型:`List[str]`)调用 " -"`start_simulation`。每当需要初始化客户端时,这些 ID 就会被传递到 `client_fn` " -"中,这样就能更轻松地加载无法通过 `int` 标识符访问的数据分区。" +"`start_simulation`。每当需要初始化客户端时,这些 ID 就会被传递到 `client_fn` 中,这样就能更轻松地加载无法通过 " +"`int` 标识符访问的数据分区。" -#: ../../source/ref-changelog.md:773 +#: ../../source/ref-changelog.md:839 msgid "" -"Update `num_examples` calculation in PyTorch code examples in ([#909]" -"(https://github.com/adap/flower/pull/909))" +"Update `num_examples` calculation in PyTorch code examples in " +"([#909](https://github.com/adap/flower/pull/909))" msgstr "" -"更新 PyTorch 代码示例中的 \"num_examples \"计算 ([#909](https://github.com/" -"adap/flower/pull/909))" +"更新 PyTorch 代码示例中的 \"num_examples \"计算 " +"([#909](https://github.com/adap/flower/pull/909))" -#: ../../source/ref-changelog.md:774 +#: ../../source/ref-changelog.md:840 msgid "" -"Expose Flower version through `flwr.__version__` ([#952](https://github.com/" -"adap/flower/pull/952))" +"Expose Flower version through `flwr.__version__` " +"([#952](https://github.com/adap/flower/pull/952))" msgstr "" -"通过 `flwr.__version__` 公开 Flower 版本 ([#952](https://github.com/adap/" -"flower/pull/952))" +"通过 `flwr.__version__` 公开 Flower 版本 " +"([#952](https://github.com/adap/flower/pull/952))" -#: ../../source/ref-changelog.md:775 +#: ../../source/ref-changelog.md:841 msgid "" -"`start_server` in `app.py` now returns a `History` object containing metrics " -"from training ([#974](https://github.com/adap/flower/pull/974))" +"`start_server` in `app.py` now returns a `History` object containing " +"metrics from training ([#974](https://github.com/adap/flower/pull/974))" msgstr "" -"`app.py`中的 `start_server`现在会返回一个 `History` 对象,其中包含训练中的指" -"标([#974](https://github.com/adap/flower/pull/974))" +"`app.py`中的 `start_server`现在会返回一个 `History` " +"对象,其中包含训练中的指标([#974](https://github.com/adap/flower/pull/974))" -#: ../../source/ref-changelog.md:776 +#: ../../source/ref-changelog.md:842 msgid "" -"Make `max_workers` (used by `ThreadPoolExecutor`) configurable ([#978]" -"(https://github.com/adap/flower/pull/978))" +"Make `max_workers` (used by `ThreadPoolExecutor`) configurable " +"([#978](https://github.com/adap/flower/pull/978))" msgstr "" -"使 `max_workers`(由 `ThreadPoolExecutor`使用)可配置([#978](https://github." -"com/adap/flower/pull/978))" +"使 `max_workers`(由 " +"`ThreadPoolExecutor`使用)可配置([#978](https://github.com/adap/flower/pull/978))" -#: ../../source/ref-changelog.md:777 +#: ../../source/ref-changelog.md:843 msgid "" -"Increase sleep time after server start to three seconds in all code examples " -"([#1086](https://github.com/adap/flower/pull/1086))" -msgstr "" -"在所有代码示例中,将服务器启动后的休眠时间延长至三秒([#1086](https://github." 
-"com/adap/flower/pull/1086))" +"Increase sleep time after server start to three seconds in all code " +"examples ([#1086](https://github.com/adap/flower/pull/1086))" +msgstr "在所有代码示例中,将服务器启动后的休眠时间延长至三秒([#1086](https://github.com/adap/flower/pull/1086))" -#: ../../source/ref-changelog.md:778 +#: ../../source/ref-changelog.md:844 msgid "" -"Added a new FAQ section to the documentation ([#948](https://github.com/adap/" -"flower/pull/948))" -msgstr "" -"在文档中添加了新的常见问题部分 ([#948](https://github.com/adap/flower/" -"pull/948))" +"Added a new FAQ section to the documentation " +"([#948](https://github.com/adap/flower/pull/948))" +msgstr "在文档中添加了新的常见问题部分 ([#948](https://github.com/adap/flower/pull/948))" -#: ../../source/ref-changelog.md:779 +#: ../../source/ref-changelog.md:845 msgid "" "And many more under-the-hood changes, library updates, documentation " "changes, and tooling improvements!" msgstr "还有更多底层更改、库更新、文档更改和工具改进!" -#: ../../source/ref-changelog.md:783 +#: ../../source/ref-changelog.md:849 msgid "" "**Removed** `flwr_example` **and** `flwr_experimental` **from release " "build** ([#869](https://github.com/adap/flower/pull/869))" msgstr "" -"**从发布版中删除**`flwr_example`**和**`flwr_experimental`** ([#869](https://" -"github.com/adap/flower/pull/869))" +"**从发布版中删除**`flwr_example`**和**`flwr_experimental`** " +"([#869](https://github.com/adap/flower/pull/869))" -#: ../../source/ref-changelog.md:785 +#: ../../source/ref-changelog.md:851 msgid "" "The packages `flwr_example` and `flwr_experimental` have been deprecated " "since Flower 0.12.0 and they are not longer included in Flower release " "builds. The associated extras (`baseline`, `examples-pytorch`, `examples-" -"tensorflow`, `http-logger`, `ops`) are now no-op and will be removed in an " -"upcoming release." +"tensorflow`, `http-logger`, `ops`) are now no-op and will be removed in " +"an upcoming release." msgstr "" -"自 Flower 0.12.0 起,软件包 `flwr_example` 和 `flwr_experimental` 已被弃用," -"它们不再包含在 Flower 的发布版本中。相关的额外包(`baseline`, `examples-" -"pytorch`, `examples-tensorflow`, `http-logger`, `ops`)现在已不再使用,并将在" -"即将发布的版本中移除。" +"自 Flower 0.12.0 起,软件包 `flwr_example` 和 `flwr_experimental` 已被弃用,它们不再包含在 " +"Flower 的发布版本中。相关的额外包(`baseline`, `examples-pytorch`, `examples-" +"tensorflow`, `http-logger`, `ops`)现在已不再使用,并将在即将发布的版本中移除。" -#: ../../source/ref-changelog.md:787 +#: ../../source/ref-changelog.md:853 msgid "v0.17.0 (2021-09-24)" msgstr "v0.17.0 (2021-09-24)" -#: ../../source/ref-changelog.md:791 +#: ../../source/ref-changelog.md:857 msgid "" -"**Experimental virtual client engine** ([#781](https://github.com/adap/" -"flower/pull/781) [#790](https://github.com/adap/flower/pull/790) [#791]" -"(https://github.com/adap/flower/pull/791))" +"**Experimental virtual client engine** " +"([#781](https://github.com/adap/flower/pull/781) " +"[#790](https://github.com/adap/flower/pull/790) " +"[#791](https://github.com/adap/flower/pull/791))" msgstr "" "**实验性虚拟客户端引擎** ([#781](https://github.com/adap/flower/pull/781) " -"[#790](https://github.com/adap/flower/pull/790) [#791](https://github.com/" -"adap/flower/pull/791))" +"[#790](https://github.com/adap/flower/pull/790) " +"[#791](https://github.com/adap/flower/pull/791))" -#: ../../source/ref-changelog.md:793 +#: ../../source/ref-changelog.md:859 msgid "" -"One of Flower's goals is to enable research at scale. This release enables a " -"first (experimental) peek at a major new feature, codenamed the virtual " -"client engine. 
Virtual clients enable simulations that scale to a (very) " -"large number of clients on a single machine or compute cluster. The easiest " -"way to test the new functionality is to look at the two new code examples " -"called `quickstart_simulation` and `simulation_pytorch`." +"One of Flower's goals is to enable research at scale. This release " +"enables a first (experimental) peek at a major new feature, codenamed the" +" virtual client engine. Virtual clients enable simulations that scale to " +"a (very) large number of clients on a single machine or compute cluster. " +"The easiest way to test the new functionality is to look at the two new " +"code examples called `quickstart_simulation` and `simulation_pytorch`." msgstr "" -"Flower 的目标之一是实现大规模研究。这一版本首次(试验性地)展示了代号为 \"虚" -"拟客户端引擎 \"的重要新功能。虚拟客户端可以在单台机器或计算集群上对大量客户端" -"进行模拟。测试新功能的最简单方法是查看名为 \"quickstart_simulation \"和 " -"\"simulation_pytorch \"的两个新代码示例。" +"Flower 的目标之一是实现大规模研究。这一版本首次(试验性地)展示了代号为 \"虚拟客户端引擎 " +"\"的重要新功能。虚拟客户端可以在单台机器或计算集群上对大量客户端进行模拟。测试新功能的最简单方法是查看名为 " +"\"quickstart_simulation \"和 \"simulation_pytorch \"的两个新代码示例。" -#: ../../source/ref-changelog.md:795 +#: ../../source/ref-changelog.md:861 msgid "" -"The feature is still experimental, so there's no stability guarantee for the " -"API. It's also not quite ready for prime time and comes with a few known " -"caveats. However, those who are curious are encouraged to try it out and " -"share their thoughts." +"The feature is still experimental, so there's no stability guarantee for " +"the API. It's also not quite ready for prime time and comes with a few " +"known caveats. However, those who are curious are encouraged to try it " +"out and share their thoughts." msgstr "" -"该功能仍处于试验阶段,因此无法保证 API 的稳定性。此外,它还没有完全准备好进入" -"黄金时间,并有一些已知的注意事项。不过,我们鼓励好奇的用户尝试使用并分享他们" -"的想法。" +"该功能仍处于试验阶段,因此无法保证 API " +"的稳定性。此外,它还没有完全准备好进入黄金时间,并有一些已知的注意事项。不过,我们鼓励好奇的用户尝试使用并分享他们的想法。" -#: ../../source/ref-changelog.md:797 +#: ../../source/ref-changelog.md:863 msgid "" -"**New built-in strategies** ([#828](https://github.com/adap/flower/pull/828) " +"**New built-in strategies** " +"([#828](https://github.com/adap/flower/pull/828) " "[#822](https://github.com/adap/flower/pull/822))" msgstr "" -"**新的内置策略**([#828](https://github.com/adap/flower/pull/828) [#822]" -"(https://github.com/adap/flower/pull/822)" +"**新的内置策略**([#828](https://github.com/adap/flower/pull/828) " +"[#822](https://github.com/adap/flower/pull/822)" -#: ../../source/ref-changelog.md:799 +#: ../../source/ref-changelog.md:865 msgid "" "FedYogi - Federated learning strategy using Yogi on server-side. " "Implementation based on https://arxiv.org/abs/2003.00295" -msgstr "" -"FedYogi - 在服务器端使用 Yogi 的联邦学习策略。基于 https://arxiv.org/" -"abs/2003.00295 实现" +msgstr "FedYogi - 在服务器端使用 Yogi 的联邦学习策略。基于 https://arxiv.org/abs/2003.00295 实现" -#: ../../source/ref-changelog.md:800 +#: ../../source/ref-changelog.md:866 msgid "" "FedAdam - Federated learning strategy using Adam on server-side. 
" "Implementation based on https://arxiv.org/abs/2003.00295" -msgstr "" -"FedAdam - 在服务器端使用 Adam 的联邦学习策略。基于 https://arxiv.org/" -"abs/2003.00295 实现" +msgstr "FedAdam - 在服务器端使用 Adam 的联邦学习策略。基于 https://arxiv.org/abs/2003.00295 实现" -#: ../../source/ref-changelog.md:802 +#: ../../source/ref-changelog.md:868 msgid "" -"**New PyTorch Lightning code example** ([#617](https://github.com/adap/" -"flower/pull/617))" +"**New PyTorch Lightning code example** " +"([#617](https://github.com/adap/flower/pull/617))" msgstr "" -"**新的 PyTorch Lightning 代码示例** ([#617](https://github.com/adap/flower/" -"pull/617))" +"**新的 PyTorch Lightning 代码示例** " +"([#617](https://github.com/adap/flower/pull/617))" -#: ../../source/ref-changelog.md:804 +#: ../../source/ref-changelog.md:870 msgid "" -"**New Variational Auto-Encoder code example** ([#752](https://github.com/" -"adap/flower/pull/752))" -msgstr "" -"**新的变分自动编码器代码示例** ([#752](https://github.com/adap/flower/" -"pull/752))" +"**New Variational Auto-Encoder code example** " +"([#752](https://github.com/adap/flower/pull/752))" +msgstr "**新的变分自动编码器代码示例** ([#752](https://github.com/adap/flower/pull/752))" -#: ../../source/ref-changelog.md:806 +#: ../../source/ref-changelog.md:872 msgid "" -"**New scikit-learn code example** ([#748](https://github.com/adap/flower/" -"pull/748))" -msgstr "" -"**新的 scikit-learn 代码示例** ([#748](https://github.com/adap/flower/" -"pull/748))" +"**New scikit-learn code example** " +"([#748](https://github.com/adap/flower/pull/748))" +msgstr "**新的 scikit-learn 代码示例** ([#748](https://github.com/adap/flower/pull/748))" -#: ../../source/ref-changelog.md:808 +#: ../../source/ref-changelog.md:874 msgid "" -"**New experimental TensorBoard strategy** ([#789](https://github.com/adap/" -"flower/pull/789))" -msgstr "" -"**新的实验性 TensorBoard 策略**([#789](https://github.com/adap/flower/" -"pull/789))" +"**New experimental TensorBoard strategy** " +"([#789](https://github.com/adap/flower/pull/789))" +msgstr "**新的实验性 TensorBoard 策略**([#789](https://github.com/adap/flower/pull/789))" -#: ../../source/ref-changelog.md:812 +#: ../../source/ref-changelog.md:878 msgid "" -"Improved advanced TensorFlow code example ([#769](https://github.com/adap/" -"flower/pull/769))" -msgstr "" -"改进的高级 TensorFlow 代码示例([#769](https://github.com/adap/flower/" -"pull/769)" +"Improved advanced TensorFlow code example " +"([#769](https://github.com/adap/flower/pull/769))" +msgstr "改进的高级 TensorFlow 代码示例([#769](https://github.com/adap/flower/pull/769)" -#: ../../source/ref-changelog.md:813 +#: ../../source/ref-changelog.md:879 msgid "" -"Warning when `min_available_clients` is misconfigured ([#830](https://github." 
-"com/adap/flower/pull/830))" +"Warning when `min_available_clients` is misconfigured " +"([#830](https://github.com/adap/flower/pull/830))" msgstr "" -"当 `min_available_clients` 配置错误时发出警告 ([#830](https://github.com/" -"adap/flower/pull/830))" +"当 `min_available_clients` 配置错误时发出警告 " +"([#830](https://github.com/adap/flower/pull/830))" -#: ../../source/ref-changelog.md:814 +#: ../../source/ref-changelog.md:880 msgid "" -"Improved gRPC server docs ([#841](https://github.com/adap/flower/pull/841))" -msgstr "" -"改进了 gRPC 服务器文档([#841](https://github.com/adap/flower/pull/841))" +"Improved gRPC server docs " +"([#841](https://github.com/adap/flower/pull/841))" +msgstr "改进了 gRPC 服务器文档([#841](https://github.com/adap/flower/pull/841))" -#: ../../source/ref-changelog.md:815 +#: ../../source/ref-changelog.md:881 msgid "" -"Improved error message in `NumPyClient` ([#851](https://github.com/adap/" -"flower/pull/851))" -msgstr "" -"改进了 `NumPyClient` 中的错误信息 ([#851](https://github.com/adap/flower/" -"pull/851))" +"Improved error message in `NumPyClient` " +"([#851](https://github.com/adap/flower/pull/851))" +msgstr "改进了 `NumPyClient` 中的错误信息 ([#851](https://github.com/adap/flower/pull/851))" -#: ../../source/ref-changelog.md:816 +#: ../../source/ref-changelog.md:882 msgid "" -"Improved PyTorch quickstart code example ([#852](https://github.com/adap/" -"flower/pull/852))" -msgstr "" -"改进的 PyTorch 快速启动代码示例 ([#852](https://github.com/adap/flower/" -"pull/852))" +"Improved PyTorch quickstart code example " +"([#852](https://github.com/adap/flower/pull/852))" +msgstr "改进的 PyTorch 快速启动代码示例 ([#852](https://github.com/adap/flower/pull/852))" -#: ../../source/ref-changelog.md:820 +#: ../../source/ref-changelog.md:886 msgid "" -"**Disabled final distributed evaluation** ([#800](https://github.com/adap/" -"flower/pull/800))" -msgstr "" -"**禁用最终分布式评价** ([#800](https://github.com/adap/flower/pull/800))" +"**Disabled final distributed evaluation** " +"([#800](https://github.com/adap/flower/pull/800))" +msgstr "**禁用最终分布式评价** ([#800](https://github.com/adap/flower/pull/800))" -#: ../../source/ref-changelog.md:822 +#: ../../source/ref-changelog.md:888 msgid "" -"Prior behaviour was to perform a final round of distributed evaluation on " -"all connected clients, which is often not required (e.g., when using server-" -"side evaluation). The prior behaviour can be enabled by passing " +"Prior behaviour was to perform a final round of distributed evaluation on" +" all connected clients, which is often not required (e.g., when using " +"server-side evaluation). The prior behaviour can be enabled by passing " "`force_final_distributed_eval=True` to `start_server`." 
msgstr "" -"之前的行为是在所有连接的客户端上执行最后一轮分布式评估,而这通常是不需要的" -"(例如,在使用服务器端评估时)。可以通过向 `start_server` 传递 " -"`force_final_distributed_eval=True` 来启用之前的行为。" +"之前的行为是在所有连接的客户端上执行最后一轮分布式评估,而这通常是不需要的(例如,在使用服务器端评估时)。可以通过向 `start_server`" +" 传递 `force_final_distributed_eval=True` 来启用之前的行为。" -#: ../../source/ref-changelog.md:824 +#: ../../source/ref-changelog.md:890 msgid "" -"**Renamed q-FedAvg strategy** ([#802](https://github.com/adap/flower/" -"pull/802))" -msgstr "" -"**更名为 q-FedAvg 策略** ([#802](https://github.com/adap/flower/pull/802))" +"**Renamed q-FedAvg strategy** " +"([#802](https://github.com/adap/flower/pull/802))" +msgstr "**更名为 q-FedAvg 策略** ([#802](https://github.com/adap/flower/pull/802))" -#: ../../source/ref-changelog.md:826 +#: ../../source/ref-changelog.md:892 msgid "" -"The strategy named `QffedAvg` was renamed to `QFedAvg` to better reflect the " -"notation given in the original paper (q-FFL is the optimization objective, q-" -"FedAvg is the proposed solver). Note the original (now deprecated) " -"`QffedAvg` class is still available for compatibility reasons (it will be " -"removed in a future release)." +"The strategy named `QffedAvg` was renamed to `QFedAvg` to better reflect " +"the notation given in the original paper (q-FFL is the optimization " +"objective, q-FedAvg is the proposed solver). Note the original (now " +"deprecated) `QffedAvg` class is still available for compatibility reasons" +" (it will be removed in a future release)." msgstr "" -"名为 `QffedAvg` 的策略已更名为 `QFedAvg`,以更好地反映原始论文中给出的符号" -"(q-FFL 是优化目标,q-FedAvg 是建议的求解器)。请注意,出于兼容性原因,原始" -"(现已废弃)的 `QffedAvg` 类仍然可用(它将在未来的版本中移除)。" +"名为 `QffedAvg` 的策略已更名为 `QFedAvg`,以更好地反映原始论文中给出的符号(q-FFL 是优化目标,q-FedAvg " +"是建议的求解器)。请注意,出于兼容性原因,原始(现已废弃)的 `QffedAvg` 类仍然可用(它将在未来的版本中移除)。" -#: ../../source/ref-changelog.md:828 +#: ../../source/ref-changelog.md:894 msgid "" "**Deprecated and renamed code example** `simulation_pytorch` **to** " -"`simulation_pytorch_legacy` ([#791](https://github.com/adap/flower/pull/791))" +"`simulation_pytorch_legacy` " +"([#791](https://github.com/adap/flower/pull/791))" msgstr "" -"**删除并重命名代码示例**`simulation_pytorch`**为" -"**`simulation_pytorch_legacy` ([#791](https://github.com/adap/flower/" -"pull/791))" +"**删除并重命名代码示例**`simulation_pytorch`**为**`simulation_pytorch_legacy` " +"([#791](https://github.com/adap/flower/pull/791))" -#: ../../source/ref-changelog.md:830 +#: ../../source/ref-changelog.md:896 msgid "" -"This example has been replaced by a new example. The new example is based on " -"the experimental virtual client engine, which will become the new default " -"way of doing most types of large-scale simulations in Flower. The existing " -"example was kept for reference purposes, but it might be removed in the " -"future." +"This example has been replaced by a new example. The new example is based" +" on the experimental virtual client engine, which will become the new " +"default way of doing most types of large-scale simulations in Flower. The" +" existing example was kept for reference purposes, but it might be " +"removed in the future." 
msgstr "" -"该示例已被新示例取代。新示例基于试验性虚拟客户端引擎,它将成为在 Flower 中进" -"行大多数类型大规模模拟的新的默认方式。现有示例将作为参考保留,但将来可能会删" -"除。" +"该示例已被新示例取代。新示例基于试验性虚拟客户端引擎,它将成为在 Flower " +"中进行大多数类型大规模模拟的新的默认方式。现有示例将作为参考保留,但将来可能会删除。" -#: ../../source/ref-changelog.md:832 +#: ../../source/ref-changelog.md:898 msgid "v0.16.0 (2021-05-11)" msgstr "v0.16.0 (2021-05-11)" -#: ../../source/ref-changelog.md:836 +#: ../../source/ref-changelog.md:902 msgid "" -"**New built-in strategies** ([#549](https://github.com/adap/flower/pull/549))" +"**New built-in strategies** " +"([#549](https://github.com/adap/flower/pull/549))" msgstr "**新的内置策略** ([#549](https://github.com/adap/flower/pull/549))" -#: ../../source/ref-changelog.md:838 +#: ../../source/ref-changelog.md:904 msgid "(abstract) FedOpt" msgstr "(摘要) FedOpt" -#: ../../source/ref-changelog.md:841 +#: ../../source/ref-changelog.md:907 msgid "" -"**Custom metrics for server and strategies** ([#717](https://github.com/adap/" -"flower/pull/717))" -msgstr "" -"**服务器和策略的自定义指标** ([#717](https://github.com/adap/flower/" -"pull/717))" +"**Custom metrics for server and strategies** " +"([#717](https://github.com/adap/flower/pull/717))" +msgstr "**服务器和策略的自定义指标** ([#717](https://github.com/adap/flower/pull/717))" -#: ../../source/ref-changelog.md:843 +#: ../../source/ref-changelog.md:909 msgid "" "The Flower server is now fully task-agnostic, all remaining instances of " "task-specific metrics (such as `accuracy`) have been replaced by custom " @@ -18878,429 +20343,421 @@ msgid "" "dictionary containing custom metrics from client to server. As of this " "release, custom metrics replace task-specific metrics on the server." msgstr "" -"Flower 服务器现在完全与任务无关,所有剩余的任务特定度量(如 \"准确度\")都已" -"被自定义度量字典取代。Flower 0.15 引入了从客户端向服务器传递包含自定义指标的" -"字典的功能。从本版本开始,自定义指标将取代服务器上的特定任务指标。" +"Flower 服务器现在完全与任务无关,所有剩余的任务特定度量(如 \"准确度\")都已被自定义度量字典取代。Flower 0.15 " +"引入了从客户端向服务器传递包含自定义指标的字典的功能。从本版本开始,自定义指标将取代服务器上的特定任务指标。" -#: ../../source/ref-changelog.md:845 +#: ../../source/ref-changelog.md:911 msgid "" -"Custom metric dictionaries are now used in two user-facing APIs: they are " -"returned from Strategy methods `aggregate_fit`/`aggregate_evaluate` and they " -"enable evaluation functions passed to built-in strategies (via `eval_fn`) to " -"return more than two evaluation metrics. Strategies can even return " -"*aggregated* metrics dictionaries for the server to keep track of." +"Custom metric dictionaries are now used in two user-facing APIs: they are" +" returned from Strategy methods `aggregate_fit`/`aggregate_evaluate` and " +"they enable evaluation functions passed to built-in strategies (via " +"`eval_fn`) to return more than two evaluation metrics. Strategies can " +"even return *aggregated* metrics dictionaries for the server to keep " +"track of." msgstr "" -"自定义度量字典现在可在两个面向用户的 API 中使用:它们可从策略方法 " -"`aggregate_fit`/`aggregate_evaluate` 返回,还可使传递给内置策略(通过 " -"`eval_fn`)的评估函数返回两个以上的评估度量。策略甚至可以返回 *aggregated* 指" -"标字典,以便服务器跟踪。" +"自定义度量字典现在可在两个面向用户的 API 中使用:它们可从策略方法 `aggregate_fit`/`aggregate_evaluate` " +"返回,还可使传递给内置策略(通过 `eval_fn`)的评估函数返回两个以上的评估度量。策略甚至可以返回 *aggregated* " +"指标字典,以便服务器跟踪。" -#: ../../source/ref-changelog.md:847 +#: ../../source/ref-changelog.md:913 msgid "" "Strategy implementations should migrate their `aggregate_fit` and " "`aggregate_evaluate` methods to the new return type (e.g., by simply " -"returning an empty `{}`), server-side evaluation functions should migrate " -"from `return loss, accuracy` to `return loss, {\"accuracy\": accuracy}`." 
+"returning an empty `{}`), server-side evaluation functions should migrate" +" from `return loss, accuracy` to `return loss, {\"accuracy\": accuracy}`." msgstr "" -"Strategy 实现应将其 `aggregate_fit` 和 `aggregate_evaluate` 方法迁移到新的返" -"回类型(例如,只需返回空的 `{}`),服务器端评估函数应从 `return loss, " -"accuracy` 迁移到 `return loss, {\"accuracy\": accuracy}`。" +"Strategy 实现应将其 `aggregate_fit` 和 `aggregate_evaluate` " +"方法迁移到新的返回类型(例如,只需返回空的 `{}`),服务器端评估函数应从 `return loss, accuracy` 迁移到 " +"`return loss, {\"accuracy\": accuracy}`。" -#: ../../source/ref-changelog.md:849 +#: ../../source/ref-changelog.md:915 msgid "" "Flower 0.15-style return types are deprecated (but still supported), " "compatibility will be removed in a future release." -msgstr "" -"Flower 0.15 风格的返回类型已被弃用(但仍受支持),兼容性将在未来的版本中移" -"除。" +msgstr "Flower 0.15 风格的返回类型已被弃用(但仍受支持),兼容性将在未来的版本中移除。" -#: ../../source/ref-changelog.md:851 +#: ../../source/ref-changelog.md:917 msgid "" -"**Migration warnings for deprecated functionality** ([#690](https://github." -"com/adap/flower/pull/690))" -msgstr "" -"** 过时功能的迁移警告** ([#690](https://github.com/adap/flower/pull/690))" +"**Migration warnings for deprecated functionality** " +"([#690](https://github.com/adap/flower/pull/690))" +msgstr "** 过时功能的迁移警告** ([#690](https://github.com/adap/flower/pull/690))" -#: ../../source/ref-changelog.md:853 +#: ../../source/ref-changelog.md:919 msgid "" "Earlier versions of Flower were often migrated to new APIs, while " -"maintaining compatibility with legacy APIs. This release introduces detailed " -"warning messages if usage of deprecated APIs is detected. The new warning " -"messages often provide details on how to migrate to more recent APIs, thus " -"easing the transition from one release to another." +"maintaining compatibility with legacy APIs. This release introduces " +"detailed warning messages if usage of deprecated APIs is detected. The " +"new warning messages often provide details on how to migrate to more " +"recent APIs, thus easing the transition from one release to another." 
msgstr "" -"Flower 早期版本通常会迁移到新的应用程序接口,同时保持与旧版应用程序接口的兼" -"容。如果检测到使用了过时的 API,本版本将引入详细的警告信息。新的警告信息通常" -"会详细说明如何迁移到更新的 API,从而简化从一个版本到另一个版本的过渡。" +"Flower 早期版本通常会迁移到新的应用程序接口,同时保持与旧版应用程序接口的兼容。如果检测到使用了过时的 " +"API,本版本将引入详细的警告信息。新的警告信息通常会详细说明如何迁移到更新的 API,从而简化从一个版本到另一个版本的过渡。" -#: ../../source/ref-changelog.md:855 +#: ../../source/ref-changelog.md:921 msgid "" -"Improved docs and docstrings ([#691](https://github.com/adap/flower/" -"pull/691) [#692](https://github.com/adap/flower/pull/692) [#713](https://" -"github.com/adap/flower/pull/713))" +"Improved docs and docstrings " +"([#691](https://github.com/adap/flower/pull/691) " +"[#692](https://github.com/adap/flower/pull/692) " +"[#713](https://github.com/adap/flower/pull/713))" msgstr "" -"改进了文档和文档说明 ([#691](https://github.com/adap/flower/pull/691) [#692]" -"(https://github.com/adap/flower/pull/692) [#713](https://github.com/adap/" -"flower/pull/713))" +"改进了文档和文档说明 ([#691](https://github.com/adap/flower/pull/691) " +"[#692](https://github.com/adap/flower/pull/692) " +"[#713](https://github.com/adap/flower/pull/713))" -#: ../../source/ref-changelog.md:857 +#: ../../source/ref-changelog.md:923 msgid "MXNet example and documentation" msgstr "MXNet 示例和文档" -#: ../../source/ref-changelog.md:859 +#: ../../source/ref-changelog.md:925 msgid "" "FedBN implementation in example PyTorch: From Centralized To Federated " -"([#696](https://github.com/adap/flower/pull/696) [#702](https://github.com/" -"adap/flower/pull/702) [#705](https://github.com/adap/flower/pull/705))" +"([#696](https://github.com/adap/flower/pull/696) " +"[#702](https://github.com/adap/flower/pull/702) " +"[#705](https://github.com/adap/flower/pull/705))" msgstr "" -"PyTorch 示例中的 FedBN 实现: 从集中到联邦 ([#696](https://github.com/adap/" -"flower/pull/696) [#702](https://github.com/adap/flower/pull/702) [#705]" -"(https://github.com/adap/flower/pull/705))" +"PyTorch 示例中的 FedBN 实现: 从集中到联邦 " +"([#696](https://github.com/adap/flower/pull/696) " +"[#702](https://github.com/adap/flower/pull/702) " +"[#705](https://github.com/adap/flower/pull/705))" -#: ../../source/ref-changelog.md:863 +#: ../../source/ref-changelog.md:929 msgid "" -"**Serialization-agnostic server** ([#721](https://github.com/adap/flower/" -"pull/721))" +"**Serialization-agnostic server** " +"([#721](https://github.com/adap/flower/pull/721))" msgstr "**序列化无关服务器** ([#721](https://github.com/adap/flower/pull/721))" -#: ../../source/ref-changelog.md:865 +#: ../../source/ref-changelog.md:931 msgid "" -"The Flower server is now fully serialization-agnostic. Prior usage of class " -"`Weights` (which represents parameters as deserialized NumPy ndarrays) was " -"replaced by class `Parameters` (e.g., in `Strategy`). `Parameters` objects " -"are fully serialization-agnostic and represents parameters as byte arrays, " -"the `tensor_type` attributes indicates how these byte arrays should be " -"interpreted (e.g., for serialization/deserialization)." +"The Flower server is now fully serialization-agnostic. Prior usage of " +"class `Weights` (which represents parameters as deserialized NumPy " +"ndarrays) was replaced by class `Parameters` (e.g., in `Strategy`). " +"`Parameters` objects are fully serialization-agnostic and represents " +"parameters as byte arrays, the `tensor_type` attributes indicates how " +"these byte arrays should be interpreted (e.g., for " +"serialization/deserialization)." 
msgstr "" -"Flower 服务器现在完全不依赖序列化。之前使用的 `Weights` 类(以反序列化的 " -"NumPy ndarrays 表示参数)已被 `Parameters` 类取代(例如在 `Strategy`中)。参" -"数 \"对象与序列化完全无关,它以字节数组的形式表示参数,\"tensor_type \"属性表" -"示如何解释这些字节数组(例如,用于序列化/反序列化)。" +"Flower 服务器现在完全不依赖序列化。之前使用的 `Weights` 类(以反序列化的 NumPy ndarrays 表示参数)已被 " +"`Parameters` 类取代(例如在 `Strategy`中)。参数 " +"\"对象与序列化完全无关,它以字节数组的形式表示参数,\"tensor_type \"属性表示如何解释这些字节数组(例如,用于序列化/反序列化)。" -#: ../../source/ref-changelog.md:867 +#: ../../source/ref-changelog.md:933 msgid "" -"Built-in strategies implement this approach by handling serialization and " -"deserialization to/from `Weights` internally. Custom/3rd-party Strategy " +"Built-in strategies implement this approach by handling serialization and" +" deserialization to/from `Weights` internally. Custom/3rd-party Strategy " "implementations should update to the slightly changed Strategy method " -"definitions. Strategy authors can consult PR [#721](https://github.com/adap/" -"flower/pull/721) to see how strategies can easily migrate to the new format." +"definitions. Strategy authors can consult PR " +"[#721](https://github.com/adap/flower/pull/721) to see how strategies can" +" easily migrate to the new format." msgstr "" -"内置策略通过在内部处理序列化和反序列化到/从`Weights`来实现这种方法。自定义/第" -"三方策略实现应更新为稍有改动的策略方法定义。策略作者可查阅 PR [#721](https://" -"github.com/adap/flower/pull/721) 以了解如何将策略轻松迁移到新格式。" +"内置策略通过在内部处理序列化和反序列化到/从`Weights`来实现这种方法。自定义/第三方策略实现应更新为稍有改动的策略方法定义。策略作者可查阅" +" PR [#721](https://github.com/adap/flower/pull/721) 以了解如何将策略轻松迁移到新格式。" -#: ../../source/ref-changelog.md:869 +#: ../../source/ref-changelog.md:935 msgid "" -"Deprecated `flwr.server.Server.evaluate`, use `flwr.server.Server." -"evaluate_round` instead ([#717](https://github.com/adap/flower/pull/717))" +"Deprecated `flwr.server.Server.evaluate`, use " +"`flwr.server.Server.evaluate_round` instead " +"([#717](https://github.com/adap/flower/pull/717))" msgstr "" -"已弃用 `flwr.server.Server.evaluate`,改用 `flwr.server.Server." -"evaluate_round`([#717](https://github.com/adap/flower/pull/717)" +"已弃用 `flwr.server.Server.evaluate`,改用 " +"`flwr.server.Server.evaluate_round`([#717](https://github.com/adap/flower/pull/717)" -#: ../../source/ref-changelog.md:871 +#: ../../source/ref-changelog.md:937 msgid "v0.15.0 (2021-03-12)" msgstr "v0.15.0 (2021-03-12)" -#: ../../source/ref-changelog.md:875 +#: ../../source/ref-changelog.md:941 msgid "" -"**Server-side parameter initialization** ([#658](https://github.com/adap/" -"flower/pull/658))" -msgstr "" -"**服务器端参数初始化** ([#658](https://github.com/adap/flower/pull/658))" +"**Server-side parameter initialization** " +"([#658](https://github.com/adap/flower/pull/658))" +msgstr "**服务器端参数初始化** ([#658](https://github.com/adap/flower/pull/658))" -#: ../../source/ref-changelog.md:877 +#: ../../source/ref-changelog.md:943 msgid "" "Model parameters can now be initialized on the server-side. Server-side " "parameter initialization works via a new `Strategy` method called " "`initialize_parameters`." msgstr "" -"现在可以在服务器端初始化模型参数。服务器端参数初始化通过名为 " -"\"initialize_parameters \"的新 \"Strategy \"方法进行。" +"现在可以在服务器端初始化模型参数。服务器端参数初始化通过名为 \"initialize_parameters \"的新 \"Strategy " +"\"方法进行。" -#: ../../source/ref-changelog.md:879 +#: ../../source/ref-changelog.md:945 msgid "" "Built-in strategies support a new constructor argument called " -"`initial_parameters` to set the initial parameters. Built-in strategies will " -"provide these initial parameters to the server on startup and then delete " -"them to free the memory afterwards." 
+"`initial_parameters` to set the initial parameters. Built-in strategies " +"will provide these initial parameters to the server on startup and then " +"delete them to free the memory afterwards." msgstr "" -"内置策略支持名为 \"initial_parameters \"的新构造函数参数,用于设置初始参数。" -"内置策略会在启动时向服务器提供这些初始参数,然后删除它们以释放内存。" +"内置策略支持名为 \"initial_parameters " +"\"的新构造函数参数,用于设置初始参数。内置策略会在启动时向服务器提供这些初始参数,然后删除它们以释放内存。" -#: ../../source/ref-changelog.md:898 +#: ../../source/ref-changelog.md:964 msgid "" "If no initial parameters are provided to the strategy, the server will " "continue to use the current behaviour (namely, it will ask one of the " "connected clients for its parameters and use these as the initial global " "parameters)." -msgstr "" -"如果没有向策略提供初始参数,服务器将继续使用当前行为(即向其中一个已连接的客" -"户端询问参数,并将这些参数用作初始全局参数)。" +msgstr "如果没有向策略提供初始参数,服务器将继续使用当前行为(即向其中一个已连接的客户端询问参数,并将这些参数用作初始全局参数)。" -#: ../../source/ref-changelog.md:900 +#: ../../source/ref-changelog.md:966 msgid "Deprecations" msgstr "停用" -#: ../../source/ref-changelog.md:902 +#: ../../source/ref-changelog.md:968 msgid "" -"Deprecate `flwr.server.strategy.DefaultStrategy` (migrate to `flwr.server." -"strategy.FedAvg`, which is equivalent)" +"Deprecate `flwr.server.strategy.DefaultStrategy` (migrate to " +"`flwr.server.strategy.FedAvg`, which is equivalent)" msgstr "" -"停用 `flwr.server.strategy.DefaultStrategy`(迁移到等价的 `flwr.server." -"strategy.FedAvg`)" +"停用 `flwr.server.strategy.DefaultStrategy`(迁移到等价的 " +"`flwr.server.strategy.FedAvg`)" -#: ../../source/ref-changelog.md:904 +#: ../../source/ref-changelog.md:970 msgid "v0.14.0 (2021-02-18)" msgstr "v0.14.0 (2021-02-18)" -#: ../../source/ref-changelog.md:908 +#: ../../source/ref-changelog.md:974 msgid "" "**Generalized** `Client.fit` **and** `Client.evaluate` **return values** " -"([#610](https://github.com/adap/flower/pull/610) [#572](https://github.com/" -"adap/flower/pull/572) [#633](https://github.com/adap/flower/pull/633))" +"([#610](https://github.com/adap/flower/pull/610) " +"[#572](https://github.com/adap/flower/pull/572) " +"[#633](https://github.com/adap/flower/pull/633))" msgstr "" -"**通用** `Client.fit` **和** `Client.evaluate` **返回值** ([#610](https://" -"github.com/adap/flower/pull/610) [#572](https://github.com/adap/flower/" -"pull/572) [#633](https://github.com/adap/flower/pull/633))" +"**通用** `Client.fit` **和** `Client.evaluate` **返回值** " +"([#610](https://github.com/adap/flower/pull/610) " +"[#572](https://github.com/adap/flower/pull/572) " +"[#633](https://github.com/adap/flower/pull/633))" -#: ../../source/ref-changelog.md:910 +#: ../../source/ref-changelog.md:976 msgid "" -"Clients can now return an additional dictionary mapping `str` keys to values " -"of the following types: `bool`, `bytes`, `float`, `int`, `str`. This means " -"one can return almost arbitrary values from `fit`/`evaluate` and make use of " -"them on the server side!" +"Clients can now return an additional dictionary mapping `str` keys to " +"values of the following types: `bool`, `bytes`, `float`, `int`, `str`. " +"This means one can return almost arbitrary values from `fit`/`evaluate` " +"and make use of them on the server side!" msgstr "" -"客户端现在可以返回一个额外的字典,将 `str` 键映射为以下类型的值: bool`、" -"`bytes`、`float`、`int`、`str`。这意味着我们可以从 `fit`/`evaluate` 返回几乎" -"任意的值,并在服务器端使用它们!" +"客户端现在可以返回一个额外的字典,将 `str` 键映射为以下类型的值: " +"bool`、`bytes`、`float`、`int`、`str`。这意味着我们可以从 `fit`/`evaluate` " +"返回几乎任意的值,并在服务器端使用它们!" 
-#: ../../source/ref-changelog.md:912 +#: ../../source/ref-changelog.md:978 msgid "" -"This improvement also allowed for more consistent return types between `fit` " -"and `evaluate`: `evaluate` should now return a tuple `(float, int, dict)` " -"representing the loss, number of examples, and a dictionary holding " -"arbitrary problem-specific values like accuracy." +"This improvement also allowed for more consistent return types between " +"`fit` and `evaluate`: `evaluate` should now return a tuple `(float, int, " +"dict)` representing the loss, number of examples, and a dictionary " +"holding arbitrary problem-specific values like accuracy." msgstr "" -"这一改进还使 `fit` 和 `evaluate` 之间的返回类型更加一致:`evaluate` 现在应返" -"回一个元组`(float, int, dict)`,代表损失、示例数和一个包含特定问题任意值(如" -"准确度)的字典。" +"这一改进还使 `fit` 和 `evaluate` 之间的返回类型更加一致:`evaluate` 现在应返回一个元组`(float, int, " +"dict)`,代表损失、示例数和一个包含特定问题任意值(如准确度)的字典。" -#: ../../source/ref-changelog.md:914 +#: ../../source/ref-changelog.md:980 msgid "" -"In case you wondered: this feature is compatible with existing projects, the " -"additional dictionary return value is optional. New code should however " -"migrate to the new return types to be compatible with upcoming Flower " -"releases (`fit`: `List[np.ndarray], int, Dict[str, Scalar]`, `evaluate`: " -"`float, int, Dict[str, Scalar]`). See the example below for details." +"In case you wondered: this feature is compatible with existing projects, " +"the additional dictionary return value is optional. New code should " +"however migrate to the new return types to be compatible with upcoming " +"Flower releases (`fit`: `List[np.ndarray], int, Dict[str, Scalar]`, " +"`evaluate`: `float, int, Dict[str, Scalar]`). See the example below for " +"details." msgstr "" -"如果你想知道:此功能与现有项目兼容,额外的字典返回值是可选的。不过,新代码应" -"迁移到新的返回类型,以便与即将发布的 Flower 版本兼容(`fit`: `List[np." -"ndarray], int, Dict[str, Scalar]`,`evaluate`: `float, int, Dict[str, " -"Scalar]`)。详见下面的示例。" +"如果你想知道:此功能与现有项目兼容,额外的字典返回值是可选的。不过,新代码应迁移到新的返回类型,以便与即将发布的 Flower " +"版本兼容(`fit`: `List[np.ndarray], int, Dict[str, Scalar]`,`evaluate`: " +"`float, int, Dict[str, Scalar]`)。详见下面的示例。" -#: ../../source/ref-changelog.md:916 +#: ../../source/ref-changelog.md:982 msgid "" "*Code example:* note the additional dictionary return values in both " "`FlwrClient.fit` and `FlwrClient.evaluate`:" -msgstr "" -"*代码示例:* 注意 `FlwrClient.fit` 和 `FlwrClient.evaluate` 中的附加字典返回" -"值:" +msgstr "*代码示例:* 注意 `FlwrClient.fit` 和 `FlwrClient.evaluate` 中的附加字典返回值:" -#: ../../source/ref-changelog.md:931 +#: ../../source/ref-changelog.md:997 msgid "" -"**Generalized** `config` **argument in** `Client.fit` **and** `Client." -"evaluate` ([#595](https://github.com/adap/flower/pull/595))" +"**Generalized** `config` **argument in** `Client.fit` **and** " +"`Client.evaluate` ([#595](https://github.com/adap/flower/pull/595))" msgstr "" -"**在**`Client.fit` **和**`Client.evaluate`中泛化**`config`参数([#595]" -"(https://github.com/adap/flower/pull/595))" +"**在**`Client.fit` " +"**和**`Client.evaluate`中泛化**`config`参数([#595](https://github.com/adap/flower/pull/595))" -#: ../../source/ref-changelog.md:933 +#: ../../source/ref-changelog.md:999 msgid "" -"The `config` argument used to be of type `Dict[str, str]`, which means that " -"dictionary values were expected to be strings. The new release generalizes " -"this to enable values of the following types: `bool`, `bytes`, `float`, " -"`int`, `str`." +"The `config` argument used to be of type `Dict[str, str]`, which means " +"that dictionary values were expected to be strings. 
The new release " +"generalizes this to enable values of the following types: `bool`, " +"`bytes`, `float`, `int`, `str`." msgstr "" -"`config`参数曾是 \"字典[str, str]\"类型,这意味着字典值应是字符串。新版本将其" -"扩展为以下类型的值: bool`、`bytes`、`float`、`int`、`str`。" +"`config`参数曾是 \"字典[str, str]\"类型,这意味着字典值应是字符串。新版本将其扩展为以下类型的值: " +"bool`、`bytes`、`float`、`int`、`str`。" -#: ../../source/ref-changelog.md:935 +#: ../../source/ref-changelog.md:1001 msgid "" "This means one can now pass almost arbitrary values to `fit`/`evaluate` " -"using the `config` dictionary. Yay, no more `str(epochs)` on the server-side " -"and `int(config[\"epochs\"])` on the client side!" +"using the `config` dictionary. Yay, no more `str(epochs)` on the server-" +"side and `int(config[\"epochs\"])` on the client side!" msgstr "" -"这意味着现在可以使用 `config` 字典向 `fit`/`evaluate` 传递几乎任意的值。耶," -"服务器端不再需要 `str(epochs)`,客户端不再需要 `int(config[\"epochs\"])`!" +"这意味着现在可以使用 `config` 字典向 `fit`/`evaluate` 传递几乎任意的值。耶,服务器端不再需要 " +"`str(epochs)`,客户端不再需要 `int(config[\"epochs\"])`!" -#: ../../source/ref-changelog.md:937 +#: ../../source/ref-changelog.md:1003 msgid "" "*Code example:* note that the `config` dictionary now contains non-`str` " "values in both `Client.fit` and `Client.evaluate`:" -msgstr "" -"*代码示例:* 注意 `config` 字典现在在 `Client.fit` 和 `Client.evaluate` 中都" -"包含非 `str` 值:" +msgstr "*代码示例:* 注意 `config` 字典现在在 `Client.fit` 和 `Client.evaluate` 中都包含非 `str` 值:" -#: ../../source/ref-changelog.md:954 +#: ../../source/ref-changelog.md:1020 msgid "v0.13.0 (2021-01-08)" msgstr "v0.13.0 (2021-01-08)" -#: ../../source/ref-changelog.md:958 +#: ../../source/ref-changelog.md:1024 msgid "" -"New example: PyTorch From Centralized To Federated ([#549](https://github." -"com/adap/flower/pull/549))" -msgstr "" -"新示例: PyTorch 从集中到联邦 ([#549](https://github.com/adap/flower/" -"pull/549))" +"New example: PyTorch From Centralized To Federated " +"([#549](https://github.com/adap/flower/pull/549))" +msgstr "新示例: PyTorch 从集中到联邦 ([#549](https://github.com/adap/flower/pull/549))" -#: ../../source/ref-changelog.md:959 +#: ../../source/ref-changelog.md:1025 msgid "Improved documentation" msgstr "改进文档" -#: ../../source/ref-changelog.md:960 -msgid "" -"New documentation theme ([#551](https://github.com/adap/flower/pull/551))" +#: ../../source/ref-changelog.md:1026 +msgid "New documentation theme ([#551](https://github.com/adap/flower/pull/551))" msgstr "新文档主题 ([#551](https://github.com/adap/flower/pull/551))" -#: ../../source/ref-changelog.md:961 +#: ../../source/ref-changelog.md:1027 msgid "New API reference ([#554](https://github.com/adap/flower/pull/554))" msgstr "新的 API 参考 ([#554](https://github.com/adap/flower/pull/554))" -#: ../../source/ref-changelog.md:962 +#: ../../source/ref-changelog.md:1028 msgid "" -"Updated examples documentation ([#549](https://github.com/adap/flower/" -"pull/549))" +"Updated examples documentation " +"([#549](https://github.com/adap/flower/pull/549))" msgstr "更新了示例文档 ([#549](https://github.com/adap/flower/pull/549))" -#: ../../source/ref-changelog.md:963 +#: ../../source/ref-changelog.md:1029 msgid "" -"Removed obsolete documentation ([#548](https://github.com/adap/flower/" -"pull/548))" +"Removed obsolete documentation " +"([#548](https://github.com/adap/flower/pull/548))" msgstr "删除了过时的文档 ([#548](https://github.com/adap/flower/pull/548))" -#: ../../source/ref-changelog.md:965 +#: ../../source/ref-changelog.md:1031 msgid "Bugfix:" msgstr "错误修正:" -#: ../../source/ref-changelog.md:967 +#: ../../source/ref-changelog.md:1033 msgid "" -"`Server.fit` does not disconnect 
clients when finished, disconnecting the " -"clients is now handled in `flwr.server.start_server` ([#553](https://github." -"com/adap/flower/pull/553) [#540](https://github.com/adap/flower/issues/540))." +"`Server.fit` does not disconnect clients when finished, disconnecting the" +" clients is now handled in `flwr.server.start_server` " +"([#553](https://github.com/adap/flower/pull/553) " +"[#540](https://github.com/adap/flower/issues/540))." msgstr "" -"Server.fit \"完成后不会断开客户端连接,现在断开客户端连接是在 \"flwr.server." -"start_server \"中处理的([#553](https://github.com/adap/flower/pull/553) " +"Server.fit \"完成后不会断开客户端连接,现在断开客户端连接是在 \"flwr.server.start_server " +"\"中处理的([#553](https://github.com/adap/flower/pull/553) " "[#540](https://github.com/adap/flower/issues/540))。" -#: ../../source/ref-changelog.md:969 +#: ../../source/ref-changelog.md:1035 msgid "v0.12.0 (2020-12-07)" msgstr "v0.12.0 (2020-12-07)" -#: ../../source/ref-changelog.md:971 ../../source/ref-changelog.md:987 +#: ../../source/ref-changelog.md:1037 ../../source/ref-changelog.md:1053 msgid "Important changes:" msgstr "重要变更:" -#: ../../source/ref-changelog.md:973 +#: ../../source/ref-changelog.md:1039 msgid "" -"Added an example for embedded devices ([#507](https://github.com/adap/flower/" -"pull/507))" +"Added an example for embedded devices " +"([#507](https://github.com/adap/flower/pull/507))" msgstr "添加了嵌入式设备示例 ([#507](https://github.com/adap/flower/pull/507))" -#: ../../source/ref-changelog.md:974 +#: ../../source/ref-changelog.md:1040 msgid "" -"Added a new NumPyClient (in addition to the existing KerasClient) ([#504]" -"(https://github.com/adap/flower/pull/504) [#508](https://github.com/adap/" -"flower/pull/508))" +"Added a new NumPyClient (in addition to the existing KerasClient) " +"([#504](https://github.com/adap/flower/pull/504) " +"[#508](https://github.com/adap/flower/pull/508))" msgstr "" -"添加了一个新的 NumPyClient(除现有的 KerasClient 之外)([#504](https://" -"github.com/adap/flower/pull/504) [#508](https://github.com/adap/flower/" -"pull/508)" +"添加了一个新的 NumPyClient(除现有的 KerasClient " +"之外)([#504](https://github.com/adap/flower/pull/504) " +"[#508](https://github.com/adap/flower/pull/508)" -#: ../../source/ref-changelog.md:975 +#: ../../source/ref-changelog.md:1041 msgid "" -"Deprecated `flwr_example` package and started to migrate examples into the " -"top-level `examples` directory ([#494](https://github.com/adap/flower/" -"pull/494) [#512](https://github.com/adap/flower/pull/512))" +"Deprecated `flwr_example` package and started to migrate examples into " +"the top-level `examples` directory " +"([#494](https://github.com/adap/flower/pull/494) " +"[#512](https://github.com/adap/flower/pull/512))" msgstr "" -"弃用 `flwr_example` 软件包,并开始将示例迁移到顶层的 `examples` 目录 ([#494]" -"(https://github.com/adap/flower/pull/494) [#512](https://github.com/adap/" -"flower/pull/512))" +"弃用 `flwr_example` 软件包,并开始将示例迁移到顶层的 `examples` 目录 " +"([#494](https://github.com/adap/flower/pull/494) " +"[#512](https://github.com/adap/flower/pull/512))" -#: ../../source/ref-changelog.md:977 +#: ../../source/ref-changelog.md:1043 msgid "v0.11.0 (2020-11-30)" msgstr "v0.11.0 (2020-11-30)" -#: ../../source/ref-changelog.md:979 +#: ../../source/ref-changelog.md:1045 msgid "Incompatible changes:" msgstr "不兼容的更改:" -#: ../../source/ref-changelog.md:981 +#: ../../source/ref-changelog.md:1047 msgid "" -"Renamed strategy methods ([#486](https://github.com/adap/flower/pull/486)) " -"to unify the naming of Flower's public APIs. 
Other public methods/functions " -"(e.g., every method in `Client`, but also `Strategy.evaluate`) do not use " -"the `on_` prefix, which is why we're removing it from the four methods in " -"Strategy. To migrate rename the following `Strategy` methods accordingly:" +"Renamed strategy methods " +"([#486](https://github.com/adap/flower/pull/486)) to unify the naming of " +"Flower's public APIs. Other public methods/functions (e.g., every method " +"in `Client`, but also `Strategy.evaluate`) do not use the `on_` prefix, " +"which is why we're removing it from the four methods in Strategy. To " +"migrate rename the following `Strategy` methods accordingly:" msgstr "" -"重命名了策略方法([#486](https://github.com/adap/flower/pull/486)),以统一 " -"Flower公共 API 的命名。其他公共方法/函数(例如 `Client` 中的每个方法,以及 " -"`Strategy.evaluate`)不使用 `on_` 前缀,这就是我们从 Strategy 中的四个方法中" -"移除它的原因。迁移时,请相应地重命名以下 `Strategy` 方法:" +"重命名了策略方法([#486](https://github.com/adap/flower/pull/486)),以统一 Flower公共 " +"API 的命名。其他公共方法/函数(例如 `Client` 中的每个方法,以及 `Strategy.evaluate`)不使用 `on_` " +"前缀,这就是我们从 Strategy 中的四个方法中移除它的原因。迁移时,请相应地重命名以下 `Strategy` 方法:" -#: ../../source/ref-changelog.md:982 +#: ../../source/ref-changelog.md:1048 msgid "`on_configure_evaluate` => `configure_evaluate`" msgstr "`on_configure_evaluate` => `configure_evaluate`" -#: ../../source/ref-changelog.md:983 +#: ../../source/ref-changelog.md:1049 msgid "`on_aggregate_evaluate` => `aggregate_evaluate`" msgstr "`on_aggregate_evaluate` => `aggregate_evaluate`" -#: ../../source/ref-changelog.md:984 +#: ../../source/ref-changelog.md:1050 msgid "`on_configure_fit` => `configure_fit`" msgstr "`on_configure_fit` => `configure_fit`" -#: ../../source/ref-changelog.md:985 +#: ../../source/ref-changelog.md:1051 msgid "`on_aggregate_fit` => `aggregate_fit`" msgstr "`on_aggregate_fit` => `aggregate_fit`" -#: ../../source/ref-changelog.md:989 +#: ../../source/ref-changelog.md:1055 msgid "" -"Deprecated `DefaultStrategy` ([#479](https://github.com/adap/flower/" -"pull/479)). To migrate use `FedAvg` instead." +"Deprecated `DefaultStrategy` " +"([#479](https://github.com/adap/flower/pull/479)). To migrate use " +"`FedAvg` instead." msgstr "" -"已废弃的 `DefaultStrategy` ([#479](https://github.com/adap/flower/" -"pull/479)) 。迁移时请使用 `FedAvg`。" +"已废弃的 `DefaultStrategy` ([#479](https://github.com/adap/flower/pull/479)) " +"。迁移时请使用 `FedAvg`。" -#: ../../source/ref-changelog.md:990 +#: ../../source/ref-changelog.md:1056 msgid "" -"Simplified examples and baselines ([#484](https://github.com/adap/flower/" -"pull/484))." -msgstr "" -"简化示例和baselines([#484](https://github.com/adap/flower/pull/484))。" +"Simplified examples and baselines " +"([#484](https://github.com/adap/flower/pull/484))." +msgstr "简化示例和baselines([#484](https://github.com/adap/flower/pull/484))。" -#: ../../source/ref-changelog.md:991 +#: ../../source/ref-changelog.md:1057 msgid "" -"Removed presently unused `on_conclude_round` from strategy interface ([#483]" -"(https://github.com/adap/flower/pull/483))." +"Removed presently unused `on_conclude_round` from strategy interface " +"([#483](https://github.com/adap/flower/pull/483))." msgstr "" -"删除了策略界面中目前未使用的 \"on_conclude_round\"([#483](https://github." -"com/adap/flower/pull/483))。" +"删除了策略界面中目前未使用的 " +"\"on_conclude_round\"([#483](https://github.com/adap/flower/pull/483))。" -#: ../../source/ref-changelog.md:992 +#: ../../source/ref-changelog.md:1058 msgid "" -"Set minimal Python version to 3.6.1 instead of 3.6.9 ([#471](https://github." -"com/adap/flower/pull/471))." 
+"Set minimal Python version to 3.6.1 instead of 3.6.9 " +"([#471](https://github.com/adap/flower/pull/471))." msgstr "" -"将最小 Python 版本设为 3.6.1,而不是 3.6.9 ([#471](https://github.com/adap/" -"flower/pull/471))." +"将最小 Python 版本设为 3.6.1,而不是 3.6.9 " +"([#471](https://github.com/adap/flower/pull/471))." -#: ../../source/ref-changelog.md:993 +#: ../../source/ref-changelog.md:1059 msgid "" -"Improved `Strategy` docstrings ([#470](https://github.com/adap/flower/" -"pull/470))." +"Improved `Strategy` docstrings " +"([#470](https://github.com/adap/flower/pull/470))." msgstr "" -"改进了 `Strategy` docstrings([#470](https://github.com/adap/flower/" -"pull/470))。" +"改进了 `Strategy` " +"docstrings([#470](https://github.com/adap/flower/pull/470))。" #: ../../source/ref-example-projects.rst:2 msgid "Example projects" @@ -19308,15 +20765,15 @@ msgstr "项目实例" #: ../../source/ref-example-projects.rst:4 msgid "" -"Flower comes with a number of usage examples. The examples demonstrate how " -"Flower can be used to federate different kinds of existing machine learning " -"pipelines, usually leveraging popular machine learning frameworks such as " -"`PyTorch `_ or `TensorFlow `_." +"Flower comes with a number of usage examples. The examples demonstrate " +"how Flower can be used to federate different kinds of existing machine " +"learning pipelines, usually leveraging popular machine learning " +"frameworks such as `PyTorch `_ or `TensorFlow " +"`_." msgstr "" -"Flower 附带了许多使用示例。这些示例演示了如何使用 Flower 联邦不同类型的现有机" -"器学习形式,通常是利用流行的机器学习框架,如 `PyTorch `_ 或 `TensorFlow `_。" +"Flower 附带了许多使用示例。这些示例演示了如何使用 Flower 联邦不同类型的现有机器学习形式,通常是利用流行的机器学习框架,如 " +"`PyTorch `_ 或 `TensorFlow " +"`_。" #: ../../source/ref-example-projects.rst:10 #, fuzzy @@ -19327,32 +20784,30 @@ msgstr "以下示例可作为独立项目使用。" #: ../../source/ref-example-projects.rst:14 msgid "" -"The TensorFlow/Keras quickstart example shows CIFAR-10 image classification " -"with MobileNetV2:" -msgstr "" -"TensorFlow/Keras 快速入门示例展示了使用 MobileNetV2 进行的 CIFAR-10 图像分" -"类:" +"The TensorFlow/Keras quickstart example shows CIFAR-10 image " +"classification with MobileNetV2:" +msgstr "TensorFlow/Keras 快速入门示例展示了使用 MobileNetV2 进行的 CIFAR-10 图像分类:" #: ../../source/ref-example-projects.rst:17 msgid "" -"`Quickstart TensorFlow (Code) `_" +"`Quickstart TensorFlow (Code) " +"`_" msgstr "" -"`TensorFlow快速入门 (代码) `_" +"`TensorFlow快速入门 (代码) `_" #: ../../source/ref-example-projects.rst:18 #, fuzzy -msgid "" -":doc:`Quickstart TensorFlow (Tutorial) `" +msgid ":doc:`Quickstart TensorFlow (Tutorial) `" msgstr "" "`TensorFlow快速入门 (教程) `_" #: ../../source/ref-example-projects.rst:19 msgid "" -"`Quickstart TensorFlow (Blog Post) `_" +"`Quickstart TensorFlow (Blog Post) `_" msgstr "" "`TensorFlow快速入门 (博客) `_" @@ -19364,25 +20819,24 @@ msgstr "PyTorch快速入门" #: ../../source/ref-example-projects.rst:25 msgid "" -"The PyTorch quickstart example shows CIFAR-10 image classification with a " -"simple Convolutional Neural Network:" -msgstr "" -"PyTorch 快速入门范例展示了使用简单卷积神经网络进行 CIFAR-10 图像分类的情况:" +"The PyTorch quickstart example shows CIFAR-10 image classification with a" +" simple Convolutional Neural Network:" +msgstr "PyTorch 快速入门范例展示了使用简单卷积神经网络进行 CIFAR-10 图像分类的情况:" #: ../../source/ref-example-projects.rst:28 msgid "" -"`Quickstart PyTorch (Code) `_" +"`Quickstart PyTorch (Code) " +"`_" msgstr "" -"`PyTorch快速入门 (代码) `_" +"`PyTorch快速入门 (代码) `_" #: ../../source/ref-example-projects.rst:29 #, fuzzy msgid ":doc:`Quickstart PyTorch (Tutorial) `" msgstr "" -"`PyTorch快速入门 (教程) `_" +"`PyTorch快速入门 (教程) `_" #: 
../../source/ref-example-projects.rst:33 msgid "PyTorch: From Centralized To Federated" @@ -19396,11 +20850,12 @@ msgstr "本例展示了如何使用 Flower 联邦化一个普通的 PyTorch 项 #: ../../source/ref-example-projects.rst:37 msgid "" -"`PyTorch: From Centralized To Federated (Code) `_" +"`PyTorch: From Centralized To Federated (Code) " +"`_" msgstr "" -"PyTorch: 从集中式到联邦式(代码) `_" +"PyTorch: 从集中式到联邦式(代码) `_" #: ../../source/ref-example-projects.rst:38 #, fuzzy @@ -19408,8 +20863,8 @@ msgid "" ":doc:`PyTorch: From Centralized To Federated (Tutorial) `" msgstr "" -"PyTorch: 从集中式到联邦式(教程) `_" +"PyTorch: 从集中式到联邦式(教程) `_" #: ../../source/ref-example-projects.rst:42 msgid "Federated Learning on Raspberry Pi and Nvidia Jetson" @@ -19419,27 +20874,23 @@ msgstr "树莓派和 Nvidia Jetson 上的联邦学习" msgid "" "This example shows how Flower can be used to build a federated learning " "system that run across Raspberry Pi and Nvidia Jetson:" -msgstr "" -"本示例展示了如何利用 Flower 建立一个跨 Raspberry Pi 和 Nvidia Jetson 运行的联" -"邦学习系统:" +msgstr "本示例展示了如何利用 Flower 建立一个跨 Raspberry Pi 和 Nvidia Jetson 运行的联邦学习系统:" #: ../../source/ref-example-projects.rst:46 msgid "" -"`Federated Learning on Raspberry Pi and Nvidia Jetson (Code) `_" +"`Federated Learning on Raspberry Pi and Nvidia Jetson (Code) " +"`_" msgstr "" -"Raspberry Pi 和 Nvidia Jetson 上的联邦学习(代码) `_" +"Raspberry Pi 和 Nvidia Jetson 上的联邦学习(代码) " +"`_" #: ../../source/ref-example-projects.rst:47 msgid "" -"`Federated Learning on Raspberry Pi and Nvidia Jetson (Blog Post) `_" +"`Federated Learning on Raspberry Pi and Nvidia Jetson (Blog Post) " +"`_" msgstr "" -"Raspberry Pi和 Nvidia Jetson 上的联邦学习(博客) `_" +"Raspberry Pi和 Nvidia Jetson 上的联邦学习(博客) " +"`_" #: ../../source/ref-faq.rst:4 msgid "" @@ -19450,32 +20901,33 @@ msgstr "本页收集了有关 \"Flower 联邦学习 \"常见问题的答案。" #: ../../source/ref-faq.rst #, fuzzy msgid ":fa:`eye,mr-1` Can Flower run on Jupyter Notebooks / Google Colab?" -msgstr "" -":fa:`eye,mr-1` Flower 可以在 Juptyter Notebooks / Google Colab 上运行吗?" +msgstr ":fa:`eye,mr-1` Flower 可以在 Juptyter Notebooks / Google Colab 上运行吗?" #: ../../source/ref-faq.rst:8 msgid "" -"Yes, it can! Flower even comes with a few under-the-hood optimizations to " -"make it work even better on Colab. Here's a quickstart example:" -msgstr "" -"是的,它可以!Flower 甚至还进行了一些底层优化,使其在 Colab 上运行得更好。下" -"面是一个快速启动示例:" +"Yes, it can! Flower even comes with a few under-the-hood optimizations to" +" make it work even better on Colab. Here's a quickstart example:" +msgstr "是的,它可以!Flower 甚至还进行了一些底层优化,使其在 Colab 上运行得更好。下面是一个快速启动示例:" #: ../../source/ref-faq.rst:10 msgid "" -"`Flower simulation PyTorch `_" +"`Flower simulation PyTorch " +"`_" msgstr "" -"`Flower 模拟 PyTorch `_" +"`Flower 模拟 PyTorch " +"`_" #: ../../source/ref-faq.rst:11 msgid "" -"`Flower simulation TensorFlow/Keras `_" +"`Flower simulation TensorFlow/Keras " +"`_" msgstr "" -"`Flower模拟TensorFlow/Keras `_" +"`Flower模拟TensorFlow/Keras " +"`_" #: ../../source/ref-faq.rst msgid ":fa:`eye,mr-1` How can I run Federated Learning on a Raspberry Pi?" @@ -19484,37 +20936,33 @@ msgstr ":fa:`eye,mr-1` 如何在 Raspberry Pi 上运行联邦学习?" #: ../../source/ref-faq.rst:15 msgid "" "Find the `blog post about federated learning on embedded device here " -"`_ " -"and the corresponding `GitHub code example `_." +"`_" +" and the corresponding `GitHub code example " +"`_." msgstr "" -"请点击此处查看有关嵌入式设备联邦学习的 \"博文\"`_和" -"相应的 \"GitHub 代码示例\"`_。" +"请点击此处查看有关嵌入式设备联邦学习的 " +"\"博文\"`_和相应的" +" \"GitHub 代码示例\"`_。" #: ../../source/ref-faq.rst -msgid "" -":fa:`eye,mr-1` Does Flower support federated learning on Android devices?" 
+msgid ":fa:`eye,mr-1` Does Flower support federated learning on Android devices?" msgstr ":fa:`eye,mr-1` Flower 是否支持安卓设备上的联邦学习?" #: ../../source/ref-faq.rst:19 msgid "" -"Yes, it does. Please take a look at our `blog post `_ or " -"check out the code examples:" +"Yes, it does. Please take a look at our `blog post " +"`_ or check out the code examples:" msgstr "" -"是的,确实如此。请查看我们的 \"博客文章 `_\" 或查看代码示例:" +"是的,确实如此。请查看我们的 \"博客文章 `_\" 或查看代码示例:" #: ../../source/ref-faq.rst:21 msgid "" -"`Android Kotlin example `_" -msgstr "" -"`Android Kotlin 示例 `_" +"`Android Kotlin example `_" +msgstr "`Android Kotlin 示例 `_" #: ../../source/ref-faq.rst:22 msgid "`Android Java example `_" @@ -19532,46 +20980,46 @@ msgstr "当然可以。有关在区块链环境中使用 Flower 的可用示例 #: ../../source/ref-faq.rst:28 msgid "" -"`Flower meets Nevermined GitHub Repository `_." +"`Flower meets Nevermined GitHub Repository `_." msgstr "" -"`Flower meets Nevermined GitHub Repository `_." +"`Flower meets Nevermined GitHub Repository `_." #: ../../source/ref-faq.rst:29 msgid "" -"`Flower meets Nevermined YouTube video `_." +"`Flower meets Nevermined YouTube video " +"`_." msgstr "" -"`Flower meets Nevermined YouTube 视频 `_." +"`Flower meets Nevermined YouTube 视频 " +"`_." #: ../../source/ref-faq.rst:30 #, fuzzy msgid "" -"`Flower meets KOSMoS `_." +"`Flower meets KOSMoS `_." msgstr "" -"`Flower meets KOSMoS `_." +"`Flower meets KOSMoS `_." #: ../../source/ref-faq.rst:31 msgid "" "`Flower meets Talan blog post `_ ." +"learning-same-mask-different-faces-imen-" +"ayari/?trackingId=971oIlxLQ9%2BA9RB0IQ73XQ%3D%3D>`_ ." msgstr "" "`Flower meets Talan博文 `_ 。" +"same-mask-different-faces-imen-" +"ayari/?trackingId=971oIlxLQ9%2BA9RB0IQ73XQ%3D%3D>`_ 。" #: ../../source/ref-faq.rst:32 msgid "" -"`Flower meets Talan GitHub Repository `_ ." +"`Flower meets Talan GitHub Repository " +"`_ ." msgstr "" -"`Flower meets Talan GitHub Repository `_ ." +"`Flower meets Talan GitHub Repository " +"`_ ." #: ../../source/ref-telemetry.md:1 msgid "Telemetry" @@ -19579,64 +21027,60 @@ msgstr "遥测功能" #: ../../source/ref-telemetry.md:3 msgid "" -"The Flower open-source project collects **anonymous** usage metrics to make " -"well-informed decisions to improve Flower. Doing this enables the Flower " -"team to understand how Flower is used and what challenges users might face." +"The Flower open-source project collects **anonymous** usage metrics to " +"make well-informed decisions to improve Flower. Doing this enables the " +"Flower team to understand how Flower is used and what challenges users " +"might face." msgstr "" -"Flower 开源项目收集**匿名**使用指标,以便在充分知情的情况下做出改进 Flower 的" -"决定。这样做能让 Flower 团队了解 Flower 的使用情况以及用户可能面临的挑战。" +"Flower 开源项目收集**匿名**使用指标,以便在充分知情的情况下做出改进 Flower 的决定。这样做能让 Flower 团队了解 " +"Flower 的使用情况以及用户可能面临的挑战。" #: ../../source/ref-telemetry.md:5 msgid "" -"**Flower is a friendly framework for collaborative AI and data science.** " -"Staying true to this statement, Flower makes it easy to disable telemetry " -"for users that do not want to share anonymous usage metrics." -msgstr "" -"**Flower 是一个用于协作式人工智能和数据科学的友好框架。** Flower 遵循这一声" -"明,让不想分享匿名使用指标的用户可以轻松禁用遥测技术。" +"**Flower is a friendly framework for collaborative AI and data science.**" +" Staying true to this statement, Flower makes it easy to disable " +"telemetry for users that do not want to share anonymous usage metrics." 
+msgstr "**Flower 是一个用于协作式人工智能和数据科学的友好框架。** Flower 遵循这一声明,让不想分享匿名使用指标的用户可以轻松禁用遥测技术。" #: ../../source/ref-telemetry.md:7 msgid "Principles" msgstr "原则" #: ../../source/ref-telemetry.md:9 -msgid "" -"We follow strong principles guarding anonymous usage metrics collection:" +msgid "We follow strong principles guarding anonymous usage metrics collection:" msgstr "我们遵循严格的匿名使用指标收集原则:" #: ../../source/ref-telemetry.md:11 msgid "" -"**Optional:** You will always be able to disable telemetry; read on to learn " -"“[How to opt-out](#how-to-opt-out)”." -msgstr "" -"**可选:** 您始终可以禁用遥测功能;请继续阅读\"[如何退出](#how-to-opt-" -"out)\"。" +"**Optional:** You will always be able to disable telemetry; read on to " +"learn “[How to opt-out](#how-to-opt-out)”." +msgstr "**可选:** 您始终可以禁用遥测功能;请继续阅读\"[如何退出](#how-to-opt-out)\"。" #: ../../source/ref-telemetry.md:12 msgid "" -"**Anonymous:** The reported usage metrics are anonymous and do not contain " -"any personally identifiable information (PII). See “[Collected metrics]" -"(#collected-metrics)” to understand what metrics are being reported." +"**Anonymous:** The reported usage metrics are anonymous and do not " +"contain any personally identifiable information (PII). See “[Collected " +"metrics](#collected-metrics)” to understand what metrics are being " +"reported." msgstr "" -"**匿名:** 报告的使用指标是匿名的,不包含任何个人身份信息 (PII)。请参阅\"[收" -"集的指标](#collected-metrics) \"了解报告的指标。" +"**匿名:** 报告的使用指标是匿名的,不包含任何个人身份信息 (PII)。请参阅\"[收集的指标](#collected-metrics) " +"\"了解报告的指标。" #: ../../source/ref-telemetry.md:13 msgid "" "**Transparent:** You can easily inspect what anonymous metrics are being " -"reported; see the section “[How to inspect what is being reported](#how-to-" -"inspect-what-is-being-reported)”" +"reported; see the section “[How to inspect what is being reported](#how-" +"to-inspect-what-is-being-reported)”" msgstr "" -"**透明:** 您可以轻松查看正在报告的匿名指标;请参阅\"[如何查看正在报告的指标]" -"(#how-to-inspect-what-is-being-reported)\"部分" +"**透明:** 您可以轻松查看正在报告的匿名指标;请参阅\"[如何查看正在报告的指标](#how-to-inspect-what-is-" +"being-reported)\"部分" #: ../../source/ref-telemetry.md:14 msgid "" -"**Open for feedback:** You can always reach out to us if you have feedback; " -"see the section “[How to contact us](#how-to-contact-us)” for details." -msgstr "" -"**欢迎反馈:** 如果您有反馈意见,可以随时联系我们;详情请参见\"[如何联系我们]" -"(#how-to-contact-us) \"部分。" +"**Open for feedback:** You can always reach out to us if you have " +"feedback; see the section “[How to contact us](#how-to-contact-us)” for " +"details." +msgstr "**欢迎反馈:** 如果您有反馈意见,可以随时联系我们;详情请参见\"[如何联系我们](#how-to-contact-us) \"部分。" #: ../../source/ref-telemetry.md:16 msgid "How to opt-out" @@ -19650,17 +21094,17 @@ msgid "" "client, simply do so by prepending your command as in:" msgstr "" "Flower 启动时,会检查环境变量 `FLWR_TELEMETRY_ENABLED` 是否存在。通过设置 " -"`FLWR_TELEMETRY_ENABLED=0` 可以轻松禁用遥测功能。假设你启动的是 Flower 服务器" -"或客户端,只需在命令前添加以下内容即可:" +"`FLWR_TELEMETRY_ENABLED=0` 可以轻松禁用遥测功能。假设你启动的是 Flower " +"服务器或客户端,只需在命令前添加以下内容即可:" #: ../../source/ref-telemetry.md:24 msgid "" -"Alternatively, you can export `FLWR_TELEMETRY_ENABLED=0` in, for example, `." -"bashrc` (or whatever configuration file applies to your environment) to " -"disable Flower telemetry permanently." +"Alternatively, you can export `FLWR_TELEMETRY_ENABLED=0` in, for example," +" `.bashrc` (or whatever configuration file applies to your environment) " +"to disable Flower telemetry permanently." 
msgstr "" -"或者,你也可以在 `.bashrc`(或任何适用于你的环境的配置文件)中导出 " -"`FLWR_TELEMETRY_ENABLED=0` 来永久禁用 Flower telemetry。" +"或者,你也可以在 `.bashrc`(或任何适用于你的环境的配置文件)中导出 `FLWR_TELEMETRY_ENABLED=0` 来永久禁用 " +"Flower telemetry。" #: ../../source/ref-telemetry.md:26 msgid "Collected metrics" @@ -19672,21 +21116,17 @@ msgstr "Flower 遥测技术收集以下指标:" #: ../../source/ref-telemetry.md:30 msgid "" -"**Flower version.** Understand which versions of Flower are currently being " -"used. This helps us to decide whether we should invest effort into releasing " -"a patch version for an older version of Flower or instead use the bandwidth " -"to build new features." -msgstr "" -"**了解目前使用的 Flower 版本。这有助于我们决定是否应该投入精力为旧版本的 " -"Flower 发布补丁版本,还是利用带宽来构建新功能。" +"**Flower version.** Understand which versions of Flower are currently " +"being used. This helps us to decide whether we should invest effort into " +"releasing a patch version for an older version of Flower or instead use " +"the bandwidth to build new features." +msgstr "**了解目前使用的 Flower 版本。这有助于我们决定是否应该投入精力为旧版本的 Flower 发布补丁版本,还是利用带宽来构建新功能。" #: ../../source/ref-telemetry.md:32 msgid "" "**Operating system.** Enables us to answer questions such as: *Should we " "create more guides for Linux, macOS, or Windows?*" -msgstr "" -"**操作系统**使我们能够回答以下问题: *我们应该为 Linux、macOS 还是 Windows 创" -"建更多指南?*" +msgstr "**操作系统**使我们能够回答以下问题: *我们应该为 Linux、macOS 还是 Windows 创建更多指南?*" #: ../../source/ref-telemetry.md:34 msgid "" @@ -19694,82 +21134,73 @@ msgid "" "decide whether we should invest effort into supporting old versions of " "Python or stop supporting them and start taking advantage of new Python " "features." -msgstr "" -"**例如,了解 Python 版本有助于我们决定是否应该投入精力支持旧版本的 Python,还" -"是停止支持这些版本并开始利用新的 Python 功能。" +msgstr "**例如,了解 Python 版本有助于我们决定是否应该投入精力支持旧版本的 Python,还是停止支持这些版本并开始利用新的 Python 功能。" #: ../../source/ref-telemetry.md:36 msgid "" -"**Hardware properties.** Understanding the hardware environment that Flower " -"is being used in helps to decide whether we should, for example, put more " -"effort into supporting low-resource environments." -msgstr "" -"**硬件属性** 了解 Flower 的硬件使用环境,有助于决定我们是否应在支持低资源环境" -"等方面投入更多精力。" +"**Hardware properties.** Understanding the hardware environment that " +"Flower is being used in helps to decide whether we should, for example, " +"put more effort into supporting low-resource environments." +msgstr "**硬件属性** 了解 Flower 的硬件使用环境,有助于决定我们是否应在支持低资源环境等方面投入更多精力。" #: ../../source/ref-telemetry.md:38 msgid "" -"**Execution mode.** Knowing what execution mode Flower starts in enables us " -"to understand how heavily certain features are being used and better " +"**Execution mode.** Knowing what execution mode Flower starts in enables " +"us to understand how heavily certain features are being used and better " "prioritize based on that." -msgstr "" -"** 执行模式** 了解 Flower 的启动执行模式,能让我们了解某些功能的使用率,并据" -"此更好地确定优先级。" +msgstr "** 执行模式** 了解 Flower 的启动执行模式,能让我们了解某些功能的使用率,并据此更好地确定优先级。" #: ../../source/ref-telemetry.md:40 msgid "" "**Cluster.** Flower telemetry assigns a random in-memory cluster ID each " "time a Flower workload starts. This allows us to understand which device " -"types not only start Flower workloads but also successfully complete them." +"types not only start Flower workloads but also successfully complete " +"them." 
msgstr "" -"**每次 Flower 工作负载启动时,Flower 遥测都会随机分配一个内存集群 ID。这样," -"我们就能了解哪些设备类型不仅启动了 Flower 工作负载,而且还成功完成了它们。" +"**每次 Flower 工作负载启动时,Flower 遥测都会随机分配一个内存集群 ID。这样,我们就能了解哪些设备类型不仅启动了 Flower " +"工作负载,而且还成功完成了它们。" #: ../../source/ref-telemetry.md:42 msgid "" -"**Source.** Flower telemetry tries to store a random source ID in `~/.flwr/" -"source` the first time a telemetry event is generated. The source ID is " -"important to identify whether an issue is recurring or whether an issue is " -"triggered by multiple clusters running concurrently (which often happens in " -"simulation). For example, if a device runs multiple workloads at the same " -"time, and this results in an issue, then, in order to reproduce the issue, " -"multiple workloads must be started at the same time." +"**Source.** Flower telemetry tries to store a random source ID in " +"`~/.flwr/source` the first time a telemetry event is generated. The " +"source ID is important to identify whether an issue is recurring or " +"whether an issue is triggered by multiple clusters running concurrently " +"(which often happens in simulation). For example, if a device runs " +"multiple workloads at the same time, and this results in an issue, then, " +"in order to reproduce the issue, multiple workloads must be started at " +"the same time." msgstr "" -"**Source.** Flower 遥测会在第一次生成遥测事件时,尝试在 `~/.flwr/source` 中存" -"储一个随机源 ID。源 ID 对于识别问题是否反复出现或问题是否由多个集群同时运行触" -"发(这在模拟中经常发生)非常重要。例如,如果设备同时运行多个工作负载并导致问" -"题,那么为了重现问题,必须同时启动多个工作负载。" +"**Source.** Flower 遥测会在第一次生成遥测事件时,尝试在 `~/.flwr/source` 中存储一个随机源 ID。源 ID " +"对于识别问题是否反复出现或问题是否由多个集群同时运行触发(这在模拟中经常发生)非常重要。例如,如果设备同时运行多个工作负载并导致问题,那么为了重现问题,必须同时启动多个工作负载。" #: ../../source/ref-telemetry.md:44 msgid "" -"You may delete the source ID at any time. If you wish for all events logged " -"under a specific source ID to be deleted, you can send a deletion request " -"mentioning the source ID to `telemetry@flower.ai`. All events related to " -"that source ID will then be permanently deleted." +"You may delete the source ID at any time. If you wish for all events " +"logged under a specific source ID to be deleted, you can send a deletion " +"request mentioning the source ID to `telemetry@flower.ai`. All events " +"related to that source ID will then be permanently deleted." msgstr "" -"您可以随时删除源 ID。如果您希望删除特定源 ID 下记录的所有事件,可以向 " -"`telemetry@flower.ai` 发送删除请求,并提及该源 ID。届时,与该源 ID 相关的所有" -"事件都将被永久删除。" +"您可以随时删除源 ID。如果您希望删除特定源 ID 下记录的所有事件,可以向 `telemetry@flower.ai` 发送删除请求,并提及该源" +" ID。届时,与该源 ID 相关的所有事件都将被永久删除。" #: ../../source/ref-telemetry.md:46 msgid "" -"We will not collect any personally identifiable information. If you think " -"any of the metrics collected could be misused in any way, please [get in " -"touch with us](#how-to-contact-us). We will update this page to reflect any " -"changes to the metrics collected and publish changes in the changelog." +"We will not collect any personally identifiable information. If you think" +" any of the metrics collected could be misused in any way, please [get in" +" touch with us](#how-to-contact-us). We will update this page to reflect " +"any changes to the metrics collected and publish changes in the " +"changelog." 
msgstr "" -"我们不会收集任何个人身份信息。如果您认为所收集的任何指标可能以任何方式被滥" -"用,请[与我们联系](#how-to-contact-us)。我们将更新本页面,以反映对所收集指" -"标的任何更改,并在更新日志中公布更改内容。" +"我们不会收集任何个人身份信息。如果您认为所收集的任何指标可能以任何方式被滥用,请[与我们联系](#how-to-contact-" +"us)。我们将更新本页面,以反映对所收集指标的任何更改,并在更新日志中公布更改内容。" #: ../../source/ref-telemetry.md:48 msgid "" "If you think other metrics would be helpful for us to better guide our " "decisions, please let us know! We will carefully review them; if we are " "confident that they do not compromise user privacy, we may add them." -msgstr "" -"如果您认为其他指标有助于我们更好地指导决策,请告诉我们!我们将仔细审查这些指" -"标;如果我们确信它们不会损害用户隐私,我们可能会添加这些指标。" +msgstr "如果您认为其他指标有助于我们更好地指导决策,请告诉我们!我们将仔细审查这些指标;如果我们确信它们不会损害用户隐私,我们可能会添加这些指标。" #: ../../source/ref-telemetry.md:50 msgid "How to inspect what is being reported" @@ -19778,23 +21209,21 @@ msgstr "如何检查报告中的内容" #: ../../source/ref-telemetry.md:52 msgid "" "We wanted to make it very easy for you to inspect what anonymous usage " -"metrics are reported. You can view all the reported telemetry information by " -"setting the environment variable `FLWR_TELEMETRY_LOGGING=1`. Logging is " -"disabled by default. You may use logging independently from " +"metrics are reported. You can view all the reported telemetry information" +" by setting the environment variable `FLWR_TELEMETRY_LOGGING=1`. Logging " +"is disabled by default. You may use logging independently from " "`FLWR_TELEMETRY_ENABLED` so that you can inspect the telemetry feature " "without sending any metrics." msgstr "" -"我们希望能让您轻松查看所报告的匿名使用指标。通过设置环境变量 " -"`FLWR_TELEMETRY_LOGGING=1` 可以查看所有报告的遥测信息。日志记录默认为禁用。您" -"可以不使用 `FLWR_TELEMETRY_ENABLED` 而单独使用日志记录,这样就可以在不发送任" -"何指标的情况下检查遥测功能。" +"我们希望能让您轻松查看所报告的匿名使用指标。通过设置环境变量 `FLWR_TELEMETRY_LOGGING=1` " +"可以查看所有报告的遥测信息。日志记录默认为禁用。您可以不使用 `FLWR_TELEMETRY_ENABLED` " +"而单独使用日志记录,这样就可以在不发送任何指标的情况下检查遥测功能。" #: ../../source/ref-telemetry.md:58 msgid "" -"The inspect Flower telemetry without sending any anonymous usage metrics, " -"use both environment variables:" -msgstr "" -"在不发送任何匿名使用指标的情况下检查 Flower 遥测,可使用这两个环境变量:" +"The inspect Flower telemetry without sending any anonymous usage metrics," +" use both environment variables:" +msgstr "在不发送任何匿名使用指标的情况下检查 Flower 遥测,可使用这两个环境变量:" #: ../../source/ref-telemetry.md:64 msgid "How to contact us" @@ -19807,16 +21236,14 @@ msgid "" "[Slack](https://flower.ai/join-slack/) (channel `#telemetry`) or email " "(`telemetry@flower.ai`)." msgstr "" -"我们希望听到您的意见。如果您对如何改进我们处理匿名使用指标的方式有任何反馈或" -"想法,请通过 [Slack](https://flower.ai/join-slack/) (频道 `#telemetry`)或电" -"子邮件 (`telemetry@flower.ai`)与我们联系。" +"我们希望听到您的意见。如果您对如何改进我们处理匿名使用指标的方式有任何反馈或想法,请通过 [Slack](https://flower.ai" +"/join-slack/) (频道 `#telemetry`)或电子邮件 (`telemetry@flower.ai`)与我们联系。" #: ../../source/tutorial-quickstart-android.rst:-1 msgid "" -"Read this Federated Learning quickstart tutorial for creating an Android app " -"using Flower." -msgstr "" -"阅读本联邦学习快速入门教程,了解如何使用 Flower 创建 Android 应用程序。" +"Read this Federated Learning quickstart tutorial for creating an Android " +"app using Flower." +msgstr "阅读本联邦学习快速入门教程,了解如何使用 Flower 创建 Android 应用程序。" #: ../../source/tutorial-quickstart-android.rst:5 msgid "Quickstart Android" @@ -19824,24 +21251,24 @@ msgstr "快速入门 Android" #: ../../source/tutorial-quickstart-android.rst:10 msgid "" -"Let's build a federated learning system using TFLite and Flower on Android!" +"Let's build a federated learning system using TFLite and Flower on " +"Android!" msgstr "让我们在 Android 上使用 TFLite 和 Flower 构建一个联邦学习系统!" 
#: ../../source/tutorial-quickstart-android.rst:12 msgid "" -"Please refer to the `full code example `_ to learn more." +"Please refer to the `full code example " +"`_ to learn " +"more." msgstr "" -"请参阅`完整代码示例 `_了解更多信息。" +"请参阅`完整代码示例 " +"`_了解更多信息。" #: ../../source/tutorial-quickstart-fastai.rst:-1 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower with " -"FastAI to train a vision model on CIFAR-10." -msgstr "" -"查看此联邦学习快速入门教程,了解如何使用 Flower 和 FastAI 在 CIFAR-10 上训练" -"视觉模型。" +"Check out this Federated Learning quickstart tutorial for using Flower " +"with FastAI to train a vision model on CIFAR-10." +msgstr "查看此联邦学习快速入门教程,了解如何使用 Flower 和 FastAI 在 CIFAR-10 上训练视觉模型。" #: ../../source/tutorial-quickstart-fastai.rst:5 msgid "Quickstart fastai" @@ -19853,19 +21280,18 @@ msgstr "让我们用 fastai 和 Flower 建立一个联邦学习系统!" #: ../../source/tutorial-quickstart-fastai.rst:12 msgid "" -"Please refer to the `full code example `_ to learn more." +"Please refer to the `full code example " +"`_ " +"to learn more." msgstr "" -"请参阅 `完整代码示例 `_了解更多信息。" +"请参阅 `完整代码示例 `_了解更多信息。" #: ../../source/tutorial-quickstart-huggingface.rst:-1 msgid "" -"Check out this Federating Learning quickstart tutorial for using Flower with " -"HuggingFace Transformers in order to fine-tune an LLM." -msgstr "" -"查看此联邦学习 快速入门教程,了解如何使用 Flower 和 HuggingFace Transformers " -"来微调 LLM。" +"Check out this Federating Learning quickstart tutorial for using Flower " +"with HuggingFace Transformers in order to fine-tune an LLM." +msgstr "查看此联邦学习 快速入门教程,了解如何使用 Flower 和 HuggingFace Transformers 来微调 LLM。" #: ../../source/tutorial-quickstart-huggingface.rst:5 msgid "Quickstart 🤗 Transformers" @@ -19873,21 +21299,20 @@ msgstr "🤗 Transformers快速入门" #: ../../source/tutorial-quickstart-huggingface.rst:10 msgid "" -"Let's build a federated learning system using Hugging Face Transformers and " -"Flower!" +"Let's build a federated learning system using Hugging Face Transformers " +"and Flower!" msgstr "让我们用Hugging Face Transformers和Flower来构建一个联邦学习系统!" #: ../../source/tutorial-quickstart-huggingface.rst:12 msgid "" -"We will leverage Hugging Face to federate the training of language models " -"over multiple clients using Flower. More specifically, we will fine-tune a " -"pre-trained Transformer model (distilBERT) for sequence classification over " -"a dataset of IMDB ratings. The end goal is to detect if a movie rating is " -"positive or negative." +"We will leverage Hugging Face to federate the training of language models" +" over multiple clients using Flower. More specifically, we will fine-tune" +" a pre-trained Transformer model (distilBERT) for sequence classification" +" over a dataset of IMDB ratings. The end goal is to detect if a movie " +"rating is positive or negative." msgstr "" -"我们将利用Hugging Face技术,使用 Flower 在多个客户端上联邦训练语言模型。更具" -"体地说,我们将对预先训练好的 Transformer 模型(distilBERT)进行微调,以便在 " -"IMDB 评分数据集上进行序列分类。最终目标是检测电影评分是正面还是负面。" +"我们将利用Hugging Face技术,使用 Flower 在多个客户端上联邦训练语言模型。更具体地说,我们将对预先训练好的 " +"Transformer 模型(distilBERT)进行微调,以便在 IMDB 评分数据集上进行序列分类。最终目标是检测电影评分是正面还是负面。" #: ../../source/tutorial-quickstart-huggingface.rst:18 msgid "Dependencies" @@ -19896,12 +21321,12 @@ msgstr "依赖关系" #: ../../source/tutorial-quickstart-huggingface.rst:20 msgid "" "To follow along this tutorial you will need to install the following " -"packages: :code:`datasets`, :code:`evaluate`, :code:`flwr`, :code:`torch`, " -"and :code:`transformers`. 
This can be done using :code:`pip`:" +"packages: :code:`datasets`, :code:`evaluate`, :code:`flwr`, " +":code:`torch`, and :code:`transformers`. This can be done using " +":code:`pip`:" msgstr "" -"要学习本教程,您需要安装以下软件包: :code:`datasets`、 :code:`evaluate`、 :" -"code:`flwr`、 :code:`torch`和 :code:`transformers`。这可以通过 :code:`pip` 来" -"完成:" +"要学习本教程,您需要安装以下软件包: :code:`datasets`、 :code:`evaluate`、 :code:`flwr`、 " +":code:`torch`和 :code:`transformers`。这可以通过 :code:`pip` 来完成:" #: ../../source/tutorial-quickstart-huggingface.rst:30 msgid "Standard Hugging Face workflow" @@ -19917,9 +21342,8 @@ msgid "" "library. We then need to tokenize the data and create :code:`PyTorch` " "dataloaders, this is all done in the :code:`load_data` function:" msgstr "" -"为了获取 IMDB 数据集,我们将使用 Hugging Face 的 :code:`datasets` 库。然后," -"我们需要对数据进行标记化,并创建 :code:`PyTorch` 数据加载器,这些都将在 :" -"code:`load_data` 函数中完成:" +"为了获取 IMDB 数据集,我们将使用 Hugging Face 的 :code:`datasets` 库。然后,我们需要对数据进行标记化,并创建" +" :code:`PyTorch` 数据加载器,这些都将在 :code:`load_data` 函数中完成:" #: ../../source/tutorial-quickstart-huggingface.rst:81 msgid "Training and testing the model" @@ -19927,12 +21351,12 @@ msgstr "训练和测试模型" #: ../../source/tutorial-quickstart-huggingface.rst:83 msgid "" -"Once we have a way of creating our trainloader and testloader, we can take " -"care of the training and testing. This is very similar to any :code:" -"`PyTorch` training or testing loop:" +"Once we have a way of creating our trainloader and testloader, we can " +"take care of the training and testing. This is very similar to any " +":code:`PyTorch` training or testing loop:" msgstr "" -"有了创建 trainloader 和 testloader 的方法后,我们就可以进行训练和测试了。这与" -"任何 :code:`PyTorch` 训练或测试循环都非常相似:" +"有了创建 trainloader 和 testloader 的方法后,我们就可以进行训练和测试了。这与任何 :code:`PyTorch` " +"训练或测试循环都非常相似:" #: ../../source/tutorial-quickstart-huggingface.rst:121 msgid "Creating the model itself" @@ -19940,11 +21364,11 @@ msgstr "创建模型本身" #: ../../source/tutorial-quickstart-huggingface.rst:123 msgid "" -"To create the model itself, we will just load the pre-trained distillBERT " -"model using Hugging Face’s :code:`AutoModelForSequenceClassification` :" +"To create the model itself, we will just load the pre-trained distillBERT" +" model using Hugging Face’s :code:`AutoModelForSequenceClassification` :" msgstr "" -"要创建模型本身,我们只需使用 Hugging Face 的 :code:" -"`AutoModelForSequenceClassification` 加载预训练的 distillBERT 模型:" +"要创建模型本身,我们只需使用 Hugging Face 的 :code:`AutoModelForSequenceClassification` " +"加载预训练的 distillBERT 模型:" #: ../../source/tutorial-quickstart-huggingface.rst:136 msgid "Federating the example" @@ -19957,24 +21381,23 @@ msgstr "创建 IMDBClient" #: ../../source/tutorial-quickstart-huggingface.rst:141 msgid "" "To federate our example to multiple clients, we first need to write our " -"Flower client class (inheriting from :code:`flwr.client.NumPyClient`). This " -"is very easy, as our model is a standard :code:`PyTorch` model:" +"Flower client class (inheriting from :code:`flwr.client.NumPyClient`). " +"This is very easy, as our model is a standard :code:`PyTorch` model:" msgstr "" -"要将我们的示例联邦到多个客户端,我们首先需要编写 Flower 客户端类(继承自 :" -"code:`flwr.client.NumPyClient`)。这很容易,因为我们的模型是一个标准的 :code:" -"`PyTorch` 模型:" +"要将我们的示例联邦到多个客户端,我们首先需要编写 Flower 客户端类(继承自 " +":code:`flwr.client.NumPyClient`)。这很容易,因为我们的模型是一个标准的 :code:`PyTorch` 模型:" #: ../../source/tutorial-quickstart-huggingface.rst:169 msgid "" "The :code:`get_parameters` function lets the server get the client's " -"parameters. 
Inversely, the :code:`set_parameters` function allows the server " -"to send its parameters to the client. Finally, the :code:`fit` function " -"trains the model locally for the client, and the :code:`evaluate` function " -"tests the model locally and returns the relevant metrics." +"parameters. Inversely, the :code:`set_parameters` function allows the " +"server to send its parameters to the client. Finally, the :code:`fit` " +"function trains the model locally for the client, and the " +":code:`evaluate` function tests the model locally and returns the " +"relevant metrics." msgstr "" -":code:`get_parameters` 函数允许服务器获取客户端的参数。相反,:code:" -"`set_parameters`函数允许服务器将其参数发送给客户端。最后,:code:`fit`函数在本" -"地为客户端训练模型,:code:`evaluate`函数在本地测试模型并返回相关指标。" +":code:`get_parameters` " +"函数允许服务器获取客户端的参数。相反,:code:`set_parameters`函数允许服务器将其参数发送给客户端。最后,:code:`fit`函数在本地为客户端训练模型,:code:`evaluate`函数在本地测试模型并返回相关指标。" #: ../../source/tutorial-quickstart-huggingface.rst:175 msgid "Starting the server" @@ -19982,26 +21405,25 @@ msgstr "启动服务器" #: ../../source/tutorial-quickstart-huggingface.rst:177 msgid "" -"Now that we have a way to instantiate clients, we need to create our server " -"in order to aggregate the results. Using Flower, this can be done very " -"easily by first choosing a strategy (here, we are using :code:`FedAvg`, " -"which will define the global weights as the average of all the clients' " -"weights at each round) and then using the :code:`flwr.server.start_server` " -"function:" +"Now that we have a way to instantiate clients, we need to create our " +"server in order to aggregate the results. Using Flower, this can be done " +"very easily by first choosing a strategy (here, we are using " +":code:`FedAvg`, which will define the global weights as the average of " +"all the clients' weights at each round) and then using the " +":code:`flwr.server.start_server` function:" msgstr "" -"现在我们有了实例化客户端的方法,我们需要创建服务器,以便汇总结果。使用 " -"Flower,首先选择一个策略(这里我们使用 :code:`FedAvg`,它将把全局模型参数定义" -"为每轮所有客户端模型参数的平均值),然后使用 :code:`flwr.server.start_server`" -"函数,就可以非常轻松地完成这项工作:" +"现在我们有了实例化客户端的方法,我们需要创建服务器,以便汇总结果。使用 Flower,首先选择一个策略(这里我们使用 " +":code:`FedAvg`,它将把全局模型参数定义为每轮所有客户端模型参数的平均值),然后使用 " +":code:`flwr.server.start_server`函数,就可以非常轻松地完成这项工作:" #: ../../source/tutorial-quickstart-huggingface.rst:205 msgid "" -"The :code:`weighted_average` function is there to provide a way to aggregate " -"the metrics distributed amongst the clients (basically this allows us to " -"display a nice average accuracy and loss for every round)." +"The :code:`weighted_average` function is there to provide a way to " +"aggregate the metrics distributed amongst the clients (basically this " +"allows us to display a nice average accuracy and loss for every round)." msgstr "" -"使用 :code:`weighted_average` 函数是为了提供一种方法来汇总分布在客户端的指标" -"(基本上,这可以让我们显示每一轮的平均精度和损失值)。" +"使用 :code:`weighted_average` " +"函数是为了提供一种方法来汇总分布在客户端的指标(基本上,这可以让我们显示每一轮的平均精度和损失值)。" #: ../../source/tutorial-quickstart-huggingface.rst:209 msgid "Putting everything together" @@ -20020,38 +21442,33 @@ msgstr "他们就能连接到服务器,开始联邦训练。" #: ../../source/tutorial-quickstart-huggingface.rst:223 #, fuzzy msgid "" -"If you want to check out everything put together, you should check out the " -"`full code example `_ ." +"If you want to check out everything put together, you should check out " +"the `full code example `_ ." msgstr "" -"如果您想查看所有内容,请查看完整的代码示例: [https://github.com/adap/flower/" -"tree/main/examples/quickstart-huggingface](https://github.com/adap/flower/" -"tree/main/examples/quickstart-huggingface)." 
+"如果您想查看所有内容,请查看完整的代码示例: [https://github.com/adap/flower/tree/main/examples" +"/quickstart-" +"huggingface](https://github.com/adap/flower/tree/main/examples" +"/quickstart-huggingface)." #: ../../source/tutorial-quickstart-huggingface.rst:226 msgid "" -"Of course, this is a very basic example, and a lot can be added or modified, " -"it was just to showcase how simply we could federate a Hugging Face workflow " -"using Flower." -msgstr "" -"当然,这只是一个非常基本的示例,还可以添加或修改很多内容,只是为了展示我们可" -"以如何简单地使用 Flower 联合Hugging Face的工作流程。" +"Of course, this is a very basic example, and a lot can be added or " +"modified, it was just to showcase how simply we could federate a Hugging " +"Face workflow using Flower." +msgstr "当然,这只是一个非常基本的示例,还可以添加或修改很多内容,只是为了展示我们可以如何简单地使用 Flower 联合Hugging Face的工作流程。" #: ../../source/tutorial-quickstart-huggingface.rst:229 msgid "" -"Note that in this example we used :code:`PyTorch`, but we could have very " -"well used :code:`TensorFlow`." -msgstr "" -"请注意,在本例中我们使用了 :code:`PyTorch`,但也完全可以使用 :code:" -"`TensorFlow`。" +"Note that in this example we used :code:`PyTorch`, but we could have very" +" well used :code:`TensorFlow`." +msgstr "请注意,在本例中我们使用了 :code:`PyTorch`,但也完全可以使用 :code:`TensorFlow`。" #: ../../source/tutorial-quickstart-ios.rst:-1 msgid "" "Read this Federated Learning quickstart tutorial for creating an iOS app " "using Flower to train a neural network on MNIST." -msgstr "" -"阅读本联邦学习快速入门教程,了解如何使用 Flower 创建 iOS 应用程序,并在 " -"MNIST 上训练神经网络。" +msgstr "阅读本联邦学习快速入门教程,了解如何使用 Flower 创建 iOS 应用程序,并在 MNIST 上训练神经网络。" #: ../../source/tutorial-quickstart-ios.rst:5 msgid "Quickstart iOS" @@ -20059,59 +21476,49 @@ msgstr "快速入门 iOS" #: ../../source/tutorial-quickstart-ios.rst:10 msgid "" -"In this tutorial we will learn how to train a Neural Network on MNIST using " -"Flower and CoreML on iOS devices." -msgstr "" -"在本教程中,我们将学习如何在 iOS 设备上使用 Flower 和 CoreML 在 MNIST 上训练" -"神经网络。" +"In this tutorial we will learn how to train a Neural Network on MNIST " +"using Flower and CoreML on iOS devices." +msgstr "在本教程中,我们将学习如何在 iOS 设备上使用 Flower 和 CoreML 在 MNIST 上训练神经网络。" #: ../../source/tutorial-quickstart-ios.rst:12 #, fuzzy msgid "" "First of all, for running the Flower Python server, it is recommended to " -"create a virtual environment and run everything within a :doc:`virtualenv " -"`. For the Flower client " +"create a virtual environment and run everything within a :doc:`virtualenv" +" `. For the Flower client " "implementation in iOS, it is recommended to use Xcode as our IDE." msgstr "" "首先,为了运行 Flower Python 服务器,建议创建一个虚拟环境,并在 `virtualenv " -"`_ 中运行一切。对于在 iOS " -"中实现 Flower 客户端,建议使用 Xcode 作为我们的集成开发环境。" +"`_ 中运行一切。对于在 iOS 中实现 " +"Flower 客户端,建议使用 Xcode 作为我们的集成开发环境。" #: ../../source/tutorial-quickstart-ios.rst:15 msgid "" -"Our example consists of one Python *server* and two iPhone *clients* that " -"all have the same model." -msgstr "" -"我们的示例包括一个 Python *服务器*和两个 iPhone *客户端*,它们都具有相同的模" -"型。" +"Our example consists of one Python *server* and two iPhone *clients* that" +" all have the same model." +msgstr "我们的示例包括一个 Python *服务器*和两个 iPhone *客户端*,它们都具有相同的模型。" #: ../../source/tutorial-quickstart-ios.rst:17 msgid "" -"*Clients* are responsible for generating individual weight updates for the " -"model based on their local datasets. These updates are then sent to the " -"*server* which will aggregate them to produce a better model. Finally, the " -"*server* sends this improved version of the model back to each *client*. A " -"complete cycle of weight updates is called a *round*." 
-msgstr "" -"*客户端*负责根据其本地数据集为模型生成独立的模型参数。然后,这些参数更新会被" -"发送到*服务器*,由*服务器*汇总后生成一个更好的模型。最后,*服务器*将改进后的" -"模型发送回每个*客户端*。一个完整的参数更新周期称为一*轮*。" +"*Clients* are responsible for generating individual weight updates for " +"the model based on their local datasets. These updates are then sent to " +"the *server* which will aggregate them to produce a better model. " +"Finally, the *server* sends this improved version of the model back to " +"each *client*. A complete cycle of weight updates is called a *round*." +msgstr "*客户端*负责根据其本地数据集为模型生成独立的模型参数。然后,这些参数更新会被发送到*服务器*,由*服务器*汇总后生成一个更好的模型。最后,*服务器*将改进后的模型发送回每个*客户端*。一个完整的参数更新周期称为一*轮*。" #: ../../source/tutorial-quickstart-ios.rst:21 msgid "" "Now that we have a rough idea of what is going on, let's get started to " -"setup our Flower server environment. We first need to install Flower. You " -"can do this by using pip:" -msgstr "" -"现在我们已经有了一个大致的概念,让我们开始设置 Flower 服务器环境吧。首先,我" -"们需要安装 Flower。你可以使用 pip 来安装:" +"setup our Flower server environment. We first need to install Flower. You" +" can do this by using pip:" +msgstr "现在我们已经有了一个大致的概念,让我们开始设置 Flower 服务器环境吧。首先,我们需要安装 Flower。你可以使用 pip 来安装:" #: ../../source/tutorial-quickstart-ios.rst:27 msgid "Or Poetry:" msgstr "或者Poetry:" #: ../../source/tutorial-quickstart-ios.rst:34 -#: ../../source/tutorial-quickstart-mxnet.rst:36 #: ../../source/tutorial-quickstart-pytorch.rst:37 #: ../../source/tutorial-quickstart-scikitlearn.rst:40 #: ../../source/tutorial-quickstart-tensorflow.rst:29 @@ -20122,29 +21529,29 @@ msgstr "Flower 客户端" #: ../../source/tutorial-quickstart-ios.rst:36 msgid "" "Now that we have all our dependencies installed, let's run a simple " -"distributed training using CoreML as our local training pipeline and MNIST " -"as our dataset. For simplicity reasons we will use the complete Flower " -"client with CoreML, that has been implemented and stored inside the Swift " -"SDK. The client implementation can be seen below:" +"distributed training using CoreML as our local training pipeline and " +"MNIST as our dataset. For simplicity reasons we will use the complete " +"Flower client with CoreML, that has been implemented and stored inside " +"the Swift SDK. The client implementation can be seen below:" msgstr "" -"现在我们已经安装了所有依赖项,让我们使用 CoreML 作为本地训练框架和 MNIST 作为" -"数据集,运行一个简单的分布式训练。为了简单起见,我们将使用 CoreML 的完整 " -"Flower 客户端,该客户端已在 Swift SDK 中实现并存储。客户端实现如下:" +"现在我们已经安装了所有依赖项,让我们使用 CoreML 作为本地训练框架和 MNIST " +"作为数据集,运行一个简单的分布式训练。为了简单起见,我们将使用 CoreML 的完整 Flower 客户端,该客户端已在 Swift SDK " +"中实现并存储。客户端实现如下:" #: ../../source/tutorial-quickstart-ios.rst:72 msgid "" -"Let's create a new application project in Xcode and add :code:`flwr` as a " -"dependency in your project. For our application, we will store the logic of " -"our app in :code:`FLiOSModel.swift` and the UI elements in :code:" -"`ContentView.swift`. We will focus more on :code:`FLiOSModel.swift` in this " -"quickstart. Please refer to the `full code example `_ to learn more about the app." +"Let's create a new application project in Xcode and add :code:`flwr` as a" +" dependency in your project. For our application, we will store the logic" +" of our app in :code:`FLiOSModel.swift` and the UI elements in " +":code:`ContentView.swift`. We will focus more on :code:`FLiOSModel.swift`" +" in this quickstart. Please refer to the `full code example " +"`_ to learn more " +"about the app." 
msgstr "" -"让我们在 Xcode 中创建一个新的应用程序项目,并在项目中添加 :code:`flwr` 作为依" -"赖关系。对于我们的应用程序,我们将在 :code:`FLiOSModel.swift` 中存储应用程序" -"的逻辑,在 :code:`ContentView.swift` 中存储 UI 元素。在本快速入门中,我们将更" -"多地关注 :code:`FLiOSModel.swift`。请参阅 `完整代码示例 `_ 以了解更多有关应用程序的信息。" +"让我们在 Xcode 中创建一个新的应用程序项目,并在项目中添加 :code:`flwr` 作为依赖关系。对于我们的应用程序,我们将在 " +":code:`FLiOSModel.swift` 中存储应用程序的逻辑,在 :code:`ContentView.swift` 中存储 UI " +"元素。在本快速入门中,我们将更多地关注 :code:`FLiOSModel.swift`。请参阅 `完整代码示例 " +"`_ 以了解更多有关应用程序的信息。" #: ../../source/tutorial-quickstart-ios.rst:75 msgid "Import Flower and CoreML related packages in :code:`FLiOSModel.swift`:" @@ -20153,30 +21560,30 @@ msgstr "在 :code:`FLiOSModel.swift` 中导入 Flower 和 CoreML 相关软件包 #: ../../source/tutorial-quickstart-ios.rst:83 msgid "" "Then add the mlmodel to the project simply by drag-and-drop, the mlmodel " -"will be bundled inside the application during deployment to your iOS device. " -"We need to pass the url to access mlmodel and run CoreML machine learning " -"processes, it can be retrieved by calling the function :code:`Bundle.main." -"url`. For the MNIST dataset, we need to preprocess it into :code:" -"`MLBatchProvider` object. The preprocessing is done inside :code:`DataLoader." -"swift`." -msgstr "" -"然后通过拖放将 mlmodel 添加到项目中,在部署到 iOS 设备时,mlmodel 将被捆绑到" -"应用程序中。我们需要传递 url 以访问 mlmodel 并运行 CoreML 机器学习进程,可通" -"过调用函数 :code:`Bundle.main.url` 获取。对于 MNIST 数据集,我们需要将其预处" -"理为 :code:`MLBatchProvider` 对象。预处理在 :code:`DataLoader.swift` 中完成。" +"will be bundled inside the application during deployment to your iOS " +"device. We need to pass the url to access mlmodel and run CoreML machine " +"learning processes, it can be retrieved by calling the function " +":code:`Bundle.main.url`. For the MNIST dataset, we need to preprocess it " +"into :code:`MLBatchProvider` object. The preprocessing is done inside " +":code:`DataLoader.swift`." +msgstr "" +"然后通过拖放将 mlmodel 添加到项目中,在部署到 iOS 设备时,mlmodel 将被捆绑到应用程序中。我们需要传递 url 以访问 " +"mlmodel 并运行 CoreML 机器学习进程,可通过调用函数 :code:`Bundle.main.url` 获取。对于 MNIST " +"数据集,我们需要将其预处理为 :code:`MLBatchProvider` 对象。预处理在 :code:`DataLoader.swift` " +"中完成。" #: ../../source/tutorial-quickstart-ios.rst:99 #, fuzzy msgid "" -"Since CoreML does not allow the model parameters to be seen before training, " -"and accessing the model parameters during or after the training can only be " -"done by specifying the layer name, we need to know this information " -"beforehand, through looking at the model specification, which are written as " -"proto files. The implementation can be seen in :code:`MLModelInspect`." +"Since CoreML does not allow the model parameters to be seen before " +"training, and accessing the model parameters during or after the training" +" can only be done by specifying the layer name, we need to know this " +"information beforehand, through looking at the model specification, which" +" are written as proto files. The implementation can be seen in " +":code:`MLModelInspect`." msgstr "" -"由于 CoreML 不允许在训练前查看模型参数,而在训练过程中或训练后访问模型参数只" -"能通过指定层名来完成,因此我们需要事先通过查看模型规范(写成 proto 文件)来了" -"解这些信息。具体实现可参见 :code:`MLModelInspect`。" +"由于 CoreML 不允许在训练前查看模型参数,而在训练过程中或训练后访问模型参数只能通过指定层名来完成,因此我们需要事先通过查看模型规范(写成 " +"proto 文件)来了解这些信息。具体实现可参见 :code:`MLModelInspect`。" #: ../../source/tutorial-quickstart-ios.rst:102 #, fuzzy @@ -20185,4709 +21592,5337 @@ msgid "" "client." 
msgstr "获得所有必要信息后,让我们创建 Flower 客户端。" -#: ../../source/tutorial-quickstart-ios.rst:117 +#: ../../source/tutorial-quickstart-ios.rst:117 +msgid "" +"Then start the Flower gRPC client and start communicating to the server " +"by passing our Flower client to the function :code:`startFlwrGRPC`." +msgstr "然后启动 Flower gRPC 客户端,并通过将 Flower 客户端传递给函数 :code:`startFlwrGRPC` 来开始与服务器通信。" + +#: ../../source/tutorial-quickstart-ios.rst:124 +msgid "" +"That's it for the client. We only have to implement :code:`Client` or " +"call the provided :code:`MLFlwrClient` and call :code:`startFlwrGRPC()`. " +"The attribute :code:`hostname` and :code:`port` tells the client which " +"server to connect to. This can be done by entering the hostname and port " +"in the application before clicking the start button to start the " +"federated learning process." +msgstr "" +"这就是客户端。我们只需实现 :code:`Client` 或调用提供的 :code:`MLFlwrClient` 并调用 " +":code:`startFlwrGRPC()`。属性 :code:`hostname` 和 :code:`port` " +"会告诉客户端要连接到哪个服务器。这可以通过在应用程序中输入主机名和端口来实现,然后再点击开始按钮启动联邦学习进程。" + +#: ../../source/tutorial-quickstart-ios.rst:129 +#: ../../source/tutorial-quickstart-pytorch.rst:203 +#: ../../source/tutorial-quickstart-scikitlearn.rst:167 +#: ../../source/tutorial-quickstart-tensorflow.rst:98 +#: ../../source/tutorial-quickstart-xgboost.rst:309 +msgid "Flower Server" +msgstr "Flower 服务器" + +#: ../../source/tutorial-quickstart-ios.rst:131 +#: ../../source/tutorial-quickstart-pytorch.rst:205 +#: ../../source/tutorial-quickstart-tensorflow.rst:100 +msgid "" +"For simple workloads we can start a Flower server and leave all the " +"configuration possibilities at their default values. In a file named " +":code:`server.py`, import Flower and start the server:" +msgstr "" +"对于简单的工作负载,我们可以启动 Flower 服务器,并将所有配置选项保留为默认值。在名为 :code:`server.py` 的文件中,导入 " +"Flower 并启动服务器:" + +#: ../../source/tutorial-quickstart-ios.rst:142 +#: ../../source/tutorial-quickstart-pytorch.rst:216 +#: ../../source/tutorial-quickstart-scikitlearn.rst:230 +#: ../../source/tutorial-quickstart-tensorflow.rst:112 +msgid "Train the model, federated!" +msgstr "联邦训练模型!" + +#: ../../source/tutorial-quickstart-ios.rst:144 +#: ../../source/tutorial-quickstart-pytorch.rst:218 +#: ../../source/tutorial-quickstart-tensorflow.rst:114 +#: ../../source/tutorial-quickstart-xgboost.rst:525 +msgid "" +"With both client and server ready, we can now run everything and see " +"federated learning in action. FL systems usually have a server and " +"multiple clients. We therefore have to start the server first:" +msgstr "客户端和服务器都已准备就绪,我们现在可以运行一切,看看联邦学习的实际效果。FL 系统通常有一个服务器和多个客户端。因此,我们必须先启动服务器:" + +#: ../../source/tutorial-quickstart-ios.rst:152 +msgid "" +"Once the server is running we can start the clients in different " +"terminals. Build and run the client through your Xcode, one through Xcode" +" Simulator and the other by deploying it to your iPhone. To see more " +"about how to deploy your app to iPhone or Simulator visit `here " +"`_." +msgstr "" +"服务器运行后,我们就可以在不同的终端启动客户端。通过 Xcode 构建并运行客户端,一个通过 Xcode 模拟器,另一个通过部署到 " +"iPhone。要了解更多有关如何将应用程序部署到 iPhone 或模拟器的信息,请访问 `此处 " +"`_。" + +#: ../../source/tutorial-quickstart-ios.rst:156 +msgid "" +"Congratulations! You've successfully built and run your first federated " +"learning system in your ios device. The full `source code " +"`_ for this " +"example can be found in :code:`examples/ios`." +msgstr "" +"恭喜您! 
您已经成功地在 ios 设备中构建并运行了第一个联邦学习系统。本示例的`完整源代码 " +"`_ 可在 " +":code:`examples/ios` 中找到。" + +#: ../../source/tutorial-quickstart-jax.rst:-1 +msgid "" +"Check out this Federated Learning quickstart tutorial for using Flower " +"with Jax to train a linear regression model on a scikit-learn dataset." +msgstr "查看此联邦学习快速入门教程,了解如何使用 Flower 和 Jax 在 scikit-learn 数据集上训练线性回归模型。" + +#: ../../source/tutorial-quickstart-jax.rst:5 +msgid "Quickstart JAX" +msgstr "快速入门 JAX" + +#: ../../source/tutorial-quickstart-pandas.rst:-1 +msgid "" +"Check out this Federated Learning quickstart tutorial for using Flower " +"with Pandas to perform Federated Analytics." +msgstr "查看此联邦学习快速入门教程,了解如何使用 Flower 和 Pandas 执行联邦分析。" + +#: ../../source/tutorial-quickstart-pandas.rst:5 +msgid "Quickstart Pandas" +msgstr "快速入门Pandas" + +#: ../../source/tutorial-quickstart-pandas.rst:10 +msgid "Let's build a federated analytics system using Pandas and Flower!" +msgstr "让我们使用 Pandas 和 Flower 建立一个联邦分析系统!" + +#: ../../source/tutorial-quickstart-pandas.rst:12 +msgid "" +"Please refer to the `full code example " +"`_ " +"to learn more." +msgstr "" +"请参阅 `完整代码示例 `_\" 了解更多信息。" + +#: ../../source/tutorial-quickstart-pytorch.rst:-1 +msgid "" +"Check out this Federated Learning quickstart tutorial for using Flower " +"with PyTorch to train a CNN model on MNIST." +msgstr "查看此联邦学习快速入门教程,了解如何使用 Flower 和 PyTorch 在 MNIST 上训练 CNN 模型。" + +#: ../../source/tutorial-quickstart-pytorch.rst:13 +msgid "" +"In this tutorial we will learn how to train a Convolutional Neural " +"Network on CIFAR10 using Flower and PyTorch." +msgstr "在本教程中,我们将学习如何使用 Flower 和 PyTorch 在 CIFAR10 上训练卷积神经网络。" + +#: ../../source/tutorial-quickstart-pytorch.rst:15 +#: ../../source/tutorial-quickstart-xgboost.rst:39 +#, fuzzy +msgid "" +"First of all, it is recommended to create a virtual environment and run " +"everything within a :doc:`virtualenv `." +msgstr "" +"首先,建议创建一个虚拟环境,并在 `virtualenv `_ 中运行一切。" + +#: ../../source/tutorial-quickstart-pytorch.rst:17 +#: ../../source/tutorial-quickstart-scikitlearn.rst:14 +msgid "" +"Our example consists of one *server* and two *clients* all having the " +"same model." +msgstr "我们的例子包括一个*服务器*和两个*客户端*,它们都有相同的模型。" + +#: ../../source/tutorial-quickstart-pytorch.rst:19 +msgid "" +"*Clients* are responsible for generating individual weight-updates for " +"the model based on their local datasets. These updates are then sent to " +"the *server* which will aggregate them to produce a better model. " +"Finally, the *server* sends this improved version of the model back to " +"each *client*. A complete cycle of weight updates is called a *round*." +msgstr "*客户端*负责在其本地数据集上更新模型参数。然后,这些参数会被发送到*服务器*,由*服务器*聚合后生成一个更好的模型。最后,*服务器*将改进后的模型发送回每个*客户端*。一个完整的模型参数更新周期称为一*轮*。" + +#: ../../source/tutorial-quickstart-pytorch.rst:23 +msgid "" +"Now that we have a rough idea of what is going on, let's get started. We " +"first need to install Flower. You can do this by running :" +msgstr "现在,我们已经有了一个大致的概念了,那就让我们开始吧。首先,我们需要安装 Flower。可以通过运行 :" + +#: ../../source/tutorial-quickstart-pytorch.rst:29 +msgid "" +"Since we want to use PyTorch to solve a computer vision task, let's go " +"ahead and install PyTorch and the **torchvision** library:" +msgstr "既然我们想用 PyTorch 解决计算机视觉任务,那就继续安装 PyTorch 和 **torchvision** 库吧:" + +#: ../../source/tutorial-quickstart-pytorch.rst:39 +msgid "" +"Now that we have all our dependencies installed, let's run a simple " +"distributed training with two clients and one server. 
Our training " +"procedure and network architecture are based on PyTorch's `Deep Learning " +"with PyTorch " +"`_." +msgstr "" +"现在我们已经安装了所有的依赖项,让我们用两个客户端和一个服务器来运行一个简单的分布式训练。我们的训练过程和网络架构基于 PyTorch " +"的《Deep Learning with PyTorch " +"`_》。" + +#: ../../source/tutorial-quickstart-pytorch.rst:41 +msgid "" +"In a file called :code:`client.py`, import Flower and PyTorch related " +"packages:" +msgstr "在名为 :code:`client.py` 的文件中,导入 Flower 和 PyTorch 相关软件包:" + +#: ../../source/tutorial-quickstart-pytorch.rst:56 +msgid "In addition, we define the device allocation in PyTorch with:" +msgstr "此外,我们还在 PyTorch 中定义了设备分配:" + +#: ../../source/tutorial-quickstart-pytorch.rst:62 +msgid "" +"We use PyTorch to load CIFAR10, a popular colored image classification " +"dataset for machine learning. The PyTorch :code:`DataLoader()` downloads " +"the training and test data that are then normalized." +msgstr "" +"我们使用 PyTorch 来加载 CIFAR10,这是一个用于机器学习的流行彩色图像分类数据集。PyTorch " +":code:`DataLoader()`下载训练数据和测试数据,然后进行归一化处理。" + +#: ../../source/tutorial-quickstart-pytorch.rst:78 +msgid "" +"Define the loss and optimizer with PyTorch. The training of the dataset " +"is done by looping over the dataset, measure the corresponding loss and " +"optimize it." +msgstr "使用 PyTorch 定义损失和优化器。数据集的训练是通过循环数据集、测量相应的损失值并对其进行优化来完成的。" + +#: ../../source/tutorial-quickstart-pytorch.rst:94 +msgid "" +"Define then the validation of the machine learning network. We loop over" +" the test set and measure the loss and accuracy of the test set." +msgstr "然后定义机器学习网络的验证。我们在测试集上循环,计算测试集的损失值和准确率。" + +#: ../../source/tutorial-quickstart-pytorch.rst:113 +msgid "" +"After defining the training and testing of a PyTorch machine learning " +"model, we use the functions for the Flower clients." +msgstr "在定义了 PyTorch 机器学习模型的训练和测试之后,我们将这些功能用于 Flower 客户端。" + +#: ../../source/tutorial-quickstart-pytorch.rst:115 +msgid "" +"The Flower clients will use a simple CNN adapted from 'PyTorch: A 60 " +"Minute Blitz':" +msgstr "Flower 客户端将使用一个简单的从“PyTorch: 60 分钟突击\"改编的CNN:" + +#: ../../source/tutorial-quickstart-pytorch.rst:142 +msgid "" +"After loading the data set with :code:`load_data()` we define the Flower " +"interface." +msgstr "使用 :code:`load_data()` 加载数据集后,我们定义了 Flower 接口。" + +#: ../../source/tutorial-quickstart-pytorch.rst:144 +#: ../../source/tutorial-quickstart-tensorflow.rst:54 +msgid "" +"The Flower server interacts with clients through an interface called " +":code:`Client`. When the server selects a particular client for training," +" it sends training instructions over the network. The client receives " +"those instructions and calls one of the :code:`Client` methods to run " +"your code (i.e., to train the neural network we defined earlier)." +msgstr "" +"Flower 服务器通过一个名为 :code:`Client` " +"的接口与客户端交互。当服务器选择一个特定的客户端进行训练时,它会通过网络发送训练指令。客户端接收到这些指令后,会调用 :code:`Client`" +" 方法之一来运行您的代码(即训练我们之前定义的神经网络)。" + +#: ../../source/tutorial-quickstart-pytorch.rst:150 +msgid "" +"Flower provides a convenience class called :code:`NumPyClient` which " +"makes it easier to implement the :code:`Client` interface when your " +"workload uses PyTorch. 
Implementing :code:`NumPyClient` usually means " +"defining the following methods (:code:`set_parameters` is optional " +"though):" +msgstr "" +"Flower 提供了一个名为 :code:`NumPyClient` 的便捷类,当您的工作负载使用 PyTorch 时,它使 " +":code:`Client` 接口的实现变得更容易。实现 :code:`NumPyClient` " +"通常意味着定义以下方法(:code:`set_parameters` 是可选的):" + +#: ../../source/tutorial-quickstart-pytorch.rst:156 +#: ../../source/tutorial-quickstart-scikitlearn.rst:119 +msgid "return the model weight as a list of NumPy ndarrays" +msgstr "以 NumPy ndarrays 列表形式返回模型参数" + +#: ../../source/tutorial-quickstart-pytorch.rst:157 +#: ../../source/tutorial-quickstart-scikitlearn.rst:121 +msgid ":code:`set_parameters` (optional)" +msgstr ":code:`set_parameters` (可选)" + +#: ../../source/tutorial-quickstart-pytorch.rst:158 +#: ../../source/tutorial-quickstart-scikitlearn.rst:121 +msgid "" +"update the local model weights with the parameters received from the " +"server" +msgstr "用从服务器接收到的参数更新本地模型参数" + +#: ../../source/tutorial-quickstart-pytorch.rst:160 +#: ../../source/tutorial-quickstart-scikitlearn.rst:124 +msgid "set the local model weights" +msgstr "设置本地模型参数" + +#: ../../source/tutorial-quickstart-pytorch.rst:161 +#: ../../source/tutorial-quickstart-scikitlearn.rst:125 +msgid "train the local model" +msgstr "训练本地模型" + +#: ../../source/tutorial-quickstart-pytorch.rst:162 +#: ../../source/tutorial-quickstart-scikitlearn.rst:126 +msgid "receive the updated local model weights" +msgstr "接收更新的本地模型参数" + +#: ../../source/tutorial-quickstart-pytorch.rst:164 +#: ../../source/tutorial-quickstart-scikitlearn.rst:128 +msgid "test the local model" +msgstr "测试本地模型" + +#: ../../source/tutorial-quickstart-pytorch.rst:166 +msgid "which can be implemented in the following way:" +msgstr "可以通过以下方式实现:" + +#: ../../source/tutorial-quickstart-pytorch.rst:189 +#: ../../source/tutorial-quickstart-tensorflow.rst:82 msgid "" -"Then start the Flower gRPC client and start communicating to the server by " -"passing our Flower client to the function :code:`startFlwrGRPC`." -msgstr "" -"然后启动 Flower gRPC 客户端,并通过将 Flower 客户端传递给函数 :code:" -"`startFlwrGRPC` 来开始与服务器通信。" +"We can now create an instance of our class :code:`CifarClient` and add " +"one line to actually run this client:" +msgstr "现在我们可以创建一个 :code:`CifarClient` 类的实例,并添加一行来实际运行该客户端:" -#: ../../source/tutorial-quickstart-ios.rst:124 +#: ../../source/tutorial-quickstart-pytorch.rst:196 +#: ../../source/tutorial-quickstart-tensorflow.rst:90 +#, fuzzy msgid "" -"That's it for the client. We only have to implement :code:`Client` or call " -"the provided :code:`MLFlwrClient` and call :code:`startFlwrGRPC()`. The " -"attribute :code:`hostname` and :code:`port` tells the client which server to " -"connect to. This can be done by entering the hostname and port in the " -"application before clicking the start button to start the federated learning " -"process." +"That's it for the client. We only have to implement :code:`Client` or " +":code:`NumPyClient` and call :code:`fl.client.start_client()`. If you " +"implement a client of type :code:`NumPyClient` you'll need to first call " +"its :code:`to_client()` method. The string :code:`\"[::]:8080\"` tells " +"the client which server to connect to. In our case we can run the server " +"and the client on the same machine, therefore we use " +":code:`\"[::]:8080\"`. If we run a truly federated workload with the " +"server and clients running on different machines, all that needs to " +"change is the :code:`server_address` we point the client at." 
msgstr "" -"这就是客户端。我们只需实现 :code:`Client` 或调用提供的 :code:`MLFlwrClient` " -"并调用 :code:`startFlwrGRPC()`。属性 :code:`hostname` 和 :code:`port` 会告诉" -"客户端要连接到哪个服务器。这可以通过在应用程序中输入主机名和端口来实现,然后" -"再点击开始按钮启动联邦学习进程。" - -#: ../../source/tutorial-quickstart-ios.rst:129 -#: ../../source/tutorial-quickstart-mxnet.rst:226 -#: ../../source/tutorial-quickstart-pytorch.rst:203 -#: ../../source/tutorial-quickstart-scikitlearn.rst:157 -#: ../../source/tutorial-quickstart-tensorflow.rst:98 -#: ../../source/tutorial-quickstart-xgboost.rst:309 -msgid "Flower Server" -msgstr "Flower 服务器" +"这就是客户端。我们只需实现 :code:`Client` 或 :code:`NumPyClient` 并调用 " +":code:`fl.client.start_client()` 或 " +":code:`fl.client.start_numpy_client()`。字符串 " +":code:`\"[::]:8080\"`会告诉客户端要连接的服务器。在本例中,我们可以在同一台机器上运行服务器和客户端,因此使用 " +":code:`\"[::]:8080\"。如果我们运行的是真正的联邦工作负载,服务器和客户端运行在不同的机器上,那么需要改变的只是客户端指向的 " +":code:`server_address`。" -#: ../../source/tutorial-quickstart-ios.rst:131 -#: ../../source/tutorial-quickstart-mxnet.rst:228 -#: ../../source/tutorial-quickstart-pytorch.rst:205 -#: ../../source/tutorial-quickstart-tensorflow.rst:100 +#: ../../source/tutorial-quickstart-pytorch.rst:226 +#: ../../source/tutorial-quickstart-scikitlearn.rst:239 +#: ../../source/tutorial-quickstart-tensorflow.rst:122 +#: ../../source/tutorial-quickstart-xgboost.rst:533 msgid "" -"For simple workloads we can start a Flower server and leave all the " -"configuration possibilities at their default values. In a file named :code:" -"`server.py`, import Flower and start the server:" -msgstr "" -"对于简单的工作负载,我们可以启动 Flower 服务器,并将所有配置选项保留为默认" -"值。在名为 :code:`server.py` 的文件中,导入 Flower 并启动服务器:" - -#: ../../source/tutorial-quickstart-ios.rst:142 -#: ../../source/tutorial-quickstart-mxnet.rst:239 -#: ../../source/tutorial-quickstart-pytorch.rst:216 -#: ../../source/tutorial-quickstart-scikitlearn.rst:215 -#: ../../source/tutorial-quickstart-tensorflow.rst:112 -msgid "Train the model, federated!" -msgstr "联邦训练模型!" +"Once the server is running we can start the clients in different " +"terminals. Open a new terminal and start the first client:" +msgstr "服务器运行后,我们就可以在不同终端启动客户端了。打开一个新终端,启动第一个客户端:" -#: ../../source/tutorial-quickstart-ios.rst:144 -#: ../../source/tutorial-quickstart-pytorch.rst:218 -#: ../../source/tutorial-quickstart-tensorflow.rst:114 -#: ../../source/tutorial-quickstart-xgboost.rst:525 -msgid "" -"With both client and server ready, we can now run everything and see " -"federated learning in action. FL systems usually have a server and multiple " -"clients. We therefore have to start the server first:" -msgstr "" -"客户端和服务器都已准备就绪,我们现在可以运行一切,看看联邦学习的实际效果。FL " -"系统通常有一个服务器和多个客户端。因此,我们必须先启动服务器:" +#: ../../source/tutorial-quickstart-pytorch.rst:233 +#: ../../source/tutorial-quickstart-scikitlearn.rst:246 +#: ../../source/tutorial-quickstart-tensorflow.rst:129 +#: ../../source/tutorial-quickstart-xgboost.rst:540 +msgid "Open another terminal and start the second client:" +msgstr "打开另一台终端,启动第二个客户端:" -#: ../../source/tutorial-quickstart-ios.rst:152 +#: ../../source/tutorial-quickstart-pytorch.rst:239 +#: ../../source/tutorial-quickstart-scikitlearn.rst:252 +#: ../../source/tutorial-quickstart-xgboost.rst:546 msgid "" -"Once the server is running we can start the clients in different terminals. " -"Build and run the client through your Xcode, one through Xcode Simulator and " -"the other by deploying it to your iPhone. To see more about how to deploy " -"your app to iPhone or Simulator visit `here `_." 
-msgstr "" -"服务器运行后,我们就可以在不同的终端启动客户端。通过 Xcode 构建并运行客户端," -"一个通过 Xcode 模拟器,另一个通过部署到 iPhone。要了解更多有关如何将应用程序" -"部署到 iPhone 或模拟器的信息,请访问 `此处 `_。" +"Each client will have its own dataset. You should now see how the " +"training does in the very first terminal (the one that started the " +"server):" +msgstr "每个客户端都有自己的数据集。现在你应该看到第一个终端(启动服务器的终端)的训练效果了:" -#: ../../source/tutorial-quickstart-ios.rst:156 +#: ../../source/tutorial-quickstart-pytorch.rst:271 msgid "" "Congratulations! You've successfully built and run your first federated " -"learning system in your ios device. The full `source code `_ for this example can be found in :" -"code:`examples/ios`." +"learning system. The full `source code " +"`_ for this example can be found in :code:`examples" +"/quickstart-pytorch`." msgstr "" -"恭喜您! 您已经成功地在 ios 设备中构建并运行了第一个联邦学习系统。本示例的`完" -"整源代码 `_ 可在 :" -"code:`examples/ios` 中找到。" +"恭喜您!您已经成功构建并运行了第一个联邦学习系统。本示例的`完整源代码 " +"`_ 可以在 :code:`examples/quickstart-pytorch` 中找到。" -#: ../../source/tutorial-quickstart-jax.rst:-1 +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:-1 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower with " -"Jax to train a linear regression model on a scikit-learn dataset." -msgstr "" -"查看此联邦学习快速入门教程,了解如何使用 Flower 和 Jax 在 scikit-learn 数据集" -"上训练线性回归模型。" +"Check out this Federated Learning quickstart tutorial for using Flower " +"with PyTorch Lightning to train an Auto Encoder model on MNIST." +msgstr "查看此联邦学习快速入门教程,了解如何使用 Flower 和 PyTorch Lightning 在 MNIST 上训练自动编码器模型。" -#: ../../source/tutorial-quickstart-jax.rst:5 -msgid "Quickstart JAX" -msgstr "快速入门 JAX" +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:5 +msgid "Quickstart PyTorch Lightning" +msgstr "快速入门 PyTorch Lightning" -#: ../../source/tutorial-quickstart-mxnet.rst:-1 +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:10 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower with " -"MXNet to train a Sequential model on MNIST." -msgstr "" -"查看此联邦学习 快速入门教程,了解如何使用 Flower 和 MXNet 在 MNIST 上训练序列" -"模型。" - -#: ../../source/tutorial-quickstart-mxnet.rst:5 -msgid "Quickstart MXNet" -msgstr "快速入门 MXNet" +"Let's build a horizontal federated learning system using PyTorch " +"Lightning and Flower!" +msgstr "让我们使用 PyTorch Lightning 和 Flower 构建一个水平联邦学习系统!" -#: ../../source/tutorial-quickstart-mxnet.rst:7 +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:12 msgid "" -"MXNet is no longer maintained and has been moved into `Attic `_. As a result, we would encourage you to " -"use other ML frameworks alongside Flower, for example, PyTorch. This " -"tutorial might be removed in future versions of Flower." +"Please refer to the `full code example " +"`_ to learn more." msgstr "" +"请参阅 `完整代码示例 `_ 了解更多信息。" -#: ../../source/tutorial-quickstart-mxnet.rst:12 +#: ../../source/tutorial-quickstart-scikitlearn.rst:-1 +msgid "" +"Check out this Federated Learning quickstart tutorial for using Flower " +"with scikit-learn to train a linear regression model." +msgstr "查看此联邦学习快速入门教程,了解如何使用 Flower 和 scikit-learn 训练线性回归模型。" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:5 +msgid "Quickstart scikit-learn" +msgstr "scikit-learn快速入门" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:10 msgid "" -"In this tutorial, we will learn how to train a :code:`Sequential` model on " -"MNIST using Flower and MXNet." +"In this tutorial, we will learn how to train a :code:`Logistic " +"Regression` model on MNIST using Flower and scikit-learn." 
msgstr "" -"在本教程中,我们将学习如何使用 Flower 和 MXNet 在 MNIST 上训练 :code:" -"`Sequential` 模型。" +"在本教程中,我们将学习如何使用 Flower 和 scikit-learn 在 MNIST 上训练一个 :code:`Logistic " +"Regression` 模型。" -#: ../../source/tutorial-quickstart-mxnet.rst:14 #: ../../source/tutorial-quickstart-scikitlearn.rst:12 #, fuzzy msgid "" -"It is recommended to create a virtual environment and run everything within " -"this :doc:`virtualenv `." +"It is recommended to create a virtual environment and run everything " +"within this :doc:`virtualenv `." msgstr "" -"建议创建一个虚拟环境,并在此 `virtualenv `_ 中运行所有内容。" +"建议创建一个虚拟环境,并在此 `virtualenv `_ 中运行所有内容。" -#: ../../source/tutorial-quickstart-mxnet.rst:16 -#: ../../source/tutorial-quickstart-pytorch.rst:17 -#: ../../source/tutorial-quickstart-scikitlearn.rst:14 -msgid "" -"Our example consists of one *server* and two *clients* all having the same " -"model." -msgstr "我们的例子包括一个*服务器*和两个*客户端*,它们都有相同的模型。" - -#: ../../source/tutorial-quickstart-mxnet.rst:18 #: ../../source/tutorial-quickstart-scikitlearn.rst:16 msgid "" -"*Clients* are responsible for generating individual model parameter updates " -"for the model based on their local datasets. These updates are then sent to " -"the *server* which will aggregate them to produce an updated global model. " -"Finally, the *server* sends this improved version of the model back to each " -"*client*. A complete cycle of parameters updates is called a *round*." -msgstr "" -"*客户端*负责根据其本地数据集为模型生成单独的模型参数更新。然后,这些参数更新" -"将被发送到*服务器*,由*服务器*汇总后生成一个更新的全局模型。最后,*服务器*将" -"这一改进版模型发回给每个*客户端*。一个完整的参数更新周期称为一*轮*。" +"*Clients* are responsible for generating individual model parameter " +"updates for the model based on their local datasets. These updates are " +"then sent to the *server* which will aggregate them to produce an updated" +" global model. Finally, the *server* sends this improved version of the " +"model back to each *client*. A complete cycle of parameters updates is " +"called a *round*." +msgstr "*客户端*负责根据其本地数据集为模型生成单独的模型参数更新。然后,这些参数更新将被发送到*服务器*,由*服务器*汇总后生成一个更新的全局模型。最后,*服务器*将这一改进版模型发回给每个*客户端*。一个完整的参数更新周期称为一*轮*。" -#: ../../source/tutorial-quickstart-mxnet.rst:22 #: ../../source/tutorial-quickstart-scikitlearn.rst:20 msgid "" "Now that we have a rough idea of what is going on, let's get started. We " "first need to install Flower. You can do this by running:" -msgstr "" -"现在,我们已经有了一个大致的概念,让我们开始吧。首先,我们需要安装 Flower。运" -"行:" +msgstr "现在,我们已经有了一个大致的概念,让我们开始吧。首先,我们需要安装 Flower。运行:" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:26 +#, fuzzy +msgid "Since we want to use scikit-learn, let's go ahead and install it:" +msgstr "既然我们要使用 scikt-learn,那就继续安装吧:" -#: ../../source/tutorial-quickstart-mxnet.rst:28 -msgid "Since we want to use MXNet, let's go ahead and install it:" -msgstr "既然我们要使用 MXNet,那就继续安装吧:" +#: ../../source/tutorial-quickstart-scikitlearn.rst:32 +msgid "Or simply install all dependencies using Poetry:" +msgstr "或者直接使用 Poetry 安装所有依赖项:" -#: ../../source/tutorial-quickstart-mxnet.rst:38 +#: ../../source/tutorial-quickstart-scikitlearn.rst:42 msgid "" "Now that we have all our dependencies installed, let's run a simple " -"distributed training with two clients and one server. Our training procedure " -"and network architecture are based on MXNet´s `Hand-written Digit " -"Recognition tutorial `_." +"distributed training with two clients and one server. However, before " +"setting up the client and server, we will define all functionalities that" +" we need for our federated learning setup within :code:`utils.py`. 
The " +":code:`utils.py` contains different functions defining all the machine " +"learning basics:" msgstr "" -"现在,我们已经安装了所有依赖项,让我们用两个客户端和一个服务器来运行一个简单" -"的分布式训练。我们的训练程序和网络架构基于 MXNet 的 `手写数字识别教程 " -"`_\"。" +"现在我们已经安装了所有的依赖项,让我们用两个客户端和一个服务器来运行一个简单的分布式训练。不过,在设置客户端和服务器之前,我们将在 " +":code:`utils.py` 中定义联邦学习设置所需的所有功能。:code:`utils.py`包含定义所有机器学习基础知识的不同函数:" -#: ../../source/tutorial-quickstart-mxnet.rst:40 -msgid "" -"In a file called :code:`client.py`, import Flower and MXNet related packages:" -msgstr "在名为 :code:`client.py` 的文件中,导入 Flower 和 MXNet 相关软件包:" +#: ../../source/tutorial-quickstart-scikitlearn.rst:45 +msgid ":code:`get_model_parameters()`" +msgstr ":code:`get_model_parameters()`" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:46 +msgid "Returns the parameters of a :code:`sklearn` LogisticRegression model" +msgstr "返回 :code:`sklearn` LogisticRegression 模型的参数" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:47 +msgid ":code:`set_model_params()`" +msgstr ":code:`set_model_params()`" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:48 +#, fuzzy +msgid "Sets the parameters of a :code:`sklearn` LogisticRegression model" +msgstr "设置:code:`sklean`的LogisticRegression模型的参数" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:50 +msgid ":code:`set_initial_params()`" +msgstr ":code:`set_initial_params()`" -#: ../../source/tutorial-quickstart-mxnet.rst:55 -msgid "In addition, define the device allocation in MXNet with:" -msgstr "此外,还可以在 MXNet 中定义设备分配:" +#: ../../source/tutorial-quickstart-scikitlearn.rst:50 +msgid "Initializes the model parameters that the Flower server will ask for" +msgstr "初始化 Flower 服务器将要求的模型参数" -#: ../../source/tutorial-quickstart-mxnet.rst:61 +#: ../../source/tutorial-quickstart-scikitlearn.rst:52 msgid "" -"We use MXNet to load MNIST, a popular image classification dataset of " -"handwritten digits for machine learning. The MXNet utility :code:`mx." -"test_utils.get_mnist()` downloads the training and test data." +"Please check out :code:`utils.py` `here " +"`_ for more details. The pre-defined functions are used in" +" the :code:`client.py` and imported. The :code:`client.py` also requires " +"to import several packages such as Flower and scikit-learn:" msgstr "" -"我们使用 MXNet 加载 MNIST,这是一个用于机器学习的流行手写数字图像分类数据集。" -"MXNet 工具 :code:`mx.test_utils.get_mnist()` 会下载训练和测试数据。" +"更多详情请查看 :code:`utils.py`` 这里 " +"`_。在 :code:`client.py` 中使用并导入了预定义函数。:code:`client.py` " +"还需要导入几个软件包,如 Flower 和 scikit-learn:" -#: ../../source/tutorial-quickstart-mxnet.rst:75 +#: ../../source/tutorial-quickstart-scikitlearn.rst:67 msgid "" -"Define the training and loss with MXNet. We train the model by looping over " -"the dataset, measure the corresponding loss, and optimize it." +"Prior to local training, we need to load the MNIST dataset, a popular " +"image classification dataset of handwritten digits for machine learning, " +"and partition the dataset for FL. This can be conveniently achieved using" +" `Flower Datasets `_. The " +":code:`FederatedDataset.load_partition()` method loads the partitioned " +"training set for each partition ID defined in the :code:`--partition-id` " +"argument." msgstr "" -"用 MXNet 定义训练和损失值。我们在数据集上循环训练模型,测量相应的损失值,并对" -"其进行优化。" -#: ../../source/tutorial-quickstart-mxnet.rst:113 +#: ../../source/tutorial-quickstart-scikitlearn.rst:95 +msgid "" +"Next, the logistic regression model is defined and initialized with " +":code:`utils.set_initial_params()`." 
+msgstr "接下来,使用 :code:`utils.set_initial_params()` 对逻辑回归模型进行定义和初始化。" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:107 msgid "" -"Next, we define the validation of our machine learning model. We loop over " -"the test set and measure both loss and accuracy on the test set." +"The Flower server interacts with clients through an interface called " +":code:`Client`. When the server selects a particular client for training," +" it sends training instructions over the network. The client receives " +"those instructions and calls one of the :code:`Client` methods to run " +"your code (i.e., to fit the logistic regression we defined earlier)." msgstr "" -"接下来,我们定义机器学习模型的验证。我们在测试集上循环,测量测试集上的损失值" -"和准确率。" +"Flower 服务器通过一个名为 :code:`Client` " +"的接口与客户端交互。当服务器选择一个特定的客户端进行训练时,它会通过网络发送训练指令。客户端接收到这些指令后,会调用 :code:`Client`" +" 方法之一来运行您的代码(即拟合我们之前定义的逻辑回归)。" -#: ../../source/tutorial-quickstart-mxnet.rst:137 +#: ../../source/tutorial-quickstart-scikitlearn.rst:113 msgid "" -"After defining the training and testing of a MXNet machine learning model, " -"we use these functions to implement a Flower client." +"Flower provides a convenience class called :code:`NumPyClient` which " +"makes it easier to implement the :code:`Client` interface when your " +"workload uses scikit-learn. Implementing :code:`NumPyClient` usually " +"means defining the following methods (:code:`set_parameters` is optional " +"though):" msgstr "" -"在定义了 MXNet 机器学习模型的训练和测试后,我们使用这些函数实现了 Flower 客户" -"端。" +"Flower 提供了一个名为 :code:`NumPyClient` 的便捷类,当你的工作负载使用 scikit-learn " +"时,它可以让你更容易地实现 :code:`Client` 接口。实现 :code:`NumPyClient` " +"通常意味着定义以下方法(:code:`set_parameters` 是可选的):" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:122 +msgid "is directly imported with :code:`utils.set_model_params()`" +msgstr "直接导入 :code:`utils.set_model_params()`" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:130 +msgid "The methods can be implemented in the following way:" +msgstr "这些方法可以通过以下方式实现:" -#: ../../source/tutorial-quickstart-mxnet.rst:139 -msgid "Our Flower clients will use a simple :code:`Sequential` model:" -msgstr "我们的 Flower 客户端将使用简单的 :code:`Sequential` 模型:" +#: ../../source/tutorial-quickstart-scikitlearn.rst:153 +msgid "" +"We can now create an instance of our class :code:`MnistClient` and add " +"one line to actually run this client:" +msgstr "现在我们可以创建一个 :code:`MnistClient` 类的实例,并添加一行来实际运行该客户端:" -#: ../../source/tutorial-quickstart-mxnet.rst:158 +#: ../../source/tutorial-quickstart-scikitlearn.rst:160 +#, fuzzy msgid "" -"After loading the dataset with :code:`load_data()` we perform one forward " -"propagation to initialize the model and model parameters with :code:" -"`model(init)`. Next, we implement a Flower client." +"That's it for the client. We only have to implement :code:`Client` or " +":code:`NumPyClient` and call :code:`fl.client.start_client()`. If you " +"implement a client of type :code:`NumPyClient` you'll need to first call " +"its :code:`to_client()` method. The string :code:`\"0.0.0.0:8080\"` tells" +" the client which server to connect to. In our case we can run the server" +" and the client on the same machine, therefore we use " +":code:`\"0.0.0.0:8080\"`. If we run a truly federated workload with the " +"server and clients running on different machines, all that needs to " +"change is the :code:`server_address` we pass to the client." 
msgstr "" -"使用 :code:`load_data()` 加载数据集后,我们会执行一次前向传播,使用 :code:" -"`model(init)` 初始化模型和模型参数。接下来,我们实现一个 Flower 客户端。" +"这就是客户端。我们只需实现 :code:`Client` 或 :code:`NumPyClient` 并调用 " +":code:`fl.client.start_client()` 或 " +":code:`fl.client.start_numpy_client()`。字符串 " +":code:`\"0.0.0.0:8080\"`会告诉客户端要连接的服务器。在本例中,我们可以在同一台机器上运行服务器和客户端,因此我们使用 " +":code:`\"0.0.0.0:8080\"`。如果我们运行的是真正的联邦工作负载,服务器和客户端运行在不同的机器上,那么需要改变的只是传递给客户端的" +" :code:`server_address`。" -#: ../../source/tutorial-quickstart-mxnet.rst:160 -#: ../../source/tutorial-quickstart-pytorch.rst:144 -#: ../../source/tutorial-quickstart-tensorflow.rst:54 +#: ../../source/tutorial-quickstart-scikitlearn.rst:169 +msgid "" +"The following Flower server is a little bit more advanced and returns an " +"evaluation function for the server-side evaluation. First, we import " +"again all required libraries such as Flower and scikit-learn." +msgstr "" +"下面的 Flower 服务器更先进一些,会返回一个用于服务器端评估的评估函数。首先,我们再次导入所有需要的库,如 Flower 和 scikit-" +"learn。" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:172 +msgid ":code:`server.py`, import Flower and start the server:" +msgstr ":code:`server.py`, 导入 Flower 并启动服务器:" + +#: ../../source/tutorial-quickstart-scikitlearn.rst:185 +#, fuzzy msgid "" -"The Flower server interacts with clients through an interface called :code:" -"`Client`. When the server selects a particular client for training, it sends " -"training instructions over the network. The client receives those " -"instructions and calls one of the :code:`Client` methods to run your code (i." -"e., to train the neural network we defined earlier)." +"The number of federated learning rounds is set in :code:`fit_round()` and" +" the evaluation is defined in :code:`get_evaluate_fn()`. The evaluation " +"function is called after each federated learning round and gives you " +"information about loss and accuracy. Note that we also make use of Flower" +" Datasets here to load the test split of the MNIST dataset for server-" +"side evaluation." msgstr "" -"Flower 服务器通过一个名为 :code:`Client` 的接口与客户端交互。当服务器选择一个" -"特定的客户端进行训练时,它会通过网络发送训练指令。客户端接收到这些指令后,会" -"调用 :code:`Client` 方法之一来运行您的代码(即训练我们之前定义的神经网络)。" +"联邦学习轮数在 :code:`fit_round()` 中设置,评估在 :code:`get_evaluate_fn()` " +"中定义。每轮联邦学习后都会调用评估函数,并提供有关损失值和准确率的信息。" -#: ../../source/tutorial-quickstart-mxnet.rst:166 +#: ../../source/tutorial-quickstart-scikitlearn.rst:213 msgid "" -"Flower provides a convenience class called :code:`NumPyClient` which makes " -"it easier to implement the :code:`Client` interface when your workload uses " -"MXNet. Implementing :code:`NumPyClient` usually means defining the following " -"methods (:code:`set_parameters` is optional though):" +"The :code:`main` contains the server-side parameter initialization " +":code:`utils.set_initial_params()` as well as the aggregation strategy " +":code:`fl.server.strategy:FedAvg()`. The strategy is the default one, " +"federated averaging (or FedAvg), with two clients and evaluation after " +"each federated learning round. The server can be started with the command" +" :code:`fl.server.start_server(server_address=\"0.0.0.0:8080\", " +"strategy=strategy, config=fl.server.ServerConfig(num_rounds=3))`." 
msgstr "" -"Flower 提供了一个名为 :code:`NumPyClient` 的便捷类,当您的工作负载使用 MXNet " -"时,它可以让您更轻松地实现 :code:`Client` 接口。实现 :code:`NumPyClient` 通常" -"意味着定义以下方法(:code:`set_parameters` 是可选的):" +":code:`main`包含服务器端参数初始化:code:`utils.set_initial_params()`以及聚合策略 " +":code:`fl.server.strategy:FedAvg()`。该策略是默认的联邦平均(或 " +"FedAvg)策略,有两个客户端,在每轮联邦学习后进行评估。可以使用 " +":code:`fl.server.start_server(server_address=\"0.0.0.0:8080\", " +"strategy=strategy, config=fl.server.ServerConfig(num_rounds=3))` 命令启动服务器。" -#: ../../source/tutorial-quickstart-mxnet.rst:172 -#: ../../source/tutorial-quickstart-pytorch.rst:156 -#: ../../source/tutorial-quickstart-scikitlearn.rst:109 -msgid "return the model weight as a list of NumPy ndarrays" -msgstr "以 NumPy ndarrays 列表形式返回模型参数" +#: ../../source/tutorial-quickstart-scikitlearn.rst:232 +msgid "" +"With both client and server ready, we can now run everything and see " +"federated learning in action. Federated learning systems usually have a " +"server and multiple clients. We, therefore, have to start the server " +"first:" +msgstr "客户端和服务器都准备就绪后,我们现在就可以运行一切,看看联邦学习的运行情况。联邦学习系统通常有一个服务器和多个客户端。因此,我们必须先启动服务器:" -#: ../../source/tutorial-quickstart-mxnet.rst:173 -#: ../../source/tutorial-quickstart-pytorch.rst:157 -#: ../../source/tutorial-quickstart-scikitlearn.rst:111 -msgid ":code:`set_parameters` (optional)" -msgstr ":code:`set_parameters` (可选)" +#: ../../source/tutorial-quickstart-scikitlearn.rst:286 +msgid "" +"Congratulations! You've successfully built and run your first federated " +"learning system. The full `source code " +"`_ for this example can be found in :code:`examples/sklearn-logreg-" +"mnist`." +msgstr "" +"恭喜您!您已经成功构建并运行了第一个联邦学习系统。本示例的`完整源代码 " +"`_ 可以在 :code:`examples/sklearn-logreg-mnist` 中找到。" -#: ../../source/tutorial-quickstart-mxnet.rst:174 -#: ../../source/tutorial-quickstart-pytorch.rst:158 -#: ../../source/tutorial-quickstart-scikitlearn.rst:111 +#: ../../source/tutorial-quickstart-tensorflow.rst:-1 msgid "" -"update the local model weights with the parameters received from the server" -msgstr "用从服务器接收到的参数更新本地模型参数" +"Check out this Federated Learning quickstart tutorial for using Flower " +"with TensorFlow to train a MobilNetV2 model on CIFAR-10." +msgstr "查看此联邦学习快速入门教程,了解如何使用 Flower 和 TensorFlow 在 CIFAR-10 上训练 MobilNetV2 模型。" -#: ../../source/tutorial-quickstart-mxnet.rst:176 -#: ../../source/tutorial-quickstart-pytorch.rst:160 -#: ../../source/tutorial-quickstart-scikitlearn.rst:114 -msgid "set the local model weights" -msgstr "设置本地模型参数" +#: ../../source/tutorial-quickstart-tensorflow.rst:5 +msgid "Quickstart TensorFlow" +msgstr "快速入门 TensorFlow" -#: ../../source/tutorial-quickstart-mxnet.rst:177 -#: ../../source/tutorial-quickstart-pytorch.rst:161 -#: ../../source/tutorial-quickstart-scikitlearn.rst:115 -msgid "train the local model" -msgstr "训练本地模型" +#: ../../source/tutorial-quickstart-tensorflow.rst:13 +msgid "Let's build a federated learning system in less than 20 lines of code!" +msgstr "让我们用不到 20 行代码构建一个联邦学习系统!" 
-#: ../../source/tutorial-quickstart-mxnet.rst:178 -#: ../../source/tutorial-quickstart-pytorch.rst:162 -#: ../../source/tutorial-quickstart-scikitlearn.rst:116 -msgid "receive the updated local model weights" -msgstr "接收更新的本地模型参数" +#: ../../source/tutorial-quickstart-tensorflow.rst:15 +msgid "Before Flower can be imported we have to install it:" +msgstr "在导入 Flower 之前,我们必须先安装它:" -#: ../../source/tutorial-quickstart-mxnet.rst:180 -#: ../../source/tutorial-quickstart-pytorch.rst:164 -#: ../../source/tutorial-quickstart-scikitlearn.rst:118 -msgid "test the local model" -msgstr "测试本地模型" +#: ../../source/tutorial-quickstart-tensorflow.rst:21 +msgid "" +"Since we want to use the Keras API of TensorFlow (TF), we have to install" +" TF as well:" +msgstr "由于我们要使用 TensorFlow (TF) 的 Keras API,因此还必须安装 TF:" -#: ../../source/tutorial-quickstart-mxnet.rst:182 -msgid "They can be implemented in the following way:" -msgstr "它们可以通过以下方式实现:" +#: ../../source/tutorial-quickstart-tensorflow.rst:31 +msgid "Next, in a file called :code:`client.py`, import Flower and TensorFlow:" +msgstr "接下来,在名为 :code:`client.py` 的文件中导入 Flower 和 TensorFlow:" -#: ../../source/tutorial-quickstart-mxnet.rst:212 +#: ../../source/tutorial-quickstart-tensorflow.rst:38 msgid "" -"We can now create an instance of our class :code:`MNISTClient` and add one " -"line to actually run this client:" +"We use the Keras utilities of TF to load CIFAR10, a popular colored image" +" classification dataset for machine learning. The call to " +":code:`tf.keras.datasets.cifar10.load_data()` downloads CIFAR10, caches " +"it locally, and then returns the entire training and test set as NumPy " +"ndarrays." msgstr "" -"现在我们可以创建一个 :code:`MNISTClient` 类的实例,并添加一行来实际运行该客户" -"端:" +"我们使用 TF 的 Keras 实用程序加载 CIFAR10,这是一个用于机器学习的流行彩色图像分类数据集。调用 " +":code:`tf.keras.datasets.cifar10.load_data()` 会下载 CIFAR10,将其缓存到本地,然后以 " +"NumPy ndarrays 的形式返回整个训练集和测试集。" -#: ../../source/tutorial-quickstart-mxnet.rst:219 +#: ../../source/tutorial-quickstart-tensorflow.rst:47 msgid "" -"That's it for the client. We only have to implement :code:`Client` or :code:" -"`NumPyClient` and call :code:`fl.client.start_client()` or :code:`fl.client." -"start_numpy_client()`. The string :code:`\"0.0.0.0:8080\"` tells the client " -"which server to connect to. In our case we can run the server and the client " -"on the same machine, therefore we use :code:`\"0.0.0.0:8080\"`. If we run a " -"truly federated workload with the server and clients running on different " -"machines, all that needs to change is the :code:`server_address` we pass to " -"the client." -msgstr "" -"这就是客户端。我们只需实现 :code:`Client` 或 :code:`NumPyClient` 并调用 :" -"code:`fl.client.start_client()` 或 :code:`fl.client.start_numpy_client()`。字" -"符串 :code:`\"0.0.0.0:8080\"`会告诉客户端要连接的服务器。在本例中,我们可以在" -"同一台机器上运行服务器和客户端,因此我们使用 :code:`\"0.0.0.0:8080\"`。如果我" -"们运行的是真正的联邦工作负载,服务器和客户端运行在不同的机器上,那么需要改变" -"的只是传递给客户端的 :code:`server_address`。" +"Next, we need a model. For the purpose of this tutorial, we use " +"MobilNetV2 with 10 output classes:" +msgstr "接下来,我们需要一个模型。在本教程中,我们使用带有 10 个输出类的 MobilNetV2:" -#: ../../source/tutorial-quickstart-mxnet.rst:241 +#: ../../source/tutorial-quickstart-tensorflow.rst:60 msgid "" -"With both client and server ready, we can now run everything and see " -"federated learning in action. Federated learning systems usually have a " -"server and multiple clients. 
We therefore have to start the server first:" +"Flower provides a convenience class called :code:`NumPyClient` which " +"makes it easier to implement the :code:`Client` interface when your " +"workload uses Keras. The :code:`NumPyClient` interface defines three " +"methods which can be implemented in the following way:" msgstr "" -"客户端和服务器都准备就绪后,我们现在就可以运行一切,看看联邦学习的运行情况。" -"联邦学习系统通常有一个服务器和多个客户端。因此,我们必须先启动服务器:" +"Flower 提供了一个名为 :code:`NumPyClient` 的便捷类,当您的工作负载使用 Keras 时,该类可以更轻松地实现 " +":code:`Client` 接口。:code:`NumPyClient` 接口定义了三个方法,可以通过以下方式实现:" -#: ../../source/tutorial-quickstart-mxnet.rst:249 -#: ../../source/tutorial-quickstart-pytorch.rst:226 -#: ../../source/tutorial-quickstart-scikitlearn.rst:224 -#: ../../source/tutorial-quickstart-tensorflow.rst:122 -#: ../../source/tutorial-quickstart-xgboost.rst:533 +#: ../../source/tutorial-quickstart-tensorflow.rst:135 +msgid "Each client will have its own dataset." +msgstr "每个客户都有自己的数据集。" + +#: ../../source/tutorial-quickstart-tensorflow.rst:137 +msgid "" +"You should now see how the training does in the very first terminal (the " +"one that started the server):" +msgstr "现在你应该能在第一个终端(启动服务器的终端)看到训练的效果了:" + +#: ../../source/tutorial-quickstart-tensorflow.rst:169 msgid "" -"Once the server is running we can start the clients in different terminals. " -"Open a new terminal and start the first client:" +"Congratulations! You've successfully built and run your first federated " +"learning system. The full `source code " +"`_ for this can be found in :code:`examples" +"/quickstart-tensorflow/client.py`." msgstr "" -"服务器运行后,我们就可以在不同终端启动客户端了。打开一个新终端,启动第一个客" -"户端:" +"恭喜您!您已经成功构建并运行了第一个联邦学习系统。`完整的源代码 " +"`_ 可以在 :code:`examples/quickstart-" +"tensorflow/client.py` 中找到。" -#: ../../source/tutorial-quickstart-mxnet.rst:256 -#: ../../source/tutorial-quickstart-pytorch.rst:233 -#: ../../source/tutorial-quickstart-scikitlearn.rst:231 -#: ../../source/tutorial-quickstart-tensorflow.rst:129 -#: ../../source/tutorial-quickstart-xgboost.rst:540 -msgid "Open another terminal and start the second client:" -msgstr "打开另一台终端,启动第二个客户端:" +#: ../../source/tutorial-quickstart-xgboost.rst:-1 +msgid "" +"Check out this Federated Learning quickstart tutorial for using Flower " +"with XGBoost to train classification models on trees." +msgstr "查看此联邦学习 快速入门教程,了解如何使用 Flower 和 XGBoost 上训练分类模型。" -#: ../../source/tutorial-quickstart-mxnet.rst:262 -#: ../../source/tutorial-quickstart-pytorch.rst:239 -#: ../../source/tutorial-quickstart-scikitlearn.rst:237 -#: ../../source/tutorial-quickstart-xgboost.rst:546 +#: ../../source/tutorial-quickstart-xgboost.rst:5 +msgid "Quickstart XGBoost" +msgstr "XGBoost快速入门" + +#: ../../source/tutorial-quickstart-xgboost.rst:14 +msgid "Federated XGBoost" +msgstr "联邦化 XGBoost" + +#: ../../source/tutorial-quickstart-xgboost.rst:16 msgid "" -"Each client will have its own dataset. You should now see how the training " -"does in the very first terminal (the one that started the server):" +"EXtreme Gradient Boosting (**XGBoost**) is a robust and efficient " +"implementation of gradient-boosted decision tree (**GBDT**), that " +"maximises the computational boundaries for boosted tree methods. It's " +"primarily designed to enhance both the performance and computational " +"speed of machine learning models. In XGBoost, trees are constructed " +"concurrently, unlike the sequential approach taken by GBDT." 
msgstr "" -"每个客户端都有自己的数据集。现在你应该看到第一个终端(启动服务器的终端)的训" -"练效果了:" +"EXtreme Gradient " +"Boosting(**XGBoost**)是梯度提升决策树(**GBDT**)的一种稳健而高效的实现方法,能最大限度地提高提升树方法的计算边界。它主要用于提高机器学习模型的性能和计算速度。在" +" XGBoost 中,决策树是并发构建的,与 GBDT 采用的顺序方法不同。" -#: ../../source/tutorial-quickstart-mxnet.rst:294 +#: ../../source/tutorial-quickstart-xgboost.rst:20 msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. The full `source code `_ for this example can be found " -"in :code:`examples/quickstart-mxnet`." -msgstr "" -"恭喜您!您已经成功构建并运行了第一个联邦学习系统。本示例的`完整源代码 " -"`_ 可在 :code:`examples/quickstart-mxnet` 中找到。" +"Often, for tabular data on medium-sized datasets with fewer than 10k " +"training examples, XGBoost surpasses the results of deep learning " +"techniques." +msgstr "对于训练示例少于 10k 的中型数据集上的表格数据,XGBoost 的结果往往超过深度学习技术。" -#: ../../source/tutorial-quickstart-pandas.rst:-1 +#: ../../source/tutorial-quickstart-xgboost.rst:23 +msgid "Why federated XGBoost?" +msgstr "为什么选择联邦 XGBoost?" + +#: ../../source/tutorial-quickstart-xgboost.rst:25 +msgid "" +"Indeed, as the demand for data privacy and decentralized learning grows, " +"there's an increasing requirement to implement federated XGBoost systems " +"for specialised applications, like survival analysis and financial fraud " +"detection." +msgstr "事实上,随着对数据隐私和分散学习的需求不断增长,越来越多的专业应用(如生存分析和金融欺诈检测)需要实施联邦 XGBoost 系统。" + +#: ../../source/tutorial-quickstart-xgboost.rst:27 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower with " -"Pandas to perform Federated Analytics." +"Federated learning ensures that raw data remains on the local device, " +"making it an attractive approach for sensitive domains where data " +"security and privacy are paramount. Given the robustness and efficiency " +"of XGBoost, combining it with federated learning offers a promising " +"solution for these specific challenges." msgstr "" -"查看此联邦学习快速入门教程,了解如何使用 Flower 和 Pandas 执行联邦分析。" +"联邦学习可确保原始数据保留在本地设备上,因此对于数据安全和隐私至关重要的敏感领域来说,这是一种极具吸引力的方法。鉴于 XGBoost " +"的稳健性和高效性,将其与联邦学习相结合为应对这些特定挑战提供了一种前景广阔的解决方案。" -#: ../../source/tutorial-quickstart-pandas.rst:5 -msgid "Quickstart Pandas" -msgstr "快速入门Pandas" +#: ../../source/tutorial-quickstart-xgboost.rst:30 +msgid "" +"In this tutorial we will learn how to train a federated XGBoost model on " +"HIGGS dataset using Flower and :code:`xgboost` package. We use a simple " +"example (`full code xgboost-quickstart " +"`_)" +" with two *clients* and one *server* to demonstrate how federated XGBoost" +" works, and then we dive into a more complex example (`full code xgboost-" +"comprehensive `_) to run various experiments." +msgstr "" +"在本教程中,我们将学习如何使用 Flower 和 :code:`xgboost` 软件包在 HIGGS 数据集上训练联邦 XGBoost " +"模型。我们将使用一个包含两个 * 客户端* 和一个 * 服务器* 的简单示例 (`完整代码 xgboost-quickstart " +"`_)来演示联邦 XGBoost 如何工作,然后我们将深入到一个更复杂的示例 (`完整代码 xgboost-" +"comprehensive `_),以运行各种实验。" -#: ../../source/tutorial-quickstart-pandas.rst:10 -msgid "Let's build a federated analytics system using Pandas and Flower!" -msgstr "让我们使用 Pandas 和 Flower 建立一个联邦分析系统!" +#: ../../source/tutorial-quickstart-xgboost.rst:37 +msgid "Environment Setup" +msgstr "环境设定" + +#: ../../source/tutorial-quickstart-xgboost.rst:41 +msgid "" +"We first need to install Flower and Flower Datasets. 
You can do this by " +"running :" +msgstr "我们首先需要安装 Flower 和 Flower Datasets。您可以通过运行 :" + +#: ../../source/tutorial-quickstart-xgboost.rst:47 +msgid "" +"Since we want to use :code:`xgboost` package to build up XGBoost trees, " +"let's go ahead and install :code:`xgboost`:" +msgstr "既然我们要使用 :code:`xgboost` 软件包来构建 XGBoost 树,那就继续安装 :code:`xgboost`:" -#: ../../source/tutorial-quickstart-pandas.rst:12 +#: ../../source/tutorial-quickstart-xgboost.rst:57 msgid "" -"Please refer to the `full code example `_ to learn more." -msgstr "" -"请参阅 `完整代码示例 `_\" 了解更多信息。" +"*Clients* are responsible for generating individual weight-updates for " +"the model based on their local datasets. Now that we have all our " +"dependencies installed, let's run a simple distributed training with two " +"clients and one server." +msgstr "*客户端*负责根据其本地数据集为模型生成单独的模型参数更新。现在我们已经安装了所有的依赖项,让我们用两个客户端和一个服务器来运行一个简单的分布式训练。" -#: ../../source/tutorial-quickstart-pytorch.rst:-1 +#: ../../source/tutorial-quickstart-xgboost.rst:60 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower with " -"PyTorch to train a CNN model on MNIST." -msgstr "" -"查看此联邦学习快速入门教程,了解如何使用 Flower 和 PyTorch 在 MNIST 上训练 " -"CNN 模型。" +"In a file called :code:`client.py`, import xgboost, Flower, Flower " +"Datasets and other related functions:" +msgstr "在名为 :code:`client.py` 的文件中,导入 xgboost、Flower、Flower Datasets 和其他相关函数:" -#: ../../source/tutorial-quickstart-pytorch.rst:13 +#: ../../source/tutorial-quickstart-xgboost.rst:87 +msgid "Dataset partition and hyper-parameter selection" +msgstr "数据集划分和超参数选择" + +#: ../../source/tutorial-quickstart-xgboost.rst:89 msgid "" -"In this tutorial we will learn how to train a Convolutional Neural Network " -"on CIFAR10 using Flower and PyTorch." -msgstr "" -"在本教程中,我们将学习如何使用 Flower 和 PyTorch 在 CIFAR10 上训练卷积神经网" -"络。" +"Prior to local training, we require loading the HIGGS dataset from Flower" +" Datasets and conduct data partitioning for FL:" +msgstr "在本地训练之前,我们需要从 Flower Datasets 加载 HIGGS 数据集,并对 FL 进行数据分区:" -#: ../../source/tutorial-quickstart-pytorch.rst:15 -#: ../../source/tutorial-quickstart-xgboost.rst:39 -#, fuzzy +#: ../../source/tutorial-quickstart-xgboost.rst:102 msgid "" -"First of all, it is recommended to create a virtual environment and run " -"everything within a :doc:`virtualenv `." +"In this example, we split the dataset into two partitions with uniform " +"distribution (:code:`IidPartitioner(num_partitions=2)`). Then, we load " +"the partition for the given client based on :code:`node_id`:" msgstr "" -"首先,建议创建一个虚拟环境,并在 `virtualenv `_ 中运行一切。" +"在此示例中,我们将数据集分割成两个均匀分布的分区(:code:`IidPartitioner(num_partitions=2)`)。然后,我们根据" +" :code:`node_id` 为给定客户端加载分区:" -#: ../../source/tutorial-quickstart-pytorch.rst:19 +#: ../../source/tutorial-quickstart-xgboost.rst:121 msgid "" -"*Clients* are responsible for generating individual weight-updates for the " -"model based on their local datasets. These updates are then sent to the " -"*server* which will aggregate them to produce a better model. Finally, the " -"*server* sends this improved version of the model back to each *client*. A " -"complete cycle of weight updates is called a *round*." -msgstr "" -"*客户端*负责在其本地数据集上更新模型参数。然后,这些参数会被发送到*服务器*," -"由*服务器*聚合后生成一个更好的模型。最后,*服务器*将改进后的模型发送回每个*客" -"户端*。一个完整的模型参数更新周期称为一*轮*。" +"After that, we do train/test splitting on the given partition (client's " +"local data), and transform data format for :code:`xgboost` package." 
+msgstr "然后,我们在给定的分区(客户端的本地数据)上进行训练/测试分割,并为 :code:`xgboost` 软件包转换数据格式。" -#: ../../source/tutorial-quickstart-pytorch.rst:23 +#: ../../source/tutorial-quickstart-xgboost.rst:134 msgid "" -"Now that we have a rough idea of what is going on, let's get started. We " -"first need to install Flower. You can do this by running :" -msgstr "" -"现在,我们已经有了一个大致的概念了,那就让我们开始吧。首先,我们需要安装 " -"Flower。可以通过运行 :" +"The functions of :code:`train_test_split` and " +":code:`transform_dataset_to_dmatrix` are defined as below:" +msgstr ":code:`train_test_split` 和 :code:`transform_dataset_too_dmatrix` 的函数定义如下:" -#: ../../source/tutorial-quickstart-pytorch.rst:29 +#: ../../source/tutorial-quickstart-xgboost.rst:158 +msgid "Finally, we define the hyper-parameters used for XGBoost training." +msgstr "最后,我们定义了用于 XGBoost 训练的超参数。" + +#: ../../source/tutorial-quickstart-xgboost.rst:174 msgid "" -"Since we want to use PyTorch to solve a computer vision task, let's go ahead " -"and install PyTorch and the **torchvision** library:" +"The :code:`num_local_round` represents the number of iterations for local" +" tree boost. We use CPU for the training in default. One can shift it to " +"GPU by setting :code:`tree_method` to :code:`gpu_hist`. We use AUC as " +"evaluation metric." msgstr "" -"既然我们想用 PyTorch 解决计算机视觉任务,那就继续安装 PyTorch 和 " -"**torchvision** 库吧:" +"代码:`num_local_round`表示本地树的迭代次数。我们默认使用 CPU 进行训练。可以通过将 :code:`tree_method` " +"设置为 :code:`gpu_hist`,将其转换为 GPU。我们使用 AUC 作为评估指标。" -#: ../../source/tutorial-quickstart-pytorch.rst:39 +#: ../../source/tutorial-quickstart-xgboost.rst:181 +msgid "Flower client definition for XGBoost" +msgstr "用于 XGBoost 的 Flower 客户端定义" + +#: ../../source/tutorial-quickstart-xgboost.rst:183 msgid "" -"Now that we have all our dependencies installed, let's run a simple " -"distributed training with two clients and one server. Our training procedure " -"and network architecture are based on PyTorch's `Deep Learning with PyTorch " -"`_." +"After loading the dataset we define the Flower client. We follow the " +"general rule to define :code:`XgbClient` class inherited from " +":code:`fl.client.Client`." msgstr "" -"现在我们已经安装了所有的依赖项,让我们用两个客户端和一个服务器来运行一个简单" -"的分布式训练。我们的训练过程和网络架构基于 PyTorch 的《Deep Learning with " -"PyTorch `_》。" +"加载数据集后,我们定义 Flower 客户端。我们按照一般规则定义从 :code:`fl.client.Client` 继承而来的 " +":code:`XgbClient` 类。" -#: ../../source/tutorial-quickstart-pytorch.rst:41 +#: ../../source/tutorial-quickstart-xgboost.rst:193 msgid "" -"In a file called :code:`client.py`, import Flower and PyTorch related " -"packages:" -msgstr "在名为 :code:`client.py` 的文件中,导入 Flower 和 PyTorch 相关软件包:" - -#: ../../source/tutorial-quickstart-pytorch.rst:56 -msgid "In addition, we define the device allocation in PyTorch with:" -msgstr "此外,我们还在 PyTorch 中定义了设备分配:" +"The :code:`self.bst` is used to keep the Booster objects that remain " +"consistent across rounds, allowing them to store predictions from trees " +"integrated in earlier rounds and maintain other essential data structures" +" for training." +msgstr "" +"代码:`self.bst`用于保存在各轮中保持一致的 Booster " +"对象,使其能够存储在前几轮中集成的树的预测结果,并维护其他用于训练的重要数据结构。" -#: ../../source/tutorial-quickstart-pytorch.rst:62 +#: ../../source/tutorial-quickstart-xgboost.rst:196 msgid "" -"We use PyTorch to load CIFAR10, a popular colored image classification " -"dataset for machine learning. The PyTorch :code:`DataLoader()` downloads the " -"training and test data that are then normalized." 
+"Then, we override :code:`get_parameters`, :code:`fit` and " +":code:`evaluate` methods insides :code:`XgbClient` class as follows." msgstr "" -"我们使用 PyTorch 来加载 CIFAR10,这是一个用于机器学习的流行彩色图像分类数据" -"集。PyTorch :code:`DataLoader()`下载训练数据和测试数据,然后进行归一化处理。" +"然后,我们在 :code:`XgbClient` 类中重写 :code:`get_parameters`、:code:`fit` 和 " +":code:`evaluate` 方法如下。" -#: ../../source/tutorial-quickstart-pytorch.rst:78 +#: ../../source/tutorial-quickstart-xgboost.rst:210 msgid "" -"Define the loss and optimizer with PyTorch. The training of the dataset is " -"done by looping over the dataset, measure the corresponding loss and " -"optimize it." +"Unlike neural network training, XGBoost trees are not started from a " +"specified random weights. In this case, we do not use " +":code:`get_parameters` and :code:`set_parameters` to initialise model " +"parameters for XGBoost. As a result, let's return an empty tensor in " +":code:`get_parameters` when it is called by the server at the first " +"round." msgstr "" -"使用 PyTorch 定义损失和优化器。数据集的训练是通过循环数据集、测量相应的损失值" -"并对其进行优化来完成的。" +"与神经网络训练不同,XGBoost 树不是从指定的随机参数开始的。在这种情况下,我们不使用 :code:`get_parameters` 和 " +":code:`set_parameters` 来初始化 XGBoost 的模型参数。因此,当服务器在第一轮调用 " +":code:`get_parameters` 时,让我们在 :code:`get_parameters` 中返回一个空张量。" -#: ../../source/tutorial-quickstart-pytorch.rst:94 +#: ../../source/tutorial-quickstart-xgboost.rst:251 msgid "" -"Define then the validation of the machine learning network. We loop over " -"the test set and measure the loss and accuracy of the test set." +"In :code:`fit`, at the first round, we call :code:`xgb.train()` to build " +"up the first set of trees. the returned Booster object and config are " +"stored in :code:`self.bst` and :code:`self.config`, respectively. From " +"the second round, we load the global model sent from server to " +":code:`self.bst`, and then update model weights on local training data " +"with function :code:`local_boost` as follows:" msgstr "" -"然后定义机器学习网络的验证。我们在测试集上循环,计算测试集的损失值和准确率。" +"在 :code:`fit`中,第一轮我们调用 :code:`xgb.train()`来建立第一组树,返回的 Booster 对象和 config " +"分别存储在 :code:`self.bst` 和 :code:`self.config` 中。从第二轮开始,我们将服务器发送的全局模型加载到 " +":code:`self.bst`,然后使用函数 :code:`local_boost`更新本地训练数据的模型权重,如下所示:" -#: ../../source/tutorial-quickstart-pytorch.rst:113 +#: ../../source/tutorial-quickstart-xgboost.rst:269 msgid "" -"After defining the training and testing of a PyTorch machine learning model, " -"we use the functions for the Flower clients." +"Given :code:`num_local_round`, we update trees by calling " +":code:`self.bst.update` method. After training, the last " +":code:`N=num_local_round` trees will be extracted to send to the server." msgstr "" -"在定义了 PyTorch 机器学习模型的训练和测试之后,我们将这些功能用于 Flower 客户" -"端。" +"给定 :code:`num_local_round`,我们通过调用 " +":code:`self.bst.update`方法更新树。训练结束后,我们将提取最后一个 :code:`N=num_local_round` " +"树并发送给服务器。" -#: ../../source/tutorial-quickstart-pytorch.rst:115 +#: ../../source/tutorial-quickstart-xgboost.rst:291 msgid "" -"The Flower clients will use a simple CNN adapted from 'PyTorch: A 60 Minute " -"Blitz':" -msgstr "Flower 客户端将使用一个简单的从“PyTorch: 60 分钟突击\"改编的CNN:" +"In :code:`evaluate`, we call :code:`self.bst.eval_set` function to " +"conduct evaluation on valid set. The AUC value will be returned." +msgstr "在 :code:`evaluate`中,我们调用 :code:`self.bst.eval_set`函数对有效集合进行评估。将返回 AUC 值。" -#: ../../source/tutorial-quickstart-pytorch.rst:142 +#: ../../source/tutorial-quickstart-xgboost.rst:294 msgid "" -"After loading the data set with :code:`load_data()` we define the Flower " -"interface." 
-msgstr "使用 :code:`load_data()` 加载数据集后,我们定义了 Flower 接口。" +"Now, we can create an instance of our class :code:`XgbClient` and add one" +" line to actually run this client:" +msgstr "现在,我们可以创建一个 :code:`XgbClient` 类的实例,并添加一行来实际运行该客户端:" -#: ../../source/tutorial-quickstart-pytorch.rst:150 +#: ../../source/tutorial-quickstart-xgboost.rst:300 msgid "" -"Flower provides a convenience class called :code:`NumPyClient` which makes " -"it easier to implement the :code:`Client` interface when your workload uses " -"PyTorch. Implementing :code:`NumPyClient` usually means defining the " -"following methods (:code:`set_parameters` is optional though):" +"That's it for the client. We only have to implement :code:`Client`and " +"call :code:`fl.client.start_client()`. The string :code:`\"[::]:8080\"` " +"tells the client which server to connect to. In our case we can run the " +"server and the client on the same machine, therefore we use " +":code:`\"[::]:8080\"`. If we run a truly federated workload with the " +"server and clients running on different machines, all that needs to " +"change is the :code:`server_address` we point the client at." msgstr "" -"Flower 提供了一个名为 :code:`NumPyClient` 的便捷类,当您的工作负载使用 " -"PyTorch 时,它使 :code:`Client` 接口的实现变得更容易。实现 :code:" -"`NumPyClient` 通常意味着定义以下方法(:code:`set_parameters` 是可选的):" - -#: ../../source/tutorial-quickstart-pytorch.rst:166 -msgid "which can be implemented in the following way:" -msgstr "可以通过以下方式实现:" +"这就是客户端。我们只需实现 :code:`客户端`并调用 :code:`fl.client.start_client()`。字符串 " +":code:`\"[::]:8080\"`会告诉客户端要连接的服务器。在本例中,我们可以在同一台机器上运行服务器和客户端,因此我们使用 " +":code:`\"[::]:8080\"`。如果我们运行的是真正的联邦工作负载,服务器和客户端运行在不同的机器上,那么需要改变的只是客户端指向的 " +":code:`server_address`。" -#: ../../source/tutorial-quickstart-pytorch.rst:189 -#: ../../source/tutorial-quickstart-tensorflow.rst:82 +#: ../../source/tutorial-quickstart-xgboost.rst:311 msgid "" -"We can now create an instance of our class :code:`CifarClient` and add one " -"line to actually run this client:" +"These updates are then sent to the *server* which will aggregate them to " +"produce a better model. Finally, the *server* sends this improved version" +" of the model back to each *client* to finish a complete FL round." msgstr "" -"现在我们可以创建一个 :code:`CifarClient` 类的实例,并添加一行来实际运行该客户" -"端:" +"然后,这些更新会被发送到*服务器*,由*服务器*聚合后生成一个更好的模型。最后,*服务器*将这个改进版的模型发回给每个*客户端*,以完成一轮完整的" +" FL。" -#: ../../source/tutorial-quickstart-pytorch.rst:196 -#: ../../source/tutorial-quickstart-tensorflow.rst:90 -#, fuzzy +#: ../../source/tutorial-quickstart-xgboost.rst:314 msgid "" -"That's it for the client. We only have to implement :code:`Client` or :code:" -"`NumPyClient` and call :code:`fl.client.start_client()`. If you implement a " -"client of type :code:`NumPyClient` you'll need to first call its :code:" -"`to_client()` method. The string :code:`\"[::]:8080\"` tells the client " -"which server to connect to. In our case we can run the server and the client " -"on the same machine, therefore we use :code:`\"[::]:8080\"`. If we run a " -"truly federated workload with the server and clients running on different " -"machines, all that needs to change is the :code:`server_address` we point " -"the client at." +"In a file named :code:`server.py`, import Flower and FedXgbBagging from " +":code:`flwr.server.strategy`." 
msgstr "" -"这就是客户端。我们只需实现 :code:`Client` 或 :code:`NumPyClient` 并调用 :" -"code:`fl.client.start_client()` 或 :code:`fl.client.start_numpy_client()`。字" -"符串 :code:`\"[::]:8080\"`会告诉客户端要连接的服务器。在本例中,我们可以在同" -"一台机器上运行服务器和客户端,因此使用 :code:`\"[::]:8080\"。如果我们运行的是" -"真正的联邦工作负载,服务器和客户端运行在不同的机器上,那么需要改变的只是客户" -"端指向的 :code:`server_address`。" +"在名为 :code:`server.py` 的文件中,从 :code:`flwr.server.strategy` 导入 Flower 和 " +"FedXgbBagging。" -#: ../../source/tutorial-quickstart-pytorch.rst:271 -msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. The full `source code `_ for this example can be found " -"in :code:`examples/quickstart-pytorch`." -msgstr "" -"恭喜您!您已经成功构建并运行了第一个联邦学习系统。本示例的`完整源代码 " -"`_ 可以在 :code:`examples/quickstart-pytorch` 中找到。" +#: ../../source/tutorial-quickstart-xgboost.rst:316 +msgid "We first define a strategy for XGBoost bagging aggregation." +msgstr "我们首先定义了 XGBoost bagging聚合策略。" -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:-1 +#: ../../source/tutorial-quickstart-xgboost.rst:339 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower with " -"PyTorch Lightning to train an Auto Encoder model on MNIST." +"We use two clients for this example. An " +":code:`evaluate_metrics_aggregation` function is defined to collect and " +"wighted average the AUC values from clients." msgstr "" -"查看此联邦学习快速入门教程,了解如何使用 Flower 和 PyTorch Lightning 在 " -"MNIST 上训练自动编码器模型。" +"本示例使用两个客户端。我们定义了一个 :code:`evaluate_metrics_aggregation` 函数,用于收集客户机的 AUC " +"值并求取平均值。" -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:5 -msgid "Quickstart PyTorch Lightning" -msgstr "快速入门 PyTorch Lightning" +#: ../../source/tutorial-quickstart-xgboost.rst:342 +msgid "Then, we start the server:" +msgstr "然后,我们启动服务器:" -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:10 +#: ../../source/tutorial-quickstart-xgboost.rst:354 +msgid "Tree-based bagging aggregation" +msgstr "基于树的bagging聚合" + +#: ../../source/tutorial-quickstart-xgboost.rst:356 msgid "" -"Let's build a horizontal federated learning system using PyTorch Lightning " -"and Flower!" -msgstr "让我们使用 PyTorch Lightning 和 Flower 构建一个水平联邦学习系统!" +"You must be curious about how bagging aggregation works. Let's look into " +"the details." +msgstr "您一定很好奇bagging聚合是如何工作的。让我们来详细了解一下。" -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:12 +#: ../../source/tutorial-quickstart-xgboost.rst:358 msgid "" -"Please refer to the `full code example `_ to learn more." +"In file :code:`flwr.server.strategy.fedxgb_bagging.py`, we define " +":code:`FedXgbBagging` inherited from :code:`flwr.server.strategy.FedAvg`." +" Then, we override the :code:`aggregate_fit`, :code:`aggregate_evaluate` " +"and :code:`evaluate` methods as follows:" msgstr "" -"请参阅 `完整代码示例 `_ 了解更多信息。" +"在文件 :code:`flwr.server.strategy.fedxgb_bagging.py`中,我们定义了从 " +":code:`flwr.server.strategy.FedAvg`继承的 :code:`FedXgbBagging`。然后,我们覆盖 " +":code:`aggregate_fit`、:code:`aggregate_evaluate` 和 :code:`evaluate` 方法如下:" -#: ../../source/tutorial-quickstart-scikitlearn.rst:-1 +#: ../../source/tutorial-quickstart-xgboost.rst:454 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower with " -"scikit-learn to train a linear regression model." 
+"In :code:`aggregate_fit`, we sequentially aggregate the clients' XGBoost " +"trees by calling :code:`aggregate()` function:" msgstr "" -"查看此联邦学习快速入门教程,了解如何使用 Flower 和 scikit-learn 训练线性回归" -"模型。" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:5 -msgid "Quickstart scikit-learn" -msgstr "scikit-learn快速入门" +"在 :code:`aggregate_fit` 中,我们通过调用 :code:`aggregate()` 函数,按顺序聚合客户端的 XGBoost" +" 树:" -#: ../../source/tutorial-quickstart-scikitlearn.rst:10 +#: ../../source/tutorial-quickstart-xgboost.rst:513 msgid "" -"In this tutorial, we will learn how to train a :code:`Logistic Regression` " -"model on MNIST using Flower and scikit-learn." +"In this function, we first fetch the number of trees and the number of " +"parallel trees for the current and previous model by calling " +":code:`_get_tree_nums`. Then, the fetched information will be aggregated." +" After that, the trees (containing model weights) are aggregated to " +"generate a new tree model." msgstr "" -"在本教程中,我们将学习如何使用 Flower 和 scikit-learn 在 MNIST 上训练一个 :" -"code:`Logistic Regression` 模型。" +"在该函数中,我们首先通过调用 :code:`_get_tree_nums` " +"获取当前模型和上一个模型的树数和并行树数。然后,对获取的信息进行聚合。然后,聚合树(包含模型参数)生成新的树模型。" -#: ../../source/tutorial-quickstart-scikitlearn.rst:26 -#, fuzzy -msgid "Since we want to use scikit-learn, let's go ahead and install it:" -msgstr "既然我们要使用 scikt-learn,那就继续安装吧:" +#: ../../source/tutorial-quickstart-xgboost.rst:518 +msgid "" +"After traversal of all clients' models, a new global model is generated, " +"followed by the serialisation, and sending back to each client." +msgstr "在遍历所有客户端的模型后,会生成一个新的全局模型,然后进行序列化,并发回给每个客户端。" -#: ../../source/tutorial-quickstart-scikitlearn.rst:32 -msgid "Or simply install all dependencies using Poetry:" -msgstr "或者直接使用 Poetry 安装所有依赖项:" +#: ../../source/tutorial-quickstart-xgboost.rst:523 +msgid "Launch Federated XGBoost!" +msgstr "启动联邦 XGBoost!" -#: ../../source/tutorial-quickstart-scikitlearn.rst:42 +#: ../../source/tutorial-quickstart-xgboost.rst:585 msgid "" -"Now that we have all our dependencies installed, let's run a simple " -"distributed training with two clients and one server. However, before " -"setting up the client and server, we will define all functionalities that we " -"need for our federated learning setup within :code:`utils.py`. The :code:" -"`utils.py` contains different functions defining all the machine learning " -"basics:" +"Congratulations! You've successfully built and run your first federated " +"XGBoost system. The AUC values can be checked in " +":code:`metrics_distributed`. One can see that the average AUC increases " +"over FL rounds." 
msgstr "" -"现在我们已经安装了所有的依赖项,让我们用两个客户端和一个服务器来运行一个简单" -"的分布式训练。不过,在设置客户端和服务器之前,我们将在 :code:`utils.py` 中定" -"义联邦学习设置所需的所有功能。:code:`utils.py`包含定义所有机器学习基础知识的" -"不同函数:" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:45 -msgid ":code:`get_model_parameters()`" -msgstr ":code:`get_model_parameters()`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:46 -msgid "Returns the parameters of a :code:`sklearn` LogisticRegression model" -msgstr "返回 :code:`sklearn` LogisticRegression 模型的参数" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:47 -msgid ":code:`set_model_params()`" -msgstr ":code:`set_model_params()`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:48 -msgid "Sets the parameters of a :code:`sklean` LogisticRegression model" -msgstr "设置:code:`sklean`的LogisticRegression模型的参数" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:49 -msgid ":code:`set_initial_params()`" -msgstr ":code:`set_initial_params()`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:50 -msgid "Initializes the model parameters that the Flower server will ask for" -msgstr "初始化 Flower 服务器将要求的模型参数" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:51 -msgid ":code:`load_mnist()`" -msgstr ":code:`load_mnist()`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:52 -msgid "Loads the MNIST dataset using OpenML" -msgstr "使用 OpenML 加载 MNIST 数据集" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:53 -msgid ":code:`shuffle()`" -msgstr ":code:`shuffle()`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:54 -msgid "Shuffles data and its label" -msgstr "对数据及其标签进行洗牌" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:56 -msgid ":code:`partition()`" -msgstr ":code:`partition()`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:56 -msgid "Splits datasets into a number of partitions" -msgstr "将数据集分割成多个分区" +"恭喜您!您已成功构建并运行了第一个联邦 XGBoost 系统。可以在 :code:`metrics_distributed` 中查看 AUC " +"值。我们可以看到,平均 AUC 随 FL 轮数的增加而增加。" -#: ../../source/tutorial-quickstart-scikitlearn.rst:58 +#: ../../source/tutorial-quickstart-xgboost.rst:590 msgid "" -"Please check out :code:`utils.py` `here `_ for more details. The pre-" -"defined functions are used in the :code:`client.py` and imported. The :code:" -"`client.py` also requires to import several packages such as Flower and " -"scikit-learn:" +"The full `source code `_ for this example can be found in :code:`examples" +"/xgboost-quickstart`." msgstr "" -"更多详情请查看 :code:`utils.py`` 这里 `_。在 :code:`client.py` 中使用并" -"导入了预定义函数。:code:`client.py` 还需要导入几个软件包,如 Flower 和 " -"scikit-learn:" +"此示例的`完整源代码 `_ 可在 :code:`examples/xgboost-quickstart` 中找到。" + +#: ../../source/tutorial-quickstart-xgboost.rst:594 +msgid "Comprehensive Federated XGBoost" +msgstr "综合的联邦 XGBoost" -#: ../../source/tutorial-quickstart-scikitlearn.rst:73 +#: ../../source/tutorial-quickstart-xgboost.rst:596 #, fuzzy msgid "" -"We load the MNIST dataset from `OpenML `_, a popular image classification dataset of " -"handwritten digits for machine learning. The utility :code:`utils." -"load_mnist()` downloads the training and test data. The training set is " -"split afterwards into 10 partitions with :code:`utils.partition()`." +"Now that you have known how federated XGBoost work with Flower, it's time" +" to run some more comprehensive experiments by customising the " +"experimental settings. 
In the xgboost-comprehensive example (`full code " +"`_), we provide more options to define various experimental" +" setups, including aggregation strategies, data partitioning and " +"centralised/distributed evaluation. We also support :doc:`Flower " +"simulation ` making it easy to simulate large " +"client cohorts in a resource-aware manner. Let's take a look!" msgstr "" -"我们从 `OpenML `_ 中加载 MNIST 数据集,这是一个" -"用于机器学习的流行手写数字图像分类数据集。实用程序 :code:`utils." -"load_mnist()` 下载训练和测试数据。然后使用 :code:`utils.partition()`将训练集" -"分割成 10 个分区。" +"既然您已经知道联合 XGBoost 如何与 Flower 协同工作,那么现在就该通过自定义实验设置来运行一些更综合的实验了。在 xgboost-" +"comprehensive 示例 (`完整代码 " +"`_)中,我们提供了更多选项来定义各种实验设置,包括数据分区和集中/分布式评估。让我们一起来看看!" -#: ../../source/tutorial-quickstart-scikitlearn.rst:85 -msgid "" -"Next, the logistic regression model is defined and initialized with :code:" -"`utils.set_initial_params()`." -msgstr "" -"接下来,使用 :code:`utils.set_initial_params()` 对逻辑回归模型进行定义和初始" -"化。" +#: ../../source/tutorial-quickstart-xgboost.rst:603 +#, fuzzy +msgid "Cyclic training" +msgstr "集中式训练" -#: ../../source/tutorial-quickstart-scikitlearn.rst:97 +#: ../../source/tutorial-quickstart-xgboost.rst:605 msgid "" -"The Flower server interacts with clients through an interface called :code:" -"`Client`. When the server selects a particular client for training, it sends " -"training instructions over the network. The client receives those " -"instructions and calls one of the :code:`Client` methods to run your code (i." -"e., to fit the logistic regression we defined earlier)." +"In addition to bagging aggregation, we offer a cyclic training scheme, " +"which performs FL in a client-by-client fashion. Instead of aggregating " +"multiple clients, there is only one single client participating in the " +"training per round in the cyclic training scenario. The trained local " +"XGBoost trees will be passed to the next client as an initialised model " +"for next round's boosting." msgstr "" -"Flower 服务器通过一个名为 :code:`Client` 的接口与客户端交互。当服务器选择一个" -"特定的客户端进行训练时,它会通过网络发送训练指令。客户端接收到这些指令后,会" -"调用 :code:`Client` 方法之一来运行您的代码(即拟合我们之前定义的逻辑回归)。" -#: ../../source/tutorial-quickstart-scikitlearn.rst:103 +#: ../../source/tutorial-quickstart-xgboost.rst:609 msgid "" -"Flower provides a convenience class called :code:`NumPyClient` which makes " -"it easier to implement the :code:`Client` interface when your workload uses " -"scikit-learn. Implementing :code:`NumPyClient` usually means defining the " -"following methods (:code:`set_parameters` is optional though):" +"To do this, we first customise a :code:`ClientManager` in " +":code:`server_utils.py`:" msgstr "" -"Flower 提供了一个名为 :code:`NumPyClient` 的便捷类,当你的工作负载使用 " -"scikit-learn 时,它可以让你更容易地实现 :code:`Client` 接口。实现 :code:" -"`NumPyClient` 通常意味着定义以下方法(:code:`set_parameters` 是可选的):" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:112 -msgid "is directly imported with :code:`utils.set_model_params()`" -msgstr "直接导入 :code:`utils.set_model_params()`" - -#: ../../source/tutorial-quickstart-scikitlearn.rst:120 -msgid "The methods can be implemented in the following way:" -msgstr "这些方法可以通过以下方式实现:" -#: ../../source/tutorial-quickstart-scikitlearn.rst:143 +#: ../../source/tutorial-quickstart-xgboost.rst:649 msgid "" -"We can now create an instance of our class :code:`MnistClient` and add one " -"line to actually run this client:" +"The customised :code:`ClientManager` samples all available clients in " +"each FL round based on the order of connection to the server. 
Then, we " +"define a new strategy :code:`FedXgbCyclic` in " +":code:`flwr.server.strategy.fedxgb_cyclic.py`, in order to sequentially " +"select only one client in given round and pass the received model to next" +" client." msgstr "" -"现在我们可以创建一个 :code:`MnistClient` 类的实例,并添加一行来实际运行该客户" -"端:" -#: ../../source/tutorial-quickstart-scikitlearn.rst:150 -#, fuzzy +#: ../../source/tutorial-quickstart-xgboost.rst:690 msgid "" -"That's it for the client. We only have to implement :code:`Client` or :code:" -"`NumPyClient` and call :code:`fl.client.start_client()`. If you implement a " -"client of type :code:`NumPyClient` you'll need to first call its :code:" -"`to_client()` method. The string :code:`\"0.0.0.0:8080\"` tells the client " -"which server to connect to. In our case we can run the server and the client " -"on the same machine, therefore we use :code:`\"0.0.0.0:8080\"`. If we run a " -"truly federated workload with the server and clients running on different " -"machines, all that needs to change is the :code:`server_address` we pass to " -"the client." +"Unlike the original :code:`FedAvg`, we don't perform aggregation here. " +"Instead, we just make a copy of the received client model as global model" +" by overriding :code:`aggregate_fit`." msgstr "" -"这就是客户端。我们只需实现 :code:`Client` 或 :code:`NumPyClient` 并调用 :" -"code:`fl.client.start_client()` 或 :code:`fl.client.start_numpy_client()`。字" -"符串 :code:`\"0.0.0.0:8080\"`会告诉客户端要连接的服务器。在本例中,我们可以在" -"同一台机器上运行服务器和客户端,因此我们使用 :code:`\"0.0.0.0:8080\"`。如果我" -"们运行的是真正的联邦工作负载,服务器和客户端运行在不同的机器上,那么需要改变" -"的只是传递给客户端的 :code:`server_address`。" -#: ../../source/tutorial-quickstart-scikitlearn.rst:159 +#: ../../source/tutorial-quickstart-xgboost.rst:693 msgid "" -"The following Flower server is a little bit more advanced and returns an " -"evaluation function for the server-side evaluation. First, we import again " -"all required libraries such as Flower and scikit-learn." +"Also, the customised :code:`configure_fit` and :code:`configure_evaluate`" +" methods ensure the clients to be sequentially selected given FL round:" msgstr "" -"下面的 Flower 服务器更先进一些,会返回一个用于服务器端评估的评估函数。首先," -"我们再次导入所有需要的库,如 Flower 和 scikit-learn。" -#: ../../source/tutorial-quickstart-scikitlearn.rst:162 -msgid ":code:`server.py`, import Flower and start the server:" -msgstr ":code:`server.py`, 导入 Flower 并启动服务器:" +#: ../../source/tutorial-quickstart-xgboost.rst:757 +msgid "Customised data partitioning" +msgstr "定制数据分区" -#: ../../source/tutorial-quickstart-scikitlearn.rst:173 +#: ../../source/tutorial-quickstart-xgboost.rst:759 msgid "" -"The number of federated learning rounds is set in :code:`fit_round()` and " -"the evaluation is defined in :code:`get_evaluate_fn()`. The evaluation " -"function is called after each federated learning round and gives you " -"information about loss and accuracy." +"In :code:`dataset.py`, we have a function :code:`instantiate_partitioner`" +" to instantiate the data partitioner based on the given " +":code:`num_partitions` and :code:`partitioner_type`. Currently, we " +"provide four supported partitioner type to simulate the uniformity/non-" +"uniformity in data quantity (uniform, linear, square, exponential)." 
msgstr "" -"联邦学习轮数在 :code:`fit_round()` 中设置,评估在 :code:`get_evaluate_fn()` " -"中定义。每轮联邦学习后都会调用评估函数,并提供有关损失值和准确率的信息。" +"在 :code:`dataset.py` 中,我们有一个函数 :code:`instantiate_partitioner` 来根据给定的 " +":code:`num_partitions` 和 :code:`partitioner_type` " +"来实例化数据分区器。目前,我们提供四种支持的分区器类型(均匀、线性、正方形、指数)来模拟数据量的均匀性/非均匀性。" -#: ../../source/tutorial-quickstart-scikitlearn.rst:198 -msgid "" -"The :code:`main` contains the server-side parameter initialization :code:" -"`utils.set_initial_params()` as well as the aggregation strategy :code:`fl." -"server.strategy:FedAvg()`. The strategy is the default one, federated " -"averaging (or FedAvg), with two clients and evaluation after each federated " -"learning round. The server can be started with the command :code:`fl.server." -"start_server(server_address=\"0.0.0.0:8080\", strategy=strategy, config=fl." -"server.ServerConfig(num_rounds=3))`." -msgstr "" -":code:`main`包含服务器端参数初始化:code:`utils.set_initial_params()`以及聚合" -"策略 :code:`fl.server.strategy:FedAvg()`。该策略是默认的联邦平均(或 FedAvg)" -"策略,有两个客户端,在每轮联邦学习后进行评估。可以使用 :code:`fl.server." -"start_server(server_address=\"0.0.0.0:8080\", strategy=strategy, config=fl." -"server.ServerConfig(num_rounds=3))` 命令启动服务器。" +#: ../../source/tutorial-quickstart-xgboost.rst:790 +msgid "Customised centralised/distributed evaluation" +msgstr "定制的集中/分布式评估" -#: ../../source/tutorial-quickstart-scikitlearn.rst:217 +#: ../../source/tutorial-quickstart-xgboost.rst:792 +#, fuzzy msgid "" -"With both client and server ready, we can now run everything and see " -"federated learning in action. Federated learning systems usually have a " -"server and multiple clients. We, therefore, have to start the server first:" -msgstr "" -"客户端和服务器都准备就绪后,我们现在就可以运行一切,看看联邦学习的运行情况。" -"联邦学习系统通常有一个服务器和多个客户端。因此,我们必须先启动服务器:" +"To facilitate centralised evaluation, we define a function in " +":code:`server_utils.py`:" +msgstr "为便于集中评估,我们在 :code:`server.py` 中定义了一个函数:" -#: ../../source/tutorial-quickstart-scikitlearn.rst:271 +#: ../../source/tutorial-quickstart-xgboost.rst:824 msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. The full `source code `_ for this example can be found in :code:" -"`examples/sklearn-logreg-mnist`." +"This function returns a evaluation function which instantiates a " +":code:`Booster` object and loads the global model weights to it. The " +"evaluation is conducted by calling :code:`eval_set()` method, and the " +"tested AUC value is reported." msgstr "" -"恭喜您!您已经成功构建并运行了第一个联邦学习系统。本示例的`完整源代码 " -"`_ 可" -"以在 :code:`examples/sklearn-logreg-mnist` 中找到。" +"此函数返回一个评估函数,该函数实例化一个 :code:`Booster` 对象,并向其加载全局模型参数。评估通过调用 " +":code:`eval_set()` 方法进行,并报告测试的 AUC 值。" -#: ../../source/tutorial-quickstart-tensorflow.rst:-1 +#: ../../source/tutorial-quickstart-xgboost.rst:827 +#, fuzzy msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower with " -"TensorFlow to train a MobilNetV2 model on CIFAR-10." +"As for distributed evaluation on the clients, it's same as the quick-" +"start example by overriding the :code:`evaluate()` method insides the " +":code:`XgbClient` class in :code:`client_utils.py`." 
msgstr "" -"查看此联邦学习快速入门教程,了解如何使用 Flower 和 TensorFlow 在 CIFAR-10 上" -"训练 MobilNetV2 模型。" - -#: ../../source/tutorial-quickstart-tensorflow.rst:5 -msgid "Quickstart TensorFlow" -msgstr "快速入门 TensorFlow" +"至于客户端上的分布式评估,与快速启动示例相同,通过覆盖 :code:`client.py` 中 :code:`XgbClient` 类内部的 " +":code:`evaluate()` 方法。" -#: ../../source/tutorial-quickstart-tensorflow.rst:13 -msgid "Let's build a federated learning system in less than 20 lines of code!" -msgstr "让我们用不到 20 行代码构建一个联邦学习系统!" - -#: ../../source/tutorial-quickstart-tensorflow.rst:15 -msgid "Before Flower can be imported we have to install it:" -msgstr "在导入 Flower 之前,我们必须先安装它:" +#: ../../source/tutorial-quickstart-xgboost.rst:831 +#, fuzzy +msgid "Flower simulation" +msgstr "运行模拟" -#: ../../source/tutorial-quickstart-tensorflow.rst:21 +#: ../../source/tutorial-quickstart-xgboost.rst:832 msgid "" -"Since we want to use the Keras API of TensorFlow (TF), we have to install TF " -"as well:" -msgstr "由于我们要使用 TensorFlow (TF) 的 Keras API,因此还必须安装 TF:" +"We also provide an example code (:code:`sim.py`) to use the simulation " +"capabilities of Flower to simulate federated XGBoost training on either a" +" single machine or a cluster of machines." +msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:31 -msgid "Next, in a file called :code:`client.py`, import Flower and TensorFlow:" -msgstr "接下来,在名为 :code:`client.py` 的文件中导入 Flower 和 TensorFlow:" +#: ../../source/tutorial-quickstart-xgboost.rst:866 +msgid "" +"After importing all required packages, we define a :code:`main()` " +"function to perform the simulation process:" +msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:38 +#: ../../source/tutorial-quickstart-xgboost.rst:921 msgid "" -"We use the Keras utilities of TF to load CIFAR10, a popular colored image " -"classification dataset for machine learning. The call to :code:`tf.keras." -"datasets.cifar10.load_data()` downloads CIFAR10, caches it locally, and then " -"returns the entire training and test set as NumPy ndarrays." +"We first load the dataset and perform data partitioning, and the pre-" +"processed data is stored in a :code:`list`. After the simulation begins, " +"the clients won't need to pre-process their partitions again." msgstr "" -"我们使用 TF 的 Keras 实用程序加载 CIFAR10,这是一个用于机器学习的流行彩色图像" -"分类数据集。调用 :code:`tf.keras.datasets.cifar10.load_data()` 会下载 " -"CIFAR10,将其缓存到本地,然后以 NumPy ndarrays 的形式返回整个训练集和测试集。" -#: ../../source/tutorial-quickstart-tensorflow.rst:47 +#: ../../source/tutorial-quickstart-xgboost.rst:924 +msgid "Then, we define the strategies and other hyper-parameters:" +msgstr "" + +#: ../../source/tutorial-quickstart-xgboost.rst:975 msgid "" -"Next, we need a model. For the purpose of this tutorial, we use MobilNetV2 " -"with 10 output classes:" +"After that, we start the simulation by calling " +":code:`fl.simulation.start_simulation`:" msgstr "" -"接下来,我们需要一个模型。在本教程中,我们使用带有 10 个输出类的 MobilNetV2:" -#: ../../source/tutorial-quickstart-tensorflow.rst:60 +#: ../../source/tutorial-quickstart-xgboost.rst:995 msgid "" -"Flower provides a convenience class called :code:`NumPyClient` which makes " -"it easier to implement the :code:`Client` interface when your workload uses " -"Keras. The :code:`NumPyClient` interface defines three methods which can be " -"implemented in the following way:" +"One of key parameters for :code:`start_simulation` is :code:`client_fn` " +"which returns a function to construct a client. 
We define it as follows:" msgstr "" -"Flower 提供了一个名为 :code:`NumPyClient` 的便捷类,当您的工作负载使用 Keras " -"时,该类可以更轻松地实现 :code:`Client` 接口。:code:`NumPyClient` 接口定义了" -"三个方法,可以通过以下方式实现:" -#: ../../source/tutorial-quickstart-tensorflow.rst:135 -msgid "Each client will have its own dataset." -msgstr "每个客户都有自己的数据集。" +#: ../../source/tutorial-quickstart-xgboost.rst:1038 +msgid "Arguments parser" +msgstr "参数解析器" -#: ../../source/tutorial-quickstart-tensorflow.rst:137 +#: ../../source/tutorial-quickstart-xgboost.rst:1040 +#, fuzzy msgid "" -"You should now see how the training does in the very first terminal (the one " -"that started the server):" -msgstr "现在你应该能在第一个终端(启动服务器的终端)看到训练的效果了:" +"In :code:`utils.py`, we define the arguments parsers for clients, server " +"and simulation, allowing users to specify different experimental " +"settings. Let's first see the sever side:" +msgstr "在 :code:`utils.py` 中,我们定义了客户端和服务器端的参数解析器,允许用户指定不同的实验设置。让我们先看看服务器端:" -#: ../../source/tutorial-quickstart-tensorflow.rst:169 +#: ../../source/tutorial-quickstart-xgboost.rst:1086 +#, fuzzy msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. The full `source code `_ for this can be found in :" -"code:`examples/quickstart-tensorflow/client.py`." +"This allows user to specify training strategies / the number of total " +"clients / FL rounds / participating clients / clients for evaluation, and" +" evaluation fashion. Note that with :code:`--centralised-eval`, the sever" +" will do centralised evaluation and all functionalities for client " +"evaluation will be disabled." msgstr "" -"恭喜您!您已经成功构建并运行了第一个联邦学习系统。`完整的源代码 `_ " -"可以在 :code:`examples/quickstart-tensorflow/client.py` 中找到。" +"这允许用户指定总客户数/FL 轮数/参与客户数/评估客户数以及评估方式。请注意,如果使用 :code:`--centralised-" +"eval`,服务器将进行集中评估,客户端评估的所有功能将被禁用。" -#: ../../source/tutorial-quickstart-xgboost.rst:-1 +#: ../../source/tutorial-quickstart-xgboost.rst:1090 +msgid "Then, the argument parser on client side:" +msgstr "然后是客户端的参数解析器:" + +#: ../../source/tutorial-quickstart-xgboost.rst:1144 +#, fuzzy msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower with " -"XGBoost to train classification models on trees." +"This defines various options for client data partitioning. Besides, " +"clients also have an option to conduct evaluation on centralised test set" +" by setting :code:`--centralised-eval`, as well as an option to perform " +"scaled learning rate based on the number of clients by setting :code" +":`--scaled-lr`." +msgstr "这定义了客户端数据分区的各种选项。此外,通过设置 :code:`-centralised-eval`,客户端还可以选择在集中测试集上进行评估。" + +#: ../../source/tutorial-quickstart-xgboost.rst:1148 +msgid "We also have an argument parser for simulation:" msgstr "" -"查看此联邦学习 快速入门教程,了解如何使用 Flower 和 XGBoost 上训练分类模型。" -#: ../../source/tutorial-quickstart-xgboost.rst:5 -msgid "Quickstart XGBoost" -msgstr "XGBoost快速入门" +#: ../../source/tutorial-quickstart-xgboost.rst:1226 +msgid "This integrates all arguments for both client and server sides." +msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:14 -msgid "Federated XGBoost" -msgstr "联邦化 XGBoost" +#: ../../source/tutorial-quickstart-xgboost.rst:1229 +msgid "Example commands" +msgstr "命令示例" -#: ../../source/tutorial-quickstart-xgboost.rst:16 +#: ../../source/tutorial-quickstart-xgboost.rst:1231 +#, fuzzy msgid "" -"EXtreme Gradient Boosting (**XGBoost**) is a robust and efficient " -"implementation of gradient-boosted decision tree (**GBDT**), that maximises " -"the computational boundaries for boosted tree methods. 
It's primarily " -"designed to enhance both the performance and computational speed of machine " -"learning models. In XGBoost, trees are constructed concurrently, unlike the " -"sequential approach taken by GBDT." +"To run a centralised evaluated experiment with bagging strategy on 5 " +"clients with exponential distribution for 50 rounds, we first start the " +"server as below:" +msgstr "为了在 5 个客户端上进行 50 轮指数分布的集中评估实验,我们首先启动服务器,如下所示:" + +#: ../../source/tutorial-quickstart-xgboost.rst:1238 +msgid "Then, on each client terminal, we start the clients:" +msgstr "然后,我们在每个客户终端上启动客户机:" + +#: ../../source/tutorial-quickstart-xgboost.rst:1244 +msgid "To run the same experiment with Flower simulation:" msgstr "" -"EXtreme Gradient Boosting(**XGBoost**)是梯度提升决策树(**GBDT**)的一种稳" -"健而高效的实现方法,能最大限度地提高提升树方法的计算边界。它主要用于提高机器" -"学习模型的性能和计算速度。在 XGBoost 中,决策树是并发构建的,与 GBDT 采用的顺" -"序方法不同。" -#: ../../source/tutorial-quickstart-xgboost.rst:20 +#: ../../source/tutorial-quickstart-xgboost.rst:1250 +#, fuzzy msgid "" -"Often, for tabular data on medium-sized datasets with fewer than 10k " -"training examples, XGBoost surpasses the results of deep learning techniques." +"The full `code `_ for this comprehensive example can be found in" +" :code:`examples/xgboost-comprehensive`." msgstr "" -"对于训练示例少于 10k 的中型数据集上的表格数据,XGBoost 的结果往往超过深度学习" -"技术。" +"此综合示例的全部`源代码 `_ 可在 :code:`examples/xgboost-comprehensive` 中找到。" -#: ../../source/tutorial-quickstart-xgboost.rst:23 -msgid "Why federated XGBoost?" -msgstr "为什么选择联邦 XGBoost?" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:9 +msgid "Build a strategy from scratch" +msgstr "从零开始制定策略" -#: ../../source/tutorial-quickstart-xgboost.rst:25 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:11 msgid "" -"Indeed, as the demand for data privacy and decentralized learning grows, " -"there's an increasing requirement to implement federated XGBoost systems for " -"specialised applications, like survival analysis and financial fraud " -"detection." -msgstr "" -"事实上,随着对数据隐私和分散学习的需求不断增长,越来越多的专业应用(如生存分" -"析和金融欺诈检测)需要实施联邦 XGBoost 系统。" +"Welcome to the third part of the Flower federated learning tutorial. In " +"previous parts of this tutorial, we introduced federated learning with " +"PyTorch and Flower (`part 1 `__) and we learned how strategies " +"can be used to customize the execution on both the server and the clients" +" (`part 2 `__)." +msgstr "" +"欢迎来到 Flower 联邦学习教程的第三部分。在本教程的前几部分,我们介绍了 PyTorch 和 Flower 的联邦学习(`part 1 " +"`__),并学习了如何使用策略来定制服务器和客户端的执行(`part 2 " +"`__)。" -#: ../../source/tutorial-quickstart-xgboost.rst:27 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:13 msgid "" -"Federated learning ensures that raw data remains on the local device, making " -"it an attractive approach for sensitive domains where data security and " -"privacy are paramount. Given the robustness and efficiency of XGBoost, " -"combining it with federated learning offers a promising solution for these " -"specific challenges." +"In this notebook, we'll continue to customize the federated learning " +"system we built previously by creating a custom version of FedAvg (again," +" using `Flower `__ and `PyTorch " +"`__)." 
msgstr "" -"联邦学习可确保原始数据保留在本地设备上,因此对于数据安全和隐私至关重要的敏感" -"领域来说,这是一种极具吸引力的方法。鉴于 XGBoost 的稳健性和高效性,将其与联邦" -"学习相结合为应对这些特定挑战提供了一种前景广阔的解决方案。" +"在本笔记中,我们将通过创建 FedAvg 的自定义版本(再次使用 `Flower `__ 和 " +"`PyTorch `__),继续定制我们之前构建的联邦学习系统。" -#: ../../source/tutorial-quickstart-xgboost.rst:30 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:15 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:16 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:15 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:15 msgid "" -"In this tutorial we will learn how to train a federated XGBoost model on " -"HIGGS dataset using Flower and :code:`xgboost` package. We use a simple " -"example (`full code xgboost-quickstart `_) with two *clients* and one *server* to " -"demonstrate how federated XGBoost works, and then we dive into a more " -"complex example (`full code xgboost-comprehensive `_) to run various " -"experiments." -msgstr "" -"在本教程中,我们将学习如何使用 Flower 和 :code:`xgboost` 软件包在 HIGGS 数据" -"集上训练联邦 XGBoost 模型。我们将使用一个包含两个 * 客户端* 和一个 * 服务器* " -"的简单示例 (`完整代码 xgboost-quickstart `_)来演示联邦 XGBoost 如何工作,然后我" -"们将深入到一个更复杂的示例 (`完整代码 xgboost-comprehensive `_),以运行各种实" -"验。" +"`Star Flower on GitHub `__ ⭐️ and join " +"the Flower community on Slack to connect, ask questions, and get help: " +"`Join Slack `__ 🌼 We'd love to hear from " +"you in the ``#introductions`` channel! And if anything is unclear, head " +"over to the ``#questions`` channel." +msgstr "" +"`Star Flower on GitHub `__ ⭐️ 并加入 Slack " +"上的 Flower 社区,进行交流、提问并获得帮助: 加入 Slack `__ 🌼 " +"我们希望在 ``#introductions`` 频道听到您的声音!如果有任何不清楚的地方,请访问 ``#questions`` 频道。" -#: ../../source/tutorial-quickstart-xgboost.rst:37 -msgid "Environment Setup" -msgstr "环境设定" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:17 +msgid "Let's build a new ``Strategy`` from scratch!" +msgstr "让我们从头开始构建一个新的``Strategy``!" -#: ../../source/tutorial-quickstart-xgboost.rst:41 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:29 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:29 +msgid "Preparation" +msgstr "准备工作" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:31 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:32 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:31 msgid "" -"We first need to install Flower and Flower Datasets. You can do this by " -"running :" -msgstr "我们首先需要安装 Flower 和 Flower Datasets。您可以通过运行 :" +"Before we begin with the actual code, let's make sure that we have " +"everything we need." 
+msgstr "在开始实际代码之前,让我们先确保我们已经准备好了所需的一切。" -#: ../../source/tutorial-quickstart-xgboost.rst:47 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:43 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:44 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:43 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:43 +msgid "Installing dependencies" +msgstr "安装依赖项" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:45 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:46 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:45 +msgid "First, we install the necessary packages:" +msgstr "首先,我们安装必要的软件包:" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:65 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:66 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:65 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:65 msgid "" -"Since we want to use :code:`xgboost` package to build up XGBoost trees, " -"let's go ahead and install :code:`xgboost`:" -msgstr "" -"既然我们要使用 :code:`xgboost` 软件包来构建 XGBoost 树,那就继续安装 :code:" -"`xgboost`:" +"Now that we have all dependencies installed, we can import everything we " +"need for this tutorial:" +msgstr "现在我们已经安装了所有依赖项,可以导入本教程所需的所有内容:" -#: ../../source/tutorial-quickstart-xgboost.rst:57 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:101 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:102 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:101 msgid "" -"*Clients* are responsible for generating individual weight-updates for the " -"model based on their local datasets. Now that we have all our dependencies " -"installed, let's run a simple distributed training with two clients and one " -"server." +"It is possible to switch to a runtime that has GPU acceleration enabled " +"(on Google Colab: ``Runtime > Change runtime type > Hardware acclerator: " +"GPU > Save``). Note, however, that Google Colab is not always able to " +"offer GPU acceleration. If you see an error related to GPU availability " +"in one of the following sections, consider switching back to CPU-based " +"execution by setting ``DEVICE = torch.device(\"cpu\")``. If the runtime " +"has GPU acceleration enabled, you should see the output ``Training on " +"cuda``, otherwise it'll say ``Training on cpu``." 
msgstr "" -"*客户端*负责根据其本地数据集为模型生成单独的模型参数更新。现在我们已经安装了" -"所有的依赖项,让我们用两个客户端和一个服务器来运行一个简单的分布式训练。" +"可以切换到已启用 GPU 加速的运行时(在 Google Colab 上: 运行时 > 更改运行时类型 > 硬件加速: GPU > " +"保存``)。但请注意,Google Colab 并非总能提供 GPU 加速。如果在以下部分中看到与 GPU 可用性相关的错误,请考虑通过设置 " +"``DEVICE = torch.device(\"cpu\")`` 切回基于 CPU 的执行。如果运行时已启用 GPU " +"加速,你应该会看到输出``Training on cuda``,否则会显示``Training on cpu``。" -#: ../../source/tutorial-quickstart-xgboost.rst:60 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:114 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:115 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:114 +msgid "Data loading" +msgstr "数据加载" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:116 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:116 msgid "" -"In a file called :code:`client.py`, import xgboost, Flower, Flower Datasets " -"and other related functions:" +"Let's now load the CIFAR-10 training and test set, partition them into " +"ten smaller datasets (each split into training and validation set), and " +"wrap everything in their own ``DataLoader``. We introduce a new parameter" +" ``num_clients`` which allows us to call ``load_datasets`` with different" +" numbers of clients." msgstr "" -"在名为 :code:`client.py` 的文件中,导入 xgboost、Flower、Flower Datasets 和其" -"他相关函数:" +"现在,让我们加载 CIFAR-10 训练集和测试集,将它们分割成 10 " +"个较小的数据集(每个数据集又分为训练集和验证集),并将所有数据都封装在各自的 ``DataLoader`` 中。我们引入了一个新参数 " +"``num_clients``,它允许我们使用不同数量的客户端调用 ``load_datasets``。" -#: ../../source/tutorial-quickstart-xgboost.rst:87 -msgid "Dataset partition and hyper-parameter selection" -msgstr "数据集划分和超参数选择" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:167 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:168 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:167 +msgid "Model training/evaluation" +msgstr "模型培训/评估" -#: ../../source/tutorial-quickstart-xgboost.rst:89 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:169 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:170 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:169 msgid "" -"Prior to local training, we require loading the HIGGS dataset from Flower " -"Datasets and conduct data partitioning for FL:" -msgstr "" -"在本地训练之前,我们需要从 Flower Datasets 加载 HIGGS 数据集,并对 FL 进行数" -"据分区:" +"Let's continue with the usual model definition (including " +"``set_parameters`` and ``get_parameters``), training and test functions:" +msgstr "让我们继续使用常见的模型定义(包括 `set_parameters` 和 `get_parameters`)、训练和测试函数:" -#: ../../source/tutorial-quickstart-xgboost.rst:102 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:258 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:258 +msgid "Flower client" +msgstr "Flower 客户端" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:260 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:260 msgid "" -"In this example, we split the dataset into two partitions with uniform " -"distribution (:code:`IidPartitioner(num_partitions=2)`). Then, we load the " -"partition for the given client based on :code:`node_id`:" +"To implement the Flower client, we (again) create a subclass of " +"``flwr.client.NumPyClient`` and implement the three methods " +"``get_parameters``, ``fit``, and ``evaluate``. 
Here, we also pass the " +"``cid`` to the client and use it log additional details:" msgstr "" -"在此示例中,我们将数据集分割成两个均匀分布的分区(:code:" -"`IidPartitioner(num_partitions=2)`)。然后,我们根据 :code:`node_id` 为给定客" -"户端加载分区:" +"为了实现 Flower 客户端,我们(再次)创建了 ``flwr.client.NumPyClient`` 的子类,并实现了 " +"``get_parameters``、``fit`` 和 ``evaluate``三个方法。在这里,我们还将 ``cid`` " +"传递给客户端,并使用它记录其他详细信息:" -#: ../../source/tutorial-quickstart-xgboost.rst:121 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:308 +msgid "Let's test what we have so far before we continue:" +msgstr "在继续之前,让我们先测试一下我们目前掌握的情况:" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:339 +msgid "Build a Strategy from scratch" +msgstr "从零开始构建策略" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:341 msgid "" -"After that, we do train/test splitting on the given partition (client's " -"local data), and transform data format for :code:`xgboost` package." +"Let’s overwrite the ``configure_fit`` method such that it passes a higher" +" learning rate (potentially also other hyperparameters) to the optimizer " +"of a fraction of the clients. We will keep the sampling of the clients as" +" it is in ``FedAvg`` and then change the configuration dictionary (one of" +" the ``FitIns`` attributes)." msgstr "" -"然后,我们在给定的分区(客户端的本地数据)上进行训练/测试分割,并为 :code:" -"`xgboost` 软件包转换数据格式。" +"让我们重写 ``configure_fit`` 方法,使其向一部分客户的优化器传递更高的学习率(可能还有其他超参数)。我们将保持 " +"``FedAvg`` 中的客户端采样,然后更改配置字典(``FitIns`` 属性之一)。" -#: ../../source/tutorial-quickstart-xgboost.rst:134 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:507 msgid "" -"The functions of :code:`train_test_split` and :code:" -"`transform_dataset_to_dmatrix` are defined as below:" -msgstr "" -":code:`train_test_split` 和 :code:`transform_dataset_too_dmatrix` 的函数定义" -"如下:" +"The only thing left is to use the newly created custom Strategy " +"``FedCustom`` when starting the experiment:" +msgstr "剩下的唯一工作就是在启动实验时使用新创建的自定义策略 ``FedCustom`` :" -#: ../../source/tutorial-quickstart-xgboost.rst:158 -msgid "Finally, we define the hyper-parameters used for XGBoost training." -msgstr "最后,我们定义了用于 XGBoost 训练的超参数。" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:534 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:932 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:697 +msgid "Recap" +msgstr "回顾" -#: ../../source/tutorial-quickstart-xgboost.rst:174 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:536 msgid "" -"The :code:`num_local_round` represents the number of iterations for local " -"tree boost. We use CPU for the training in default. One can shift it to GPU " -"by setting :code:`tree_method` to :code:`gpu_hist`. We use AUC as evaluation " -"metric." +"In this notebook, we’ve seen how to implement a custom strategy. A custom" +" strategy enables granular control over client node configuration, result" +" aggregation, and more. To define a custom strategy, you only have to " +"overwrite the abstract methods of the (abstract) base class ``Strategy``." +" To make custom strategies even more powerful, you can pass custom " +"functions to the constructor of your new class (``__init__``) and then " +"call these functions whenever needed." 
msgstr "" -"代码:`num_local_round`表示本地树的迭代次数。我们默认使用 CPU 进行训练。可以通" -"过将 :code:`tree_method` 设置为 :code:`gpu_hist`,将其转换为 GPU。我们使用 " -"AUC 作为评估指标。" - -#: ../../source/tutorial-quickstart-xgboost.rst:181 -msgid "Flower client definition for XGBoost" -msgstr "用于 XGBoost 的 Flower 客户端定义" +"在本笔记中,我们了解了如何实施自定义策略。自定义策略可以对客户端节点配置、结果聚合等进行细粒度控制。要定义自定义策略,只需覆盖(抽象)基类 " +"``Strategy`` " +"的抽象方法即可。为使自定义策略更加强大,您可以将自定义函数传递给新类的构造函数(`__init__``),然后在需要时调用这些函数。" -#: ../../source/tutorial-quickstart-xgboost.rst:183 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:550 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:948 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:729 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:715 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:369 msgid "" -"After loading the dataset we define the Flower client. We follow the general " -"rule to define :code:`XgbClient` class inherited from :code:`fl.client." -"Client`." +"Before you continue, make sure to join the Flower community on Slack: " +"`Join Slack `__" msgstr "" -"加载数据集后,我们定义 Flower 客户端。我们按照一般规则定义从 :code:`fl." -"client.Client` 继承而来的 :code:`XgbClient` 类。" +"在继续之前,请务必加入 Slack 上的 Flower 社区:`Join Slack `__" -#: ../../source/tutorial-quickstart-xgboost.rst:193 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:552 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:950 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:731 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:717 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:371 msgid "" -"The :code:`self.bst` is used to keep the Booster objects that remain " -"consistent across rounds, allowing them to store predictions from trees " -"integrated in earlier rounds and maintain other essential data structures " -"for training." -msgstr "" -"代码:`self.bst`用于保存在各轮中保持一致的 Booster 对象,使其能够存储在前几轮" -"中集成的树的预测结果,并维护其他用于训练的重要数据结构。" +"There's a dedicated ``#questions`` channel if you need help, but we'd " +"also love to hear who you are in ``#introductions``!" +msgstr "如果您需要帮助,我们有专门的 ``#questions`` 频道,但我们也很乐意在 ``#introductions`` 中了解您是谁!" -#: ../../source/tutorial-quickstart-xgboost.rst:196 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:554 msgid "" -"Then, we override :code:`get_parameters`, :code:`fit` and :code:`evaluate` " -"methods insides :code:`XgbClient` class as follows." +"The `Flower Federated Learning Tutorial - Part 4 " +"`__ introduces ``Client``, the flexible API underlying " +"``NumPyClient``." msgstr "" -"然后,我们在 :code:`XgbClient` 类中重写 :code:`get_parameters`、:code:`fit` " -"和 :code:`evaluate` 方法如下。" +"Flower联邦学习教程 - 第4部分 `__ 介绍了``Client``,它是``NumPyClient``底层的灵活应用程序接口。" -#: ../../source/tutorial-quickstart-xgboost.rst:210 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:9 +msgid "Customize the client" +msgstr "自定义客户端" + +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:11 msgid "" -"Unlike neural network training, XGBoost trees are not started from a " -"specified random weights. In this case, we do not use :code:`get_parameters` " -"and :code:`set_parameters` to initialise model parameters for XGBoost. As a " -"result, let's return an empty tensor in :code:`get_parameters` when it is " -"called by the server at the first round." 
-msgstr "" -"与神经网络训练不同,XGBoost 树不是从指定的随机参数开始的。在这种情况下,我们" -"不使用 :code:`get_parameters` 和 :code:`set_parameters` 来初始化 XGBoost 的模" -"型参数。因此,当服务器在第一轮调用 :code:`get_parameters` 时,让我们在 :code:" -"`get_parameters` 中返回一个空张量。" +"Welcome to the fourth part of the Flower federated learning tutorial. In " +"the previous parts of this tutorial, we introduced federated learning " +"with PyTorch and Flower (`part 1 `__), we learned how " +"strategies can be used to customize the execution on both the server and " +"the clients (`part 2 `__), and we built our own " +"custom strategy from scratch (`part 3 `__)." +msgstr "" +"欢迎来到 Flower 联邦学习教程的第四部分。在本教程的前几部分中,我们介绍了 PyTorch 和 Flower 的联邦学习(`part 1 " +"`__),了解了如何使用策略来定制服务器和客户端的执行(`part 2 " +"`__),并从头开始构建了我们自己的定制策略(`part 3 " +"`__)。" -#: ../../source/tutorial-quickstart-xgboost.rst:251 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:14 msgid "" -"In :code:`fit`, at the first round, we call :code:`xgb.train()` to build up " -"the first set of trees. the returned Booster object and config are stored " -"in :code:`self.bst` and :code:`self.config`, respectively. From the second " -"round, we load the global model sent from server to :code:`self.bst`, and " -"then update model weights on local training data with function :code:" -"`local_boost` as follows:" +"In this notebook, we revisit ``NumPyClient`` and introduce a new " +"baseclass for building clients, simply named ``Client``. In previous " +"parts of this tutorial, we've based our client on ``NumPyClient``, a " +"convenience class which makes it easy to work with machine learning " +"libraries that have good NumPy interoperability. With ``Client``, we gain" +" a lot of flexibility that we didn't have before, but we'll also have to " +"do a few things the we didn't have to do before." msgstr "" -"在 :code:`fit`中,第一轮我们调用 :code:`xgb.train()`来建立第一组树,返回的 " -"Booster 对象和 config 分别存储在 :code:`self.bst` 和 :code:`self.config` 中。" -"从第二轮开始,我们将服务器发送的全局模型加载到 :code:`self.bst`,然后使用函" -"数 :code:`local_boost`更新本地训练数据的模型权重,如下所示:" +"在本笔记中,我们将重温 ``NumPyClient`` 并引入一个用于构建客户端的新基类,简单命名为 " +"``Client``。在本教程的前几部分中,我们的客户端基于``NumPyClient``,这是一个便捷类,可以让我们轻松地与具有良好 NumPy" +" 互操作性的机器学习库协同工作。有了 ``Client``,我们获得了很多以前没有的灵活性,但我们也必须做一些以前不需要做的事情。" -#: ../../source/tutorial-quickstart-xgboost.rst:269 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:18 msgid "" -"Given :code:`num_local_round`, we update trees by calling :code:`self.bst." -"update` method. After training, the last :code:`N=num_local_round` trees " -"will be extracted to send to the server." -msgstr "" -"给定 :code:`num_local_round`,我们通过调用 :code:`self.bst.update`方法更新" -"树。训练结束后,我们将提取最后一个 :code:`N=num_local_round` 树并发送给服务" -"器。" +"Let's go deeper and see what it takes to move from ``NumPyClient`` to " +"``Client``!" +msgstr "让我们深入了解一下从 ``NumPyClient`` 到 ``Client`` 的过程!" -#: ../../source/tutorial-quickstart-xgboost.rst:291 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:30 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:29 +msgid "Step 0: Preparation" +msgstr "步骤 0:准备工作" + +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:117 msgid "" -"In :code:`evaluate`, we call :code:`self.bst.eval_set` function to conduct " -"evaluation on valid set. The AUC value will be returned." +"Let's now load the CIFAR-10 training and test set, partition them into " +"ten smaller datasets (each split into training and validation set), and " +"wrap everything in their own ``DataLoader``." 
msgstr "" -"在 :code:`evaluate`中,我们调用 :code:`self.bst.eval_set`函数对有效集合进行评" -"估。将返回 AUC 值。" +"现在,让我们加载 CIFAR-10 训练集和测试集,将它们分割成十个较小的数据集(每个数据集又分为训练集和验证集),并将所有数据都封装在各自的 " +"``DataLoader`` 中。" -#: ../../source/tutorial-quickstart-xgboost.rst:294 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:259 +msgid "Step 1: Revisiting NumPyClient" +msgstr "步骤 1:重温 NumPyClient" + +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:261 msgid "" -"Now, we can create an instance of our class :code:`XgbClient` and add one " -"line to actually run this client:" +"So far, we've implemented our client by subclassing " +"``flwr.client.NumPyClient``. The three methods we implemented are " +"``get_parameters``, ``fit``, and ``evaluate``. Finally, we wrap the " +"creation of instances of this class in a function called ``client_fn``:" msgstr "" -"现在,我们可以创建一个 :code:`XgbClient` 类的实例,并添加一行来实际运行该客户" -"端:" +"到目前为止,我们通过子类化 ``flwr.client.NumPyClient`` " +"实现了我们的客户端。我们实现了三个方法:``get_parameters``, ``fit`, 和``evaluate``。最后,我们用一个名为 " +"``client_fn`` 的函数来创建该类的实例:" -#: ../../source/tutorial-quickstart-xgboost.rst:300 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:309 msgid "" -"That's it for the client. We only have to implement :code:`Client`and call :" -"code:`fl.client.start_client()`. The string :code:`\"[::]:8080\"` tells the " -"client which server to connect to. In our case we can run the server and the " -"client on the same machine, therefore we use :code:`\"[::]:8080\"`. If we " -"run a truly federated workload with the server and clients running on " -"different machines, all that needs to change is the :code:`server_address` " -"we point the client at." +"We've seen this before, there's nothing new so far. The only *tiny* " +"difference compared to the previous notebook is naming, we've changed " +"``FlowerClient`` to ``FlowerNumPyClient`` and ``client_fn`` to " +"``numpyclient_fn``. Let's run it to see the output we get:" msgstr "" -"这就是客户端。我们只需实现 :code:`客户端`并调用 :code:`fl.client." -"start_client()`。字符串 :code:`\"[::]:8080\"`会告诉客户端要连接的服务器。在本" -"例中,我们可以在同一台机器上运行服务器和客户端,因此我们使用 :code:" -"`\"[::]:8080\"`。如果我们运行的是真正的联邦工作负载,服务器和客户端运行在不同" -"的机器上,那么需要改变的只是客户端指向的 :code:`server_address`。" +"我们以前见过这种情况,目前没有什么新东西。与之前的笔记相比,唯一*小*的不同是命名,我们把 ``FlowerClient`` 改成了 " +"``FlowerNumPyClient``,把 `client_fn` 改成了 ``numpyclient_fn``。让我们运行它看看输出结果:" + +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:339 +msgid "" +"This works as expected, two clients are training for three rounds of " +"federated learning." +msgstr "结果不出所料,两个客户端正在进行三轮联邦学习训练。" -#: ../../source/tutorial-quickstart-xgboost.rst:311 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:341 msgid "" -"These updates are then sent to the *server* which will aggregate them to " -"produce a better model. Finally, the *server* sends this improved version of " -"the model back to each *client* to finish a complete FL round." +"Let's dive a little bit deeper and discuss how Flower executes this " +"simulation. Whenever a client is selected to do some work, " +"``start_simulation`` calls the function ``numpyclient_fn`` to create an " +"instance of our ``FlowerNumPyClient`` (along with loading the model and " +"the data)." 
msgstr "" -"然后,这些更新会被发送到*服务器*,由*服务器*聚合后生成一个更好的模型。最后,*" -"服务器*将这个改进版的模型发回给每个*客户端*,以完成一轮完整的 FL。" +"让我们再深入一点,讨论一下 Flower 是如何执行模拟的。每当一个客户端被选中进行工作时,`start_simulation`` 就会调用函数 " +"`numpyclient_fn` 来创建我们的 ``FlowerNumPyClient`` 实例(同时加载模型和数据)。" -#: ../../source/tutorial-quickstart-xgboost.rst:314 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:343 msgid "" -"In a file named :code:`server.py`, import Flower and FedXgbBagging from :" -"code:`flwr.server.strategy`." -msgstr "" -"在名为 :code:`server.py` 的文件中,从 :code:`flwr.server.strategy` 导入 " -"Flower 和 FedXgbBagging。" - -#: ../../source/tutorial-quickstart-xgboost.rst:316 -msgid "We first define a strategy for XGBoost bagging aggregation." -msgstr "我们首先定义了 XGBoost bagging聚合策略。" +"But here's the perhaps surprising part: Flower doesn't actually use the " +"``FlowerNumPyClient`` object directly. Instead, it wraps the object to " +"makes it look like a subclass of ``flwr.client.Client``, not " +"``flwr.client.NumPyClient``. In fact, the Flower core framework doesn't " +"know how to handle ``NumPyClient``'s, it only knows how to handle " +"``Client``'s. ``NumPyClient`` is just a convenience abstraction built on " +"top of ``Client``." +msgstr "" +"但令人惊讶的部分也许就在这里: Flower 实际上并不直接使用 ``FlowerNumPyClient`` " +"对象。相反,它封装了该对象,使其看起来像 ``flwr.client.Client`` 的子类,而不是 " +"``flwr.client.NumPyClient``。事实上,Flower 核心框架不知道如何处理 " +"``NumPyClient``,它只知道如何处理 ``Client``。``NumPyClient`` " +"只是建立在``Client``之上的便捷抽象类。" -#: ../../source/tutorial-quickstart-xgboost.rst:339 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:345 msgid "" -"We use two clients for this example. An :code:`evaluate_metrics_aggregation` " -"function is defined to collect and wighted average the AUC values from " -"clients." -msgstr "" -"本示例使用两个客户端。我们定义了一个 :code:`evaluate_metrics_aggregation` 函" -"数,用于收集客户机的 AUC 值并求取平均值。" +"Instead of building on top of ``NumPyClient``, we can directly build on " +"top of ``Client``." +msgstr "与其在 ``NumPyClient`` 上构建,我们可以直接在 ``Client`` 上构建。" -#: ../../source/tutorial-quickstart-xgboost.rst:342 -msgid "Then, we start the server:" -msgstr "然后,我们启动服务器:" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:357 +msgid "Step 2: Moving from ``NumPyClient`` to ``Client``" +msgstr "步骤 2:从 ``NumPyClient`` 移至 ``Client``" -#: ../../source/tutorial-quickstart-xgboost.rst:354 -msgid "Tree-based bagging aggregation" -msgstr "基于树的bagging聚合" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:359 +msgid "" +"Let's try to do the same thing using ``Client`` instead of " +"``NumPyClient``." +msgstr "让我们尝试使用 ``Client`` 代替 ``NumPyClient`` 做同样的事情。" -#: ../../source/tutorial-quickstart-xgboost.rst:356 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:465 msgid "" -"You must be curious about how bagging aggregation works. Let's look into the " -"details." -msgstr "您一定很好奇bagging聚合是如何工作的。让我们来详细了解一下。" +"Before we discuss the code in more detail, let's try to run it! Gotta " +"make sure our new ``Client``-based client works, right?" +msgstr "在详细讨论代码之前,让我们试着运行它!必须确保我们基于 ``Client`` 的新客户端能正常运行,对吗?" -#: ../../source/tutorial-quickstart-xgboost.rst:358 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:490 msgid "" -"In file :code:`flwr.server.strategy.fedxgb_bagging.py`, we define :code:" -"`FedXgbBagging` inherited from :code:`flwr.server.strategy.FedAvg`. 
Then, we " -"override the :code:`aggregate_fit`, :code:`aggregate_evaluate` and :code:" -"`evaluate` methods as follows:" -msgstr "" -"在文件 :code:`flwr.server.strategy.fedxgb_bagging.py`中,我们定义了从 :code:" -"`flwr.server.strategy.FedAvg`继承的 :code:`FedXgbBagging`。然后,我们覆盖 :" -"code:`aggregate_fit`、:code:`aggregate_evaluate` 和 :code:`evaluate` 方法如" -"下:" +"That's it, we're now using ``Client``. It probably looks similar to what " +"we've done with ``NumPyClient``. So what's the difference?" +msgstr "就是这样,我们现在开始使用 ``Client``。它看起来可能与我们使用 ``NumPyClient`` 所做的类似。那么有什么不同呢?" -#: ../../source/tutorial-quickstart-xgboost.rst:454 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:492 msgid "" -"In :code:`aggregate_fit`, we sequentially aggregate the clients' XGBoost " -"trees by calling :code:`aggregate()` function:" -msgstr "" -"在 :code:`aggregate_fit` 中,我们通过调用 :code:`aggregate()` 函数,按顺序聚" -"合客户端的 XGBoost 树:" +"First of all, it's more code. But why? The difference comes from the fact" +" that ``Client`` expects us to take care of parameter serialization and " +"deserialization. For Flower to be able to send parameters over the " +"network, it eventually needs to turn these parameters into ``bytes``. " +"Turning parameters (e.g., NumPy ``ndarray``'s) into raw bytes is called " +"serialization. Turning raw bytes into something more useful (like NumPy " +"``ndarray``'s) is called deserialization. Flower needs to do both: it " +"needs to serialize parameters on the server-side and send them to the " +"client, the client needs to deserialize them to use them for local " +"training, and then serialize the updated parameters again to send them " +"back to the server, which (finally!) deserializes them again in order to " +"aggregate them with the updates received from other clients." +msgstr "" +"首先,它的代码更多。但为什么呢?区别在于 ``Client`` 希望我们处理参数的序列化和反序列化。Flower " +"要想通过网络发送参数,最终需要将这些参数转化为 ``字节``。把参数(例如 NumPy 的 ``ndarray`` " +"参数)变成原始字节叫做序列化。将原始字节转换成更有用的东西(如 NumPy ``ndarray`)称为反序列化。Flower " +"需要同时做这两件事:它需要在服务器端序列化参数并将其发送到客户端,客户端需要反序列化参数以便将其用于本地训练,然后再次序列化更新后的参数并将其发送回服务器,服务器(最后)再次反序列化参数以便将其与从其他客户端接收到的更新汇总在一起。" -#: ../../source/tutorial-quickstart-xgboost.rst:513 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:495 msgid "" -"In this function, we first fetch the number of trees and the number of " -"parallel trees for the current and previous model by calling :code:" -"`_get_tree_nums`. Then, the fetched information will be aggregated. After " -"that, the trees (containing model weights) are aggregated to generate a new " -"tree model." +"The only *real* difference between Client and NumPyClient is that " +"NumPyClient takes care of serialization and deserialization for you. It " +"can do so because it expects you to return parameters as NumPy ndarray's," +" and it knows how to handle these. This makes working with machine " +"learning libraries that have good NumPy support (most of them) a breeze." msgstr "" -"在该函数中,我们首先通过调用 :code:`_get_tree_nums` 获取当前模型和上一个模型" -"的树数和并行树数。然后,对获取的信息进行聚合。然后,聚合树(包含模型参数)生" -"成新的树模型。" +"Client 与 NumPyClient 之间的唯一**真正区别在于,NumPyClient " +"会为你处理序列化和反序列化。NumPyClient之所以能做到这一点,是因为它预计你会以NumPy " +"ndarray的形式返回参数,而且它知道如何处理这些参数。这使得与具有良好 NumPy 支持的大多数机器学习库一起工作变得轻而易举。" -#: ../../source/tutorial-quickstart-xgboost.rst:518 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:497 msgid "" -"After traversal of all clients' models, a new global model is generated, " -"followed by the serialisation, and sending back to each client." 
+"In terms of API, there's one major difference: all methods in Client take" +" exactly one argument (e.g., ``FitIns`` in ``Client.fit``) and return " +"exactly one value (e.g., ``FitRes`` in ``Client.fit``). The methods in " +"``NumPyClient`` on the other hand have multiple arguments (e.g., " +"``parameters`` and ``config`` in ``NumPyClient.fit``) and multiple return" +" values (e.g., ``parameters``, ``num_example``, and ``metrics`` in " +"``NumPyClient.fit``) if there are multiple things to handle. These " +"``*Ins`` and ``*Res`` objects in ``Client`` wrap all the individual " +"values you're used to from ``NumPyClient``." msgstr "" -"在遍历所有客户端的模型后,会生成一个新的全局模型,然后进行序列化,并发回给每" -"个客户端。" +"在 API 方面,有一个主要区别:Client 中的所有方法都只接受一个参数(例如,``Client.fit`` 中的 " +"``FitIns``),并只返回一个值(例如,``Client.fit`` 中的 " +"``FitRes``)。另一方面,``NumPyClient``中的方法有多个参数(例如,``NumPyClient.fit``中的``parameters``和``config``)和多个返回值(例如,``NumPyClient.fit``中的``parameters``、``num_example``和``metrics``)。在" +" ``Client`` 中的这些 ``*Ins`` 和 ``*Res`` 对象封装了你在 ``NumPyClient`` 中习惯使用的所有单个值。" -#: ../../source/tutorial-quickstart-xgboost.rst:523 -msgid "Launch Federated XGBoost!" -msgstr "启动联邦 XGBoost!" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:510 +msgid "Step 3: Custom serialization" +msgstr "步骤 3:自定义序列化" -#: ../../source/tutorial-quickstart-xgboost.rst:585 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:512 msgid "" -"Congratulations! You've successfully built and run your first federated " -"XGBoost system. The AUC values can be checked in :code:" -"`metrics_distributed`. One can see that the average AUC increases over FL " -"rounds." -msgstr "" -"恭喜您!您已成功构建并运行了第一个联邦 XGBoost 系统。可以在 :code:" -"`metrics_distributed` 中查看 AUC 值。我们可以看到,平均 AUC 随 FL 轮数的增加" -"而增加。" +"Here we will explore how to implement custom serialization with a simple " +"example." +msgstr "下面我们将通过一个简单的示例来探讨如何实现自定义序列化。" -#: ../../source/tutorial-quickstart-xgboost.rst:590 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:514 msgid "" -"The full `source code `_ for this example can be found in :code:`examples/" -"xgboost-quickstart`." +"But first what is serialization? Serialization is just the process of " +"converting an object into raw bytes, and equally as important, " +"deserialization is the process of converting raw bytes back into an " +"object. This is very useful for network communication. Indeed, without " +"serialization, you could not just a Python object through the internet." msgstr "" -"此示例的`完整源代码 `_ 可在 :code:`examples/xgboost-quickstart` 中找到。" +"首先,什么是序列化?序列化只是将对象转换为原始字节的过程,同样重要的是,反序列化是将原始字节转换回对象的过程。这对网络通信非常有用。事实上,如果没有序列化,你就无法通过互联网传输一个" +" Python 对象。" -#: ../../source/tutorial-quickstart-xgboost.rst:594 -msgid "Comprehensive Federated XGBoost" -msgstr "综合的联邦 XGBoost" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:516 +msgid "" +"Federated Learning relies heavily on internet communication for training " +"by sending Python objects back and forth between the clients and the " +"server. This means that serialization is an essential part of Federated " +"Learning." +msgstr "通过在客户端和服务器之间来回发送 Python 对象,联合学习在很大程度上依赖于互联网通信进行训练。这意味着序列化是联邦学习的重要组成部分。" -#: ../../source/tutorial-quickstart-xgboost.rst:596 -#, fuzzy +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:518 msgid "" -"Now that you have known how federated XGBoost work with Flower, it's time to " -"run some more comprehensive experiments by customising the experimental " -"settings. 
In the xgboost-comprehensive example (`full code `_), we provide " -"more options to define various experimental setups, including aggregation " -"strategies, data partitioning and centralised/distributed evaluation. We " -"also support :doc:`Flower simulation ` making it " -"easy to simulate large client cohorts in a resource-aware manner. Let's take " -"a look!" +"In the following section, we will write a basic example where instead of " +"sending a serialized version of our ``ndarray``\\ s containing our " +"parameters, we will first convert the ``ndarray`` into sparse matrices, " +"before sending them. This technique can be used to save bandwidth, as in " +"certain cases where the weights of a model are sparse (containing many 0 " +"entries), converting them to a sparse matrix can greatly improve their " +"bytesize." msgstr "" -"既然您已经知道联合 XGBoost 如何与 Flower 协同工作,那么现在就该通过自定义实验" -"设置来运行一些更综合的实验了。在 xgboost-comprehensive 示例 (`完整代码 " -"`_)" -"中,我们提供了更多选项来定义各种实验设置,包括数据分区和集中/分布式评估。让我" -"们一起来看看!" +"在下面的章节中,我们将编写一个基本示例,在发送包含参数的 ``ndarray`` 前,我们将首先把 ``ndarray`` " +"转换为稀疏矩阵,而不是发送序列化版本。这种技术可以用来节省带宽,因为在某些情况下,模型的参数是稀疏的(包含许多 0 " +"条目),将它们转换成稀疏矩阵可以大大提高它们的字节数。" -#: ../../source/tutorial-quickstart-xgboost.rst:603 -#, fuzzy -msgid "Cyclic training" -msgstr "集中式训练" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:521 +msgid "Our custom serialization/deserialization functions" +msgstr "我们的定制序列化/反序列化功能" -#: ../../source/tutorial-quickstart-xgboost.rst:605 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:523 msgid "" -"In addition to bagging aggregation, we offer a cyclic training scheme, which " -"performs FL in a client-by-client fashion. Instead of aggregating multiple " -"clients, there is only one single client participating in the training per " -"round in the cyclic training scenario. The trained local XGBoost trees will " -"be passed to the next client as an initialised model for next round's " -"boosting." +"This is where the real serialization/deserialization will happen, " +"especially in ``ndarray_to_sparse_bytes`` for serialization and " +"``sparse_bytes_to_ndarray`` for deserialization." msgstr "" +"这才是真正的序列化/反序列化,尤其是在用于序列化的 ``ndarray_too_sparse_bytes`` 和用于反序列化的 " +"``sparse_bytes_too_ndarray`` 中。" -#: ../../source/tutorial-quickstart-xgboost.rst:609 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:525 msgid "" -"To do this, we first customise a :code:`ClientManager` in :code:" -"`server_utils.py`:" -msgstr "" +"Note that we imported the ``scipy.sparse`` library in order to convert " +"our arrays." +msgstr "请注意,为了转换数组,我们导入了 ``scipy.sparse`` 库。" -#: ../../source/tutorial-quickstart-xgboost.rst:649 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:613 +msgid "Client-side" +msgstr "客户端" + +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:615 msgid "" -"The customised :code:`ClientManager` samples all available clients in each " -"FL round based on the order of connection to the server. Then, we define a " -"new strategy :code:`FedXgbCyclic` in :code:`flwr.server.strategy." -"fedxgb_cyclic.py`, in order to sequentially select only one client in given " -"round and pass the received model to next client." -msgstr "" +"To be able to serialize our ``ndarray``\\ s into sparse parameters, we " +"will just have to call our custom functions in our " +"``flwr.client.Client``." 
+msgstr "为了能够将我们的 ``ndarray`` 序列化为稀疏参数,我们只需在 ``flwr.client.Client`` 中调用我们的自定义函数。" -#: ../../source/tutorial-quickstart-xgboost.rst:690 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:617 msgid "" -"Unlike the original :code:`FedAvg`, we don't perform aggregation here. " -"Instead, we just make a copy of the received client model as global model by " -"overriding :code:`aggregate_fit`." +"Indeed, in ``get_parameters`` we need to serialize the parameters we got " +"from our network using our custom ``ndarrays_to_sparse_parameters`` " +"defined above." msgstr "" +"事实上,在 `get_parameters` 中,我们需要使用上文定义的自定义 `ndarrays_too_sparse_parameters` " +"序列化从网络中获取的参数。" -#: ../../source/tutorial-quickstart-xgboost.rst:693 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:619 msgid "" -"Also, the customised :code:`configure_fit` and :code:`configure_evaluate` " -"methods ensure the clients to be sequentially selected given FL round:" +"In ``fit``, we first need to deserialize the parameters coming from the " +"server using our custom ``sparse_parameters_to_ndarrays`` and then we " +"need to serialize our local results with " +"``ndarrays_to_sparse_parameters``." msgstr "" +"在 ``fit`` 中,我们首先需要使用自定义的 ``sparse_parameters_to_ndarrays`` " +"反序列化来自服务器的参数,然后使用 ``ndarrays_to_sparse_parameters`` 序列化本地结果。" -#: ../../source/tutorial-quickstart-xgboost.rst:757 -msgid "Customised data partitioning" -msgstr "定制数据分区" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:621 +msgid "" +"In ``evaluate``, we will only need to deserialize the global parameters " +"with our custom function." +msgstr "在 ``evaluate`` 中,我们只需要用自定义函数反序列化全局参数。" -#: ../../source/tutorial-quickstart-xgboost.rst:759 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:725 +msgid "Server-side" +msgstr "服务器端" + +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:727 msgid "" -"In :code:`dataset.py`, we have a function :code:`instantiate_partitioner` to " -"instantiate the data partitioner based on the given :code:`num_partitions` " -"and :code:`partitioner_type`. Currently, we provide four supported " -"partitioner type to simulate the uniformity/non-uniformity in data quantity " -"(uniform, linear, square, exponential)." +"For this example, we will just use ``FedAvg`` as a strategy. To change " +"the serialization and deserialization here, we only need to reimplement " +"the ``evaluate`` and ``aggregate_fit`` functions of ``FedAvg``. The other" +" functions of the strategy will be inherited from the super class " +"``FedAvg``." 
msgstr "" -"在 :code:`dataset.py` 中,我们有一个函数 :code:`instantiate_partitioner` 来根" -"据给定的 :code:`num_partitions` 和 :code:`partitioner_type` 来实例化数据分区" -"器。目前,我们提供四种支持的分区器类型(均匀、线性、正方形、指数)来模拟数据" -"量的均匀性/非均匀性。" +"在本例中,我们将只使用 ``FedAvg`` 作为策略。要改变这里的序列化和反序列化,我们只需重新实现 ``FedAvg`` 的 " +"``evaluate`` 和 ``aggregate_fit`` 函数。策略的其他函数将从超类 ``FedAvg`` 继承。" -#: ../../source/tutorial-quickstart-xgboost.rst:790 -msgid "Customised centralised/distributed evaluation" -msgstr "定制的集中/分布式评估" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:729 +msgid "As you can see only one line as change in ``evaluate``:" +msgstr "正如你所看到的,``evaluate``中只修改了一行:" -#: ../../source/tutorial-quickstart-xgboost.rst:792 -#, fuzzy +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:735 msgid "" -"To facilitate centralised evaluation, we define a function in :code:" -"`server_utils.py`:" -msgstr "为便于集中评估,我们在 :code:`server.py` 中定义了一个函数:" +"And for ``aggregate_fit``, we will first deserialize every result we " +"received:" +msgstr "而对于 ``aggregate_fit``,我们将首先反序列化收到的每个结果:" -#: ../../source/tutorial-quickstart-xgboost.rst:824 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:744 +msgid "And then serialize the aggregated result:" +msgstr "然后将汇总结果序列化:" + +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:903 +msgid "We can now run our custom serialization example!" +msgstr "现在我们可以运行自定义序列化示例!" + +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:934 msgid "" -"This function returns a evaluation function which instantiates a :code:" -"`Booster` object and loads the global model weights to it. The evaluation is " -"conducted by calling :code:`eval_set()` method, and the tested AUC value is " -"reported." +"In this part of the tutorial, we've seen how we can build clients by " +"subclassing either ``NumPyClient`` or ``Client``. ``NumPyClient`` is a " +"convenience abstraction that makes it easier to work with machine " +"learning libraries that have good NumPy interoperability. ``Client`` is a" +" more flexible abstraction that allows us to do things that are not " +"possible in ``NumPyClient``. In order to do so, it requires us to handle " +"parameter serialization and deserialization ourselves." msgstr "" -"此函数返回一个评估函数,该函数实例化一个 :code:`Booster` 对象,并向其加载全局" -"模型参数。评估通过调用 :code:`eval_set()` 方法进行,并报告测试的 AUC 值。" +"在本部分教程中,我们已经了解了如何通过子类化 ``NumPyClient`` 或 ``Client`` 来构建客户端。NumPyClient " +"\"是一个便捷的抽象类,可以让我们更容易地与具有良好NumPy互操作性的机器学习库一起工作。``Client``是一个更灵活的抽象类,允许我们做一些在`NumPyClient``中做不到的事情。为此,它要求我们自己处理参数序列化和反序列化。" -#: ../../source/tutorial-quickstart-xgboost.rst:827 -#, fuzzy +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:952 msgid "" -"As for distributed evaluation on the clients, it's same as the quick-start " -"example by overriding the :code:`evaluate()` method insides the :code:" -"`XgbClient` class in :code:`client_utils.py`." -msgstr "" -"至于客户端上的分布式评估,与快速启动示例相同,通过覆盖 :code:`client.py` 中 :" -"code:`XgbClient` 类内部的 :code:`evaluate()` 方法。" +"This is the final part of the Flower tutorial (for now!), " +"congratulations! You're now well equipped to understand the rest of the " +"documentation. 
There are many topics we didn't cover in the tutorial, we " +"recommend the following resources:" +msgstr "这暂时是 Flower 教程的最后一部分,恭喜您!您现在已经具备了理解其余文档的能力。本教程还有许多内容没有涉及,我们推荐您参考以下资源:" -#: ../../source/tutorial-quickstart-xgboost.rst:831 -#, fuzzy -msgid "Flower simulation" -msgstr "运行模拟" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:954 +msgid "`Read Flower Docs `__" +msgstr "阅读Flower文档 `__" -#: ../../source/tutorial-quickstart-xgboost.rst:832 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:955 msgid "" -"We also provide an example code (:code:`sim.py`) to use the simulation " -"capabilities of Flower to simulate federated XGBoost training on either a " -"single machine or a cluster of machines." -msgstr "" +"`Check out Flower Code Examples " +"`__" +msgstr "查看 Flower 代码示例 `__" -#: ../../source/tutorial-quickstart-xgboost.rst:866 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:956 msgid "" -"After importing all required packages, we define a :code:`main()` function " -"to perform the simulation process:" -msgstr "" +"`Use Flower Baselines for your research " +"`__" +msgstr "使用 \"Flower Baselines \"进行研究 `__" -#: ../../source/tutorial-quickstart-xgboost.rst:921 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:957 msgid "" -"We first load the dataset and perform data partitioning, and the pre-" -"processed data is stored in a :code:`list`. After the simulation begins, the " -"clients won't need to pre-process their partitions again." -msgstr "" +"`Watch Flower Summit 2023 videos `__" +msgstr "观看 2023 年Flower峰会视频 `__" -#: ../../source/tutorial-quickstart-xgboost.rst:924 -msgid "Then, we define the strategies and other hyper-parameters:" -msgstr "" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:9 +msgid "Get started with Flower" +msgstr "开始使用Flower" -#: ../../source/tutorial-quickstart-xgboost.rst:975 -msgid "" -"After that, we start the simulation by calling :code:`fl.simulation." -"start_simulation`:" -msgstr "" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:11 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:11 +msgid "Welcome to the Flower federated learning tutorial!" +msgstr "欢迎阅读Flower联邦学习教程!" -#: ../../source/tutorial-quickstart-xgboost.rst:995 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:13 +#, fuzzy msgid "" -"One of key parameters for :code:`start_simulation` is :code:`client_fn` " -"which returns a function to construct a client. We define it as follows:" +"In this notebook, we'll build a federated learning system using Flower, " +"`Flower Datasets `__ and PyTorch. In " +"part 1, we use PyTorch for the model training pipeline and data loading. " +"In part 2, we continue to federate the PyTorch-based pipeline using " +"Flower." msgstr "" +"在本笔记中,我们将使用 Flower 和 PyTorch 构建一个联邦学习系统。在第一部分中,我们使用 PyTorch " +"进行模型训练和数据加载。在第二部分中,我们将继续使用 Flower 联邦化基于 PyTorch 的框架。" -#: ../../source/tutorial-quickstart-xgboost.rst:1038 -msgid "Arguments parser" -msgstr "参数解析器" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:17 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:19 +msgid "Let's get started!" +msgstr "让我们开始吧!" 
-#: ../../source/tutorial-quickstart-xgboost.rst:1040 -#, fuzzy +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:31 msgid "" -"In :code:`utils.py`, we define the arguments parsers for clients, server and " -"simulation, allowing users to specify different experimental settings. Let's " -"first see the sever side:" -msgstr "" -"在 :code:`utils.py` 中,我们定义了客户端和服务器端的参数解析器,允许用户指定" -"不同的实验设置。让我们先看看服务器端:" +"Before we begin with any actual code, let's make sure that we have " +"everything we need." +msgstr "在开始编写实际代码之前,让我们先确保我们已经准备好了所需的一切。" -#: ../../source/tutorial-quickstart-xgboost.rst:1086 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:45 #, fuzzy msgid "" -"This allows user to specify training strategies / the number of total " -"clients / FL rounds / participating clients / clients for evaluation, and " -"evaluation fashion. Note that with :code:`--centralised-eval`, the sever " -"will do centralised evaluation and all functionalities for client evaluation " -"will be disabled." -msgstr "" -"这允许用户指定总客户数/FL 轮数/参与客户数/评估客户数以及评估方式。请注意,如" -"果使用 :code:`--centralised-eval`,服务器将进行集中评估,客户端评估的所有功能" -"将被禁用。" - -#: ../../source/tutorial-quickstart-xgboost.rst:1090 -msgid "Then, the argument parser on client side:" -msgstr "然后是客户端的参数解析器:" +"Next, we install the necessary packages for PyTorch (``torch`` and " +"``torchvision``), Flower Datasets (``flwr-datasets``) and Flower " +"(``flwr``):" +msgstr "接下来,我们为 PyTorch(`torch`` 和`torchvision``)和 Flower(`flwr`)安装必要的软件包:" -#: ../../source/tutorial-quickstart-xgboost.rst:1144 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:105 #, fuzzy msgid "" -"This defines various options for client data partitioning. Besides, clients " -"also have an option to conduct evaluation on centralised test set by " -"setting :code:`--centralised-eval`, as well as an option to perform scaled " -"learning rate based on the number of clients by setting :code:`--scaled-lr`." -msgstr "" -"这定义了客户端数据分区的各种选项。此外,通过设置 :code:`-centralised-eval`," -"客户端还可以选择在集中测试集上进行评估。" - -#: ../../source/tutorial-quickstart-xgboost.rst:1148 -msgid "We also have an argument parser for simulation:" -msgstr "" - -#: ../../source/tutorial-quickstart-xgboost.rst:1226 -msgid "This integrates all arguments for both client and server sides." +"It is possible to switch to a runtime that has GPU acceleration enabled " +"(on Google Colab: ``Runtime > Change runtime type > Hardware accelerator:" +" GPU > Save``). Note, however, that Google Colab is not always able to " +"offer GPU acceleration. If you see an error related to GPU availability " +"in one of the following sections, consider switching back to CPU-based " +"execution by setting ``DEVICE = torch.device(\"cpu\")``. If the runtime " +"has GPU acceleration enabled, you should see the output ``Training on " +"cuda``, otherwise it'll say ``Training on cpu``." 
msgstr "" +"可以切换到已启用 GPU 加速的运行时(在 Google Colab 上: 运行时 > 更改运行时类型 > 硬件加速: GPU > " +"保存``)。但请注意,Google Colab 并非总能提供 GPU 加速。如果在以下部分中看到与 GPU 可用性相关的错误,请考虑通过设置 " +"``DEVICE = torch.device(\"cpu\")`` 切回基于 CPU 的执行。如果运行时已启用 GPU " +"加速,你应该会看到输出``Training on cuda``,否则会显示``Training on cpu``。" -#: ../../source/tutorial-quickstart-xgboost.rst:1229 -msgid "Example commands" -msgstr "命令示例" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:118 +msgid "Loading the data" +msgstr "加载数据" -#: ../../source/tutorial-quickstart-xgboost.rst:1231 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:120 #, fuzzy msgid "" -"To run a centralised evaluated experiment with bagging strategy on 5 clients " -"with exponential distribution for 50 rounds, we first start the server as " -"below:" +"Federated learning can be applied to many different types of tasks across" +" different domains. In this tutorial, we introduce federated learning by " +"training a simple convolutional neural network (CNN) on the popular " +"CIFAR-10 dataset. CIFAR-10 can be used to train image classifiers that " +"distinguish between images from ten different classes: 'airplane', " +"'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', and " +"'truck'." msgstr "" -"为了在 5 个客户端上进行 50 轮指数分布的集中评估实验,我们首先启动服务器,如下" -"所示:" - -#: ../../source/tutorial-quickstart-xgboost.rst:1238 -msgid "Then, on each client terminal, we start the clients:" -msgstr "然后,我们在每个客户终端上启动客户机:" +"联邦学习可应用于不同领域的多种不同类型任务。在本教程中,我们将通过在流行的 CIFAR-10 数据集上训练一个简单的卷积神经网络 (CNN) " +"来介绍联合学习。CIFAR-10 可用于训练图像分类器,以区分来自十个不同类别的图像:" -#: ../../source/tutorial-quickstart-xgboost.rst:1244 -msgid "To run the same experiment with Flower simulation:" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:131 +msgid "" +"We simulate having multiple datasets from multiple organizations (also " +"called the \"cross-silo\" setting in federated learning) by splitting the" +" original CIFAR-10 dataset into multiple partitions. Each partition will " +"represent the data from a single organization. We're doing this purely " +"for experimentation purposes, in the real world there's no need for data " +"splitting because each organization already has their own data (so the " +"data is naturally partitioned)." msgstr "" +"我们通过将原始 CIFAR-10 数据集拆分成多个分区来模拟来自多个组织的多个数据集(也称为联邦学习中的 \"跨分区 " +"\"设置)。每个分区代表一个组织的数据。我们这样做纯粹是为了实验目的,在现实世界中不需要拆分数据,因为每个组织都已经有了自己的数据(所以数据是自然分区的)。" -#: ../../source/tutorial-quickstart-xgboost.rst:1250 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:133 #, fuzzy msgid "" -"The full `code `_ for this comprehensive example can be found in :code:" -"`examples/xgboost-comprehensive`." +"Each organization will act as a client in the federated learning system. " +"So having ten organizations participate in a federation means having ten " +"clients connected to the federated learning server." +msgstr "每个组织都将充当联邦学习系统中的客户端。因此,有十个组织参与联邦学习,就意味着有十个客户端连接到联邦学习服务器:" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:144 +msgid "" +"Let's now create the Federated Dataset abstraction that from ``flwr-" +"datasets`` that partitions the CIFAR-10. 
We will create small training " +"and test set for each edge device and wrap each of them into a PyTorch " +"``DataLoader``:" msgstr "" -"此综合示例的全部`源代码 `_ 可在 :code:`examples/xgboost-comprehensive` 中找" -"到。" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:9 -msgid "Build a strategy from scratch" -msgstr "从零开始制定策略" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:198 +#, fuzzy +msgid "" +"We now have a list of ten training sets and ten validation sets " +"(``trainloaders`` and ``valloaders``) representing the data of ten " +"different organizations. Each ``trainloader``/``valloader`` pair contains" +" 4000 training examples and 1000 validation examples. There's also a " +"single ``testloader`` (we did not split the test set). Again, this is " +"only necessary for building research or educational systems, actual " +"federated learning systems have their data naturally distributed across " +"multiple partitions." +msgstr "" +"现在,我们有一个包含十个训练集和十个验证集(`trainloaders`` 和`valloaders``)的列表,代表十个不同组织的数据。每对 " +"``trainloader``/``valloader`` 都包含 4500 个训练示例和 500 个验证数据。还有一个单独的 " +"``测试加载器``(我们没有拆分测试集)。同样,这只有在构建研究或教育系统时才有必要,实际的联邦学习系统的数据自然分布在多个分区中。" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:11 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:201 msgid "" -"Welcome to the third part of the Flower federated learning tutorial. In " -"previous parts of this tutorial, we introduced federated learning with " -"PyTorch and Flower (`part 1 `__) and we learned how strategies can be " -"used to customize the execution on both the server and the clients (`part 2 " -"`__)." -msgstr "" -"欢迎来到 Flower 联邦学习教程的第三部分。在本教程的前几部分,我们介绍了 " -"PyTorch 和 Flower 的联邦学习(`part 1 `__),并学习了如何使用策略来定" -"制服务器和客户端的执行(`part 2 `__)。" +"Let's take a look at the first batch of images and labels in the first " +"training set (i.e., ``trainloaders[0]``) before we move on:" +msgstr "在继续之前,让我们先看看第一个训练集中的第一批图像和标签(即 ``trainloaders[0]``):" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:13 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:240 msgid "" -"In this notebook, we'll continue to customize the federated learning system " -"we built previously by creating a custom version of FedAvg (again, using " -"`Flower `__ and `PyTorch `__)." +"The output above shows a random batch of images from the first " +"``trainloader`` in our list of ten ``trainloaders``. It also prints the " +"labels associated with each image (i.e., one of the ten possible labels " +"we've seen above). If you run the cell again, you should see another " +"batch of images." 
msgstr "" -"在本笔记中,我们将通过创建 FedAvg 的自定义版本(再次使用 `Flower `__ 和 `PyTorch `__),继续定制我们之前构建" -"的联邦学习系统。" +"上面的输出显示了来自十个 \"trainloader \"列表中第一个 \"trainloader " +"\"的随机图像。它还打印了与每幅图像相关的标签(即我们上面看到的十个可能标签之一)。如果您再次运行该单元,应该会看到另一批图像。" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:15 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:16 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:15 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:15 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:252 +msgid "Step 1: Centralized Training with PyTorch" +msgstr "步骤 1:使用 PyTorch 进行集中训练" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:263 msgid "" -"`Star Flower on GitHub `__ ⭐️ and join the " -"Flower community on Slack to connect, ask questions, and get help: `Join " -"Slack `__ 🌼 We'd love to hear from you in the " -"``#introductions`` channel! And if anything is unclear, head over to the " -"``#questions`` channel." +"Next, we're going to use PyTorch to define a simple convolutional neural " +"network. This introduction assumes basic familiarity with PyTorch, so it " +"doesn't cover the PyTorch-related aspects in full detail. If you want to " +"dive deeper into PyTorch, we recommend `DEEP LEARNING WITH PYTORCH: A 60 " +"MINUTE BLITZ " +"`__." msgstr "" -"`Star Flower on GitHub `__ ⭐️ 并加入 Slack 上" -"的 Flower 社区,进行交流、提问并获得帮助: 加入 Slack `__ 🌼 我们希望在 ``#introductions`` 频道听到您的声音!如果有任何" -"不清楚的地方,请访问 ``#questions`` 频道。" - -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:17 -msgid "Let's build a new ``Strategy`` from scratch!" -msgstr "让我们从头开始构建一个新的``Strategy``!" +"接下来,我们将使用 PyTorch 来定义一个简单的卷积神经网络。本介绍假定您对 PyTorch 有基本的了解,因此不会详细介绍与 PyTorch" +" 相关的内容。如果你想更深入地了解 PyTorch,我们推荐你阅读 `DEEP LEARNING WITH PYTORCH: a 60 " +"minute blitz " +"`__。" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:29 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:29 -msgid "Preparation" -msgstr "准备工作" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:275 +msgid "Defining the model" +msgstr "定义模型" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:31 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:32 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:31 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:277 msgid "" -"Before we begin with the actual code, let's make sure that we have " -"everything we need." 
-msgstr "在开始实际代码之前,让我们先确保我们已经准备好了所需的一切。" +"We use the simple CNN described in the `PyTorch tutorial " +"`__:" +msgstr "" +"我们使用` PyTorch 教程 " +"`__ 中描述的简单 CNN:" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:43 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:44 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:43 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:43 -msgid "Installing dependencies" -msgstr "安装依赖项" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:314 +msgid "Let's continue with the usual training and test functions:" +msgstr "让我们继续进行常规的训练和测试功能:" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:45 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:46 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:45 -msgid "First, we install the necessary packages:" -msgstr "首先,我们安装必要的软件包:" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:374 +msgid "Training the model" +msgstr "训练模型" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:65 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:66 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:65 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:65 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:376 msgid "" -"Now that we have all dependencies installed, we can import everything we " -"need for this tutorial:" -msgstr "现在我们已经安装了所有依赖项,可以导入本教程所需的所有内容:" +"We now have all the basic building blocks we need: a dataset, a model, a " +"training function, and a test function. Let's put them together to train " +"the model on the dataset of one of our organizations " +"(``trainloaders[0]``). This simulates the reality of most machine " +"learning projects today: each organization has their own data and trains " +"models only on this internal data:" +msgstr "现在我们拥有了所需的所有基本构件:数据集、模型、训练函数和测试函数。让我们把它们放在一起,在我们其中一个组织的数据集(``trainloaders[0]``)上训练模型。这模拟了当今大多数机器学习项目的实际情况:每个组织都有自己的数据,并且只在这些内部数据上训练模型:" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:101 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:102 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:101 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:406 msgid "" -"It is possible to switch to a runtime that has GPU acceleration enabled (on " -"Google Colab: ``Runtime > Change runtime type > Hardware acclerator: GPU > " -"Save``). Note, however, that Google Colab is not always able to offer GPU " -"acceleration. If you see an error related to GPU availability in one of the " -"following sections, consider switching back to CPU-based execution by " -"setting ``DEVICE = torch.device(\"cpu\")``. If the runtime has GPU " -"acceleration enabled, you should see the output ``Training on cuda``, " -"otherwise it'll say ``Training on cpu``." 
-msgstr "" -"可以切换到已启用 GPU 加速的运行时(在 Google Colab 上: 运行时 > 更改运行时类" -"型 > 硬件加速: GPU > 保存``)。但请注意,Google Colab 并非总能提供 GPU 加" -"速。如果在以下部分中看到与 GPU 可用性相关的错误,请考虑通过设置 ``DEVICE = " -"torch.device(\"cpu\")`` 切回基于 CPU 的执行。如果运行时已启用 GPU 加速,你应" -"该会看到输出``Training on cuda``,否则会显示``Training on cpu``。" +"Training the simple CNN on our CIFAR-10 split for 5 epochs should result " +"in a test set accuracy of about 41%, which is not good, but at the same " +"time, it doesn't really matter for the purposes of this tutorial. The " +"intent was just to show a simplistic centralized training pipeline that " +"sets the stage for what comes next - federated learning!" +msgstr "" +"在我们的 CIFAR-10 分片上对简单 CNN 进行 5 个遍历的训练后,测试集的准确率应为 " +"41%,这并不理想,但同时对本教程而言也并不重要。我们只是想展示一个简单的集中式训练流程,为接下来的联邦学习做好铺垫!" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:114 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:115 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:114 -msgid "Data loading" -msgstr "数据加载" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:418 +msgid "Step 2: Federated Learning with Flower" +msgstr "步骤 2:使用 Flower 联邦学习" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:116 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:116 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:420 msgid "" -"Let's now load the CIFAR-10 training and test set, partition them into ten " -"smaller datasets (each split into training and validation set), and wrap " -"everything in their own ``DataLoader``. We introduce a new parameter " -"``num_clients`` which allows us to call ``load_datasets`` with different " -"numbers of clients." +"Step 1 demonstrated a simple centralized training pipeline. All data was " +"in one place (i.e., a single ``trainloader`` and a single ``valloader``)." +" Next, we'll simulate a situation where we have multiple datasets in " +"multiple organizations and where we train a model over these " +"organizations using federated learning." 
msgstr "" -"现在,让我们加载 CIFAR-10 训练集和测试集,将它们分割成 10 个较小的数据集(每" -"个数据集又分为训练集和验证集),并将所有数据都封装在各自的 ``DataLoader`` " -"中。我们引入了一个新参数 ``num_clients``,它允许我们使用不同数量的客户端调用 " -"``load_datasets``。" +"步骤 1 演示了一个简单的集中式训练流程。所有数据都在一个地方(即一个 \"trainloader \"和一个 " +"\"valloader\")。接下来,我们将模拟在多个组织中拥有多个数据集的情况,并使用联邦学习在这些组织中训练一个模型。" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:167 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:168 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:167 -msgid "Model training/evaluation" -msgstr "模型培训/评估" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:432 +msgid "Updating model parameters" +msgstr "更新模型参数" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:169 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:170 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:169 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:434 msgid "" -"Let's continue with the usual model definition (including ``set_parameters`` " -"and ``get_parameters``), training and test functions:" -msgstr "" -"让我们继续使用常见的模型定义(包括 `set_parameters` 和 `get_parameters`)、训" -"练和测试函数:" - -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:258 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:258 -msgid "Flower client" -msgstr "Flower 客户端" +"In federated learning, the server sends the global model parameters to " +"the client, and the client updates the local model with the parameters " +"received from the server. It then trains the model on the local data " +"(which changes the model parameters locally) and sends the " +"updated/changed model parameters back to the server (or, alternatively, " +"it sends just the gradients back to the server, not the full model " +"parameters)." +msgstr "在联邦学习中,服务器将全局模型参数发送给客户端,客户端根据从服务器接收到的参数更新本地模型。然后,客户端根据本地数据对模型进行训练(在本地更改模型参数),并将更新/更改后的模型参数发回服务器(或者,客户端只将梯度参数发回服务器,而不是全部模型参数)。" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:260 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:260 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:436 msgid "" -"To implement the Flower client, we (again) create a subclass of ``flwr." -"client.NumPyClient`` and implement the three methods ``get_parameters``, " -"``fit``, and ``evaluate``. Here, we also pass the ``cid`` to the client and " -"use it log additional details:" +"We need two helper functions to update the local model with parameters " +"received from the server and to get the updated model parameters from the" +" local model: ``set_parameters`` and ``get_parameters``. The following " +"two functions do just that for the PyTorch model above." 
msgstr "" -"为了实现 Flower 客户端,我们(再次)创建了 ``flwr.client.NumPyClient`` 的子" -"类,并实现了 ``get_parameters``、``fit`` 和 ``evaluate``三个方法。在这里,我" -"们还将 ``cid`` 传递给客户端,并使用它记录其他详细信息:" +"我们需要两个辅助函数,用从服务器接收到的参数更新本地模型,并从本地模型获取更新后的模型参数:`` " +"set_parameters```和`get_parameters``。下面两个函数就是为上面的 PyTorch 模型做这些工作的。" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:308 -msgid "Let's test what we have so far before we continue:" -msgstr "在继续之前,让我们先测试一下我们目前掌握的情况:" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:438 +msgid "" +"The details of how this works are not really important here (feel free to" +" consult the PyTorch documentation if you want to learn more). In " +"essence, we use ``state_dict`` to access PyTorch model parameter tensors." +" The parameter tensors are then converted to/from a list of NumPy " +"ndarray's (which Flower knows how to serialize/deserialize):" +msgstr "" +"在这里,如何工作的细节并不重要(如果你想了解更多,请随时查阅 PyTorch 文档)。本质上,我们使用 ``state_dict`` 访问 " +"PyTorch 模型参数张量。然后,参数张量会被转换成/转换成 NumPy ndarray 列表(Flower 知道如何序列化/反序列化):" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:339 -msgid "Build a Strategy from scratch" -msgstr "从零开始构建策略" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:466 +msgid "Implementing a Flower client" +msgstr "实现 Flower 客户端" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:341 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:468 msgid "" -"Let’s overwrite the ``configure_fit`` method such that it passes a higher " -"learning rate (potentially also other hyperparameters) to the optimizer of a " -"fraction of the clients. We will keep the sampling of the clients as it is " -"in ``FedAvg`` and then change the configuration dictionary (one of the " -"``FitIns`` attributes)." +"With that out of the way, let's move on to the interesting part. " +"Federated learning systems consist of a server and multiple clients. In " +"Flower, we create clients by implementing subclasses of " +"``flwr.client.Client`` or ``flwr.client.NumPyClient``. We use " +"``NumPyClient`` in this tutorial because it is easier to implement and " +"requires us to write less boilerplate." 
msgstr "" -"让我们重写 ``configure_fit`` 方法,使其向一部分客户的优化器传递更高的学习率" -"(可能还有其他超参数)。我们将保持 ``FedAvg`` 中的客户端采样,然后更改配置字" -"典(``FitIns`` 属性之一)。" +"说完这些,让我们进入有趣的部分。联邦学习系统由一个服务器和多个客户端组成。在 Flower 中,我们通过实现 " +"``flwr.client.Client`` 或 ``flwr.client.NumPyClient`` " +"的子类来创建客户端。在本教程中,我们使用``NumPyClient``,因为它更容易实现,需要我们编写的模板也更少。" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:507 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:470 msgid "" -"The only thing left is to use the newly created custom Strategy " -"``FedCustom`` when starting the experiment:" -msgstr "剩下的唯一工作就是在启动实验时使用新创建的自定义策略 ``FedCustom`` :" +"To implement the Flower client, we create a subclass of " +"``flwr.client.NumPyClient`` and implement the three methods " +"``get_parameters``, ``fit``, and ``evaluate``:" +msgstr "" +"为实现 Flower 客户端,我们创建了 ``flwr.client.NumPyClient`` 的子类,并实现了 " +"``get_parameters``、``fit`` 和``evaluate`` 三个方法:" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:534 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:932 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:697 -msgid "Recap" -msgstr "回顾" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:472 +msgid "``get_parameters``: Return the current local model parameters" +msgstr "``get_parameters``: 返回当前本地模型参数" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:536 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:473 msgid "" -"In this notebook, we’ve seen how to implement a custom strategy. A custom " -"strategy enables granular control over client node configuration, result " -"aggregation, and more. To define a custom strategy, you only have to " -"overwrite the abstract methods of the (abstract) base class ``Strategy``. To " -"make custom strategies even more powerful, you can pass custom functions to " -"the constructor of your new class (``__init__``) and then call these " -"functions whenever needed." 
-msgstr "" -"在本笔记中,我们了解了如何实施自定义策略。自定义策略可以对客户端节点配置、结" -"果聚合等进行细粒度控制。要定义自定义策略,只需覆盖(抽象)基类 ``Strategy`` " -"的抽象方法即可。为使自定义策略更加强大,您可以将自定义函数传递给新类的构造函" -"数(`__init__``),然后在需要时调用这些函数。" +"``fit``: Receive model parameters from the server, train the model " +"parameters on the local data, and return the (updated) model parameters " +"to the server" +msgstr "``fit``: 从服务器接收模型参数,在本地数据上训练模型参数,并将(更新的)模型参数返回服务器" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:550 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:948 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:729 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:715 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:369 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:474 msgid "" -"Before you continue, make sure to join the Flower community on Slack: `Join " -"Slack `__" -msgstr "" -"在继续之前,请务必加入 Slack 上的 Flower 社区:`Join Slack `__" +"``evaluate``: Receive model parameters from the server, evaluate the " +"model parameters on the local data, and return the evaluation result to " +"the server" +msgstr "``evaluate ``: 从服务器接收模型参数,在本地数据上评估模型参数,并将评估结果返回服务器" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:552 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:950 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:731 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:717 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:371 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:476 msgid "" -"There's a dedicated ``#questions`` channel if you need help, but we'd also " -"love to hear who you are in ``#introductions``!" +"We mentioned that our clients will use the previously defined PyTorch " +"components for model training and evaluation. Let's see a simple Flower " +"client implementation that brings everything together:" msgstr "" -"如果您需要帮助,我们有专门的 ``#questions`` 频道,但我们也很乐意在 " -"``#introductions`` 中了解您是谁!" +"我们提到,我们的客户端将使用之前定义的 PyTorch 组件进行模型训练和评估。让我们来看看一个简单的 Flower " +"客户端实现,它将一切都整合在一起:" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:554 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:513 msgid "" -"The `Flower Federated Learning Tutorial - Part 4 `__ introduces " -"``Client``, the flexible API underlying ``NumPyClient``." -msgstr "" -"Flower联邦学习教程 - 第4部分 `__ 介绍了``Client``,它是``NumPyClient``底" -"层的灵活应用程序接口。" - -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:9 -msgid "Customize the client" -msgstr "自定义客户端" +"Our class ``FlowerClient`` defines how local training/evaluation will be " +"performed and allows Flower to call the local training/evaluation through" +" ``fit`` and ``evaluate``. Each instance of ``FlowerClient`` represents a" +" *single client* in our federated learning system. Federated learning " +"systems have multiple clients (otherwise, there's not much to federate), " +"so each client will be represented by its own instance of " +"``FlowerClient``. If we have, for example, three clients in our workload," +" then we'd have three instances of ``FlowerClient``. Flower calls " +"``FlowerClient.fit`` on the respective instance when the server selects a" +" particular client for training (and ``FlowerClient.evaluate`` for " +"evaluation)." 
+msgstr "" +"我们的类 ``FlowerClient`` 定义了本地训练/评估的执行方式,并允许 Flower 通过 ``fit`` 和 " +"``evaluate`` 调用本地训练/评估。每个 ``FlowerClient`` " +"实例都代表联邦学习系统中的*单个客户端*。联邦学习系统有多个客户端(否则就没有什么可联邦的),因此每个客户端都将由自己的 " +"``FlowerClient`` 实例来代表。例如,如果我们的工作负载中有三个客户端,那么我们就会有三个 ``FlowerClient`` " +"实例。当服务器选择特定客户端进行训练时,Flower 会调用相应实例上的 ``FlowerClient.fit`` (评估时调用 " +"``FlowerClient.evaluate``)。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:11 -msgid "" -"Welcome to the fourth part of the Flower federated learning tutorial. In the " -"previous parts of this tutorial, we introduced federated learning with " -"PyTorch and Flower (`part 1 `__), we learned how strategies can be used " -"to customize the execution on both the server and the clients (`part 2 " -"`__), and we built our own custom strategy from scratch (`part " -"3 `__)." -msgstr "" -"欢迎来到 Flower 联邦学习教程的第四部分。在本教程的前几部分中,我们介绍了 " -"PyTorch 和 Flower 的联邦学习(`part 1 `__),了解了如何使用策略来定制" -"服务器和客户端的执行(`part 2 `__),并从头开始构建了我们自己的" -"定制策略(`part 3 `__)。" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:517 +msgid "Using the Virtual Client Engine" +msgstr "使用虚拟客户端引擎" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:14 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:519 msgid "" -"In this notebook, we revisit ``NumPyClient`` and introduce a new baseclass " -"for building clients, simply named ``Client``. In previous parts of this " -"tutorial, we've based our client on ``NumPyClient``, a convenience class " -"which makes it easy to work with machine learning libraries that have good " -"NumPy interoperability. With ``Client``, we gain a lot of flexibility that " -"we didn't have before, but we'll also have to do a few things the we didn't " -"have to do before." +"In this notebook, we want to simulate a federated learning system with 10" +" clients on a single machine. This means that the server and all 10 " +"clients will live on a single machine and share resources such as CPU, " +"GPU, and memory. Having 10 clients would mean having 10 instances of " +"``FlowerClient`` in memory. Doing this on a single machine can quickly " +"exhaust the available memory resources, even if only a subset of these " +"clients participates in a single round of federated learning." msgstr "" -"在本笔记中,我们将重温 ``NumPyClient`` 并引入一个用于构建客户端的新基类,简单" -"命名为 ``Client``。在本教程的前几部分中,我们的客户端基于``NumPyClient``,这" -"是一个便捷类,可以让我们轻松地与具有良好 NumPy 互操作性的机器学习库协同工作。" -"有了 ``Client``,我们获得了很多以前没有的灵活性,但我们也必须做一些以前不需要" -"做的事情。" +"在本笔记中,我们要模拟一个联邦学习系统,在一台机器上有 10 个客户端。这意味着服务器和所有 10 个客户端都将位于一台机器上,并共享 " +"CPU、GPU 和内存等资源。有 10 个客户端就意味着内存中有 10 个 ``FlowerClient`` " +"实例。在单台机器上这样做会很快耗尽可用的内存资源,即使这些客户端中只有一个子集参与了一轮联邦学习。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:18 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:521 msgid "" -"Let's go deeper and see what it takes to move from ``NumPyClient`` to " -"``Client``!" -msgstr "让我们深入了解一下从 ``NumPyClient`` 到 ``Client`` 的过程!" +"In addition to the regular capabilities where server and clients run on " +"multiple machines, Flower, therefore, provides special simulation " +"capabilities that create ``FlowerClient`` instances only when they are " +"actually necessary for training or evaluation. To enable the Flower " +"framework to create clients when necessary, we need to implement a " +"function called ``client_fn`` that creates a ``FlowerClient`` instance on" +" demand. 
Flower calls ``client_fn`` whenever it needs an instance of one " +"particular client to call ``fit`` or ``evaluate`` (those instances are " +"usually discarded after use, so they should not keep any local state). " +"Clients are identified by a client ID, or short ``cid``. The ``cid`` can " +"be used, for example, to load different local data partitions for " +"different clients, as can be seen below:" +msgstr "" +"除了服务器和客户端在多台机器上运行的常规功能外,Flower 还提供了特殊的模拟功能,即只有在训练或评估实际需要时才创建 " +"``FlowerClient`` 实例。为了让 Flower 框架能在必要时创建客户端,我们需要实现一个名为 ``client_fn`` " +"的函数,它能按需创建一个 ``FlowerClient`` 实例。每当 Flower 需要一个特定的客户端实例来调用 ``fit`` 或 " +"``evaluate`` 时,它就会调用 " +"``client_fn``(这些实例在使用后通常会被丢弃,因此它们不应保留任何本地状态)。客户端由一个客户端 ID 或简短的 ``cid`` " +"标识。例如,可以使用 ``cid`` 为不同的客户端加载不同的本地数据分区,如下所示:" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:30 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:29 -msgid "Step 0: Preparation" -msgstr "步骤 0:准备工作" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:556 +msgid "Starting the training" +msgstr "开始训练" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:117 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:558 msgid "" -"Let's now load the CIFAR-10 training and test set, partition them into ten " -"smaller datasets (each split into training and validation set), and wrap " -"everything in their own ``DataLoader``." +"We now have the class ``FlowerClient`` which defines client-side " +"training/evaluation and ``client_fn`` which allows Flower to create " +"``FlowerClient`` instances whenever it needs to call ``fit`` or " +"``evaluate`` on one particular client. The last step is to start the " +"actual simulation using ``flwr.simulation.start_simulation``." msgstr "" -"现在,让我们加载 CIFAR-10 训练集和测试集,将它们分割成十个较小的数据集(每个" -"数据集又分为训练集和验证集),并将所有数据都封装在各自的 ``DataLoader`` 中。" +"现在我们有了定义客户端训练/评估的类 ``FlowerClient`` 和允许 Flower 在需要调用某个客户端的 ``fit` 或 " +"``evaluate` 时创建 ``FlowerClient`` 实例的 ``client_fn` 类。最后一步是使用 " +"``flwr.simulation.start_simulation`` 启动实际模拟。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:259 -msgid "Step 1: Revisiting NumPyClient" -msgstr "步骤 1:重温 NumPyClient" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:560 +msgid "" +"The function ``start_simulation`` accepts a number of arguments, amongst " +"them the ``client_fn`` used to create ``FlowerClient`` instances, the " +"number of clients to simulate (``num_clients``), the number of federated " +"learning rounds (``num_rounds``), and the strategy. The strategy " +"encapsulates the federated learning approach/algorithm, for example, " +"*Federated Averaging* (FedAvg)." +msgstr "" +"函数 ``start_simulation`` 接受许多参数,其中包括用于创建 ``FlowerClient`` 实例的 " +"``client_fn``、要模拟的客户端数量(``num_clients``)、联邦学习轮数(``num_rounds``)和策略。策略封装了联邦学习方法/算法,例如*联邦平均*" +" (FedAvg)。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:261 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:562 msgid "" -"So far, we've implemented our client by subclassing ``flwr.client." -"NumPyClient``. The three methods we implemented are ``get_parameters``, " -"``fit``, and ``evaluate``. Finally, we wrap the creation of instances of " -"this class in a function called ``client_fn``:" +"Flower has a number of built-in strategies, but we can also use our own " +"strategy implementations to customize nearly all aspects of the federated" +" learning approach. 
For this example, we use the built-in ``FedAvg`` " +"implementation and customize it using a few basic parameters. The last " +"step is the actual call to ``start_simulation`` which - you guessed it - " +"starts the simulation:" msgstr "" -"到目前为止,我们通过子类化 ``flwr.client.NumPyClient`` 实现了我们的客户端。我" -"们实现了三个方法:``get_parameters``, ``fit`, 和``evaluate``。最后,我们用一" -"个名为 ``client_fn`` 的函数来创建该类的实例:" +"Flower 有许多内置策略,但我们也可以使用自己的策略实现来定制联邦学习方法的几乎所有方面。在本例中,我们使用内置的 ``FedAvg`` " +"实现,并使用一些基本参数对其进行定制。最后一步是实际调用 ``start_simulation``开始模拟:" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:309 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:608 +msgid "Behind the scenes" +msgstr "幕后" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:610 +msgid "So how does this work? How does Flower execute this simulation?" +msgstr "那么它是如何工作的呢?Flower 如何进行模拟?" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:612 +#, python-format msgid "" -"We've seen this before, there's nothing new so far. The only *tiny* " -"difference compared to the previous notebook is naming, we've changed " -"``FlowerClient`` to ``FlowerNumPyClient`` and ``client_fn`` to " -"``numpyclient_fn``. Let's run it to see the output we get:" +"When we call ``start_simulation``, we tell Flower that there are 10 " +"clients (``num_clients=10``). Flower then goes ahead an asks the " +"``FedAvg`` strategy to select clients. ``FedAvg`` knows that it should " +"select 100% of the available clients (``fraction_fit=1.0``), so it goes " +"ahead and selects 10 random clients (i.e., 100% of 10)." msgstr "" -"我们以前见过这种情况,目前没有什么新东西。与之前的笔记相比,唯一*小*的不同是" -"命名,我们把 ``FlowerClient`` 改成了 ``FlowerNumPyClient``,把 `client_fn` 改" -"成了 ``numpyclient_fn``。让我们运行它看看输出结果:" +"当我们调用 ``start_simulation`` 时,我们会告诉 Flower 有 10 " +"个客户(`num_clients=10``)。然后,Flower 会要求 ``FedAvg`` 策略选择客户。``FedAvg`` 知道它应该选择" +" 100%的可用客户(``fraction_fit=1.0``),所以它会随机选择 10 个客户(即 10 的 100%)。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:339 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:614 msgid "" -"This works as expected, two clients are training for three rounds of " -"federated learning." -msgstr "结果不出所料,两个客户端正在进行三轮联邦学习训练。" +"Flower then asks the selected 10 clients to train the model. When the " +"server receives the model parameter updates from the clients, it hands " +"those updates over to the strategy (*FedAvg*) for aggregation. The " +"strategy aggregates those updates and returns the new global model, which" +" then gets used in the next round of federated learning." +msgstr "" +"然后,Flower 会要求选定的 10 " +"个客户端对模型进行训练。服务器收到客户端的模型参数更新后,会将这些更新交给策略(*FedAvg*)进行聚合。策略会聚合这些更新并返回新的全局模型,然后将其用于下一轮联邦学习。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:341 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:626 +msgid "Where's the accuracy?" +msgstr "准确度在哪里找?" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:628 msgid "" -"Let's dive a little bit deeper and discuss how Flower executes this " -"simulation. Whenever a client is selected to do some work, " -"``start_simulation`` calls the function ``numpyclient_fn`` to create an " -"instance of our ``FlowerNumPyClient`` (along with loading the model and the " -"data)." +"You may have noticed that all metrics except for ``losses_distributed`` " +"are empty. Where did the ``{\"accuracy\": float(accuracy)}`` go?" 
msgstr "" -"让我们再深入一点,讨论一下 Flower 是如何执行模拟的。每当一个客户端被选中进行" -"工作时,`start_simulation`` 就会调用函数 `numpyclient_fn` 来创建我们的 " -"``FlowerNumPyClient`` 实例(同时加载模型和数据)。" +"您可能已经注意到,除了 ``losses_distributed`` 以外,所有指标都是空的。{\"准确度\": " +"float(准确度)}``去哪儿了?" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:343 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:630 msgid "" -"But here's the perhaps surprising part: Flower doesn't actually use the " -"``FlowerNumPyClient`` object directly. Instead, it wraps the object to makes " -"it look like a subclass of ``flwr.client.Client``, not ``flwr.client." -"NumPyClient``. In fact, the Flower core framework doesn't know how to handle " -"``NumPyClient``'s, it only knows how to handle ``Client``'s. ``NumPyClient`` " -"is just a convenience abstraction built on top of ``Client``." -msgstr "" -"但令人惊讶的部分也许就在这里: Flower 实际上并不直接使用 " -"``FlowerNumPyClient`` 对象。相反,它封装了该对象,使其看起来像 ``flwr.client." -"Client`` 的子类,而不是 ``flwr.client.NumPyClient``。事实上,Flower 核心框架" -"不知道如何处理 ``NumPyClient``,它只知道如何处理 ``Client``。``NumPyClient`` " -"只是建立在``Client``之上的便捷抽象类。" +"Flower can automatically aggregate losses returned by individual clients," +" but it cannot do the same for metrics in the generic metrics dictionary " +"(the one with the ``accuracy`` key). Metrics dictionaries can contain " +"very different kinds of metrics and even key/value pairs that are not " +"metrics at all, so the framework does not (and can not) know how to " +"handle these automatically." +msgstr "" +"Flower 可以自动汇总单个客户端返回的损失值,但无法对通用度量字典中的度量进行同样的处理(即带有 \"准确度 " +"\"键的度量字典)。度量值字典可以包含非常不同种类的度量值,甚至包含根本不是度量值的键/值对,因此框架不知道(也无法知道)如何自动处理这些度量值。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:345 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:632 msgid "" -"Instead of building on top of ``NumPyClient``, we can directly build on top " -"of ``Client``." -msgstr "与其在 ``NumPyClient`` 上构建,我们可以直接在 ``Client`` 上构建。" +"As users, we need to tell the framework how to handle/aggregate these " +"custom metrics, and we do so by passing metric aggregation functions to " +"the strategy. The strategy will then call these functions whenever it " +"receives fit or evaluate metrics from clients. The two possible functions" +" are ``fit_metrics_aggregation_fn`` and " +"``evaluate_metrics_aggregation_fn``." +msgstr "" +"作为用户,我们需要告诉框架如何处理/聚合这些自定义指标,为此,我们将指标聚合函数传递给策略。然后,只要从客户端接收到拟合或评估指标,策略就会调用这些函数。两个可能的函数是" +" ``fit_metrics_aggregation_fn`` 和 ``evaluate_metrics_aggregation_fn``。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:357 -msgid "Step 2: Moving from ``NumPyClient`` to ``Client``" -msgstr "步骤 2:从 ``NumPyClient`` 移至 ``Client``" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:634 +msgid "" +"Let's create a simple weighted averaging function to aggregate the " +"``accuracy`` metric we return from ``evaluate``:" +msgstr "让我们创建一个简单的加权平均函数来汇总从 ``evaluate`` 返回的 ``accuracy`` 指标:" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:359 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:660 msgid "" -"Let's try to do the same thing using ``Client`` instead of ``NumPyClient``." 
-msgstr "让我们尝试使用 ``Client`` 代替 ``NumPyClient`` 做同样的事情。" +"The only thing left to do is to tell the strategy to call this function " +"whenever it receives evaluation metric dictionaries from the clients:" +msgstr "剩下要做的就是告诉策略,每当它从客户端接收到评估度量字典时,都要调用这个函数:" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:465 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:697 msgid "" -"Before we discuss the code in more detail, let's try to run it! Gotta make " -"sure our new ``Client``-based client works, right?" +"We now have a full system that performs federated training and federated " +"evaluation. It uses the ``weighted_average`` function to aggregate custom" +" evaluation metrics and calculates a single ``accuracy`` metric across " +"all clients on the server side." msgstr "" -"在详细讨论代码之前,让我们试着运行它!必须确保我们基于 ``Client`` 的新客户端" -"能正常运行,对吗?" +"我们现在有了一个完整的系统,可以执行联邦训练和联邦评估。它使用 ``weighted_average`` " +"函数汇总自定义评估指标,并在服务器端计算所有客户端的单一 ``accuracy`` 指标。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:490 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:699 msgid "" -"That's it, we're now using ``Client``. It probably looks similar to what " -"we've done with ``NumPyClient``. So what's the difference?" +"The other two categories of metrics (``losses_centralized`` and " +"``metrics_centralized``) are still empty because they only apply when " +"centralized evaluation is being used. Part two of the Flower tutorial " +"will cover centralized evaluation." msgstr "" -"就是这样,我们现在开始使用 ``Client``。它看起来可能与我们使用 " -"``NumPyClient`` 所做的类似。那么有什么不同呢?" +"其他两类指标(`losses_centralized`` 和 " +"`metrics_centralized`)仍然是空的,因为它们只适用于集中评估。Flower 教程的第二部分将介绍集中式评估。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:492 -msgid "" -"First of all, it's more code. But why? The difference comes from the fact " -"that ``Client`` expects us to take care of parameter serialization and " -"deserialization. For Flower to be able to send parameters over the network, " -"it eventually needs to turn these parameters into ``bytes``. Turning " -"parameters (e.g., NumPy ``ndarray``'s) into raw bytes is called " -"serialization. Turning raw bytes into something more useful (like NumPy " -"``ndarray``'s) is called deserialization. Flower needs to do both: it needs " -"to serialize parameters on the server-side and send them to the client, the " -"client needs to deserialize them to use them for local training, and then " -"serialize the updated parameters again to send them back to the server, " -"which (finally!) deserializes them again in order to aggregate them with the " -"updates received from other clients." -msgstr "" -"首先,它的代码更多。但为什么呢?区别在于 ``Client`` 希望我们处理参数的序列化" -"和反序列化。Flower 要想通过网络发送参数,最终需要将这些参数转化为 ``字节``。" -"把参数(例如 NumPy 的 ``ndarray`` 参数)变成原始字节叫做序列化。将原始字节转" -"换成更有用的东西(如 NumPy ``ndarray`)称为反序列化。Flower 需要同时做这两件" -"事:它需要在服务器端序列化参数并将其发送到客户端,客户端需要反序列化参数以便" -"将其用于本地训练,然后再次序列化更新后的参数并将其发送回服务器,服务器(最" -"后)再次反序列化参数以便将其与从其他客户端接收到的更新汇总在一起。" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:711 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:351 +msgid "Final remarks" +msgstr "结束语" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:495 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:713 msgid "" -"The only *real* difference between Client and NumPyClient is that " -"NumPyClient takes care of serialization and deserialization for you. 
It can " -"do so because it expects you to return parameters as NumPy ndarray's, and it " -"knows how to handle these. This makes working with machine learning " -"libraries that have good NumPy support (most of them) a breeze." +"Congratulations, you just trained a convolutional neural network, " +"federated over 10 clients! With that, you understand the basics of " +"federated learning with Flower. The same approach you've seen can be used" +" with other machine learning frameworks (not just PyTorch) and tasks (not" +" just CIFAR-10 images classification), for example NLP with Hugging Face " +"Transformers or speech with SpeechBrain." msgstr "" -"Client 与 NumPyClient 之间的唯一**真正区别在于,NumPyClient 会为你处理序列化" -"和反序列化。NumPyClient之所以能做到这一点,是因为它预计你会以NumPy ndarray的" -"形式返回参数,而且它知道如何处理这些参数。这使得与具有良好 NumPy 支持的大多数" -"机器学习库一起工作变得轻而易举。" - -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:497 -msgid "" -"In terms of API, there's one major difference: all methods in Client take " -"exactly one argument (e.g., ``FitIns`` in ``Client.fit``) and return exactly " -"one value (e.g., ``FitRes`` in ``Client.fit``). The methods in " -"``NumPyClient`` on the other hand have multiple arguments (e.g., " -"``parameters`` and ``config`` in ``NumPyClient.fit``) and multiple return " -"values (e.g., ``parameters``, ``num_example``, and ``metrics`` in " -"``NumPyClient.fit``) if there are multiple things to handle. These ``*Ins`` " -"and ``*Res`` objects in ``Client`` wrap all the individual values you're " -"used to from ``NumPyClient``." -msgstr "" -"在 API 方面,有一个主要区别:Client 中的所有方法都只接受一个参数(例如," -"``Client.fit`` 中的 ``FitIns``),并只返回一个值(例如,``Client.fit`` 中的 " -"``FitRes``)。另一方面,``NumPyClient``中的方法有多个参数(例如," -"``NumPyClient.fit``中的``parameters``和``config``)和多个返回值(例如," -"``NumPyClient.fit``中的``parameters``、``num_example``和``metrics``)。在 " -"``Client`` 中的这些 ``*Ins`` 和 ``*Res`` 对象封装了你在 ``NumPyClient`` 中习" -"惯使用的所有单个值。" - -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:510 -msgid "Step 3: Custom serialization" -msgstr "步骤 3:自定义序列化" +"恭喜您,你刚刚训练了一个由 10 个客户端组成的卷积神经网络!这样,你就了解了使用 Flower " +"进行联邦学习的基础知识。你所看到的方法同样适用于其他机器学习框架(不只是 PyTorch)和任务(不只是 CIFAR-10 图像分类),例如使用 " +"Hugging Face Transformers 的 NLP 或使用 SpeechBrain 的语音。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:512 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:715 msgid "" -"Here we will explore how to implement custom serialization with a simple " -"example." -msgstr "下面我们将通过一个简单的示例来探讨如何实现自定义序列化。" +"In the next notebook, we're going to cover some more advanced concepts. " +"Want to customize your strategy? Initialize parameters on the server " +"side? Or evaluate the aggregated model on the server side? We'll cover " +"all this and more in the next tutorial." +msgstr "在下一个笔记中,我们将介绍一些更先进的概念。想定制你的策略吗?在服务器端初始化参数?或者在服务器端评估聚合模型?我们将在下一个教程中介绍所有这些内容以及更多。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:514 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:733 msgid "" -"But first what is serialization? Serialization is just the process of " -"converting an object into raw bytes, and equally as important, " -"deserialization is the process of converting raw bytes back into an object. " -"This is very useful for network communication. Indeed, without " -"serialization, you could not just a Python object through the internet." 
+"The `Flower Federated Learning Tutorial - Part 2 " +"`__ goes into more depth about strategies and all " +"the advanced things you can build with them." msgstr "" -"首先,什么是序列化?序列化只是将对象转换为原始字节的过程,同样重要的是,反序" -"列化是将原始字节转换回对象的过程。这对网络通信非常有用。事实上,如果没有序列" -"化,你就无法通过互联网传输一个 Python 对象。" +"`Flower 联邦学习教程 - 第 2 部分 `__ 更深入地介绍了策略以及可以使用策略构建的所有高级功能。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:516 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:9 +msgid "Use a federated learning strategy" +msgstr "使用联邦学习策略" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:11 msgid "" -"Federated Learning relies heavily on internet communication for training by " -"sending Python objects back and forth between the clients and the server. " -"This means that serialization is an essential part of Federated Learning." +"Welcome to the next part of the federated learning tutorial. In previous " +"parts of this tutorial, we introduced federated learning with PyTorch and" +" Flower (`part 1 `__)." msgstr "" -"通过在客户端和服务器之间来回发送 Python 对象,联合学习在很大程度上依赖于互联" -"网通信进行训练。这意味着序列化是联邦学习的重要组成部分。" +"欢迎来到联邦学习教程的下一部分。在本教程的前几部分,我们介绍了使用 PyTorch 和 Flower 进行联邦学习(`第 1 部分 " +"`___)。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:518 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:13 msgid "" -"In the following section, we will write a basic example where instead of " -"sending a serialized version of our ``ndarray``\\ s containing our " -"parameters, we will first convert the ``ndarray`` into sparse matrices, " -"before sending them. This technique can be used to save bandwidth, as in " -"certain cases where the weights of a model are sparse (containing many 0 " -"entries), converting them to a sparse matrix can greatly improve their " -"bytesize." +"In this notebook, we'll begin to customize the federated learning system " +"we built in the introductory notebook (again, using `Flower " +"`__ and `PyTorch `__)." msgstr "" -"在下面的章节中,我们将编写一个基本示例,在发送包含参数的 ``ndarray`` 前,我们" -"将首先把 ``ndarray`` 转换为稀疏矩阵,而不是发送序列化版本。这种技术可以用来节" -"省带宽,因为在某些情况下,模型的参数是稀疏的(包含许多 0 条目),将它们转换成" -"稀疏矩阵可以大大提高它们的字节数。" +"在本笔记中,我们将开始定制在入门笔记中构建的联邦学习系统(再次使用 `Flower `__ 和 " +"`PyTorch `__)。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:521 -msgid "Our custom serialization/deserialization functions" -msgstr "我们的定制序列化/反序列化功能" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:17 +msgid "Let's move beyond FedAvg with Flower strategies!" +msgstr "让我们超越 FedAvg,采用Flower策略!" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:523 -msgid "" -"This is where the real serialization/deserialization will happen, especially " -"in ``ndarray_to_sparse_bytes`` for serialization and " -"``sparse_bytes_to_ndarray`` for deserialization." -msgstr "" -"这才是真正的序列化/反序列化,尤其是在用于序列化的 " -"``ndarray_too_sparse_bytes`` 和用于反序列化的 ``sparse_bytes_too_ndarray`` " -"中。" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:309 +msgid "Strategy customization" +msgstr "策略定制" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:525 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:311 msgid "" -"Note that we imported the ``scipy.sparse`` library in order to convert our " -"arrays." -msgstr "请注意,为了转换数组,我们导入了 ``scipy.sparse`` 库。" +"So far, everything should look familiar if you've worked through the " +"introductory notebook. 
With that, we're ready to introduce a number of " +"new features." +msgstr "到目前为止,如果您已经阅读过入门笔记本,那么一切都应该很熟悉了。接下来,我们将介绍一些新功能。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:613 -msgid "Client-side" -msgstr "客户端" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:323 +msgid "Server-side parameter **initialization**" +msgstr "服务器端参数 **初始化**" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:615 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:325 msgid "" -"To be able to serialize our ``ndarray``\\ s into sparse parameters, we will " -"just have to call our custom functions in our ``flwr.client.Client``." +"Flower, by default, initializes the global model by asking one random " +"client for the initial parameters. In many cases, we want more control " +"over parameter initialization though. Flower therefore allows you to " +"directly pass the initial parameters to the Strategy:" msgstr "" -"为了能够将我们的 ``ndarray`` 序列化为稀疏参数,我们只需在 ``flwr.client." -"Client`` 中调用我们的自定义函数。" +"默认情况下,Flower 会通过向一个随机客户端询问初始参数来初始化全局模型。但在许多情况下,我们需要对参数初始化进行更多控制。因此,Flower" +" 允许您直接将初始参数传递给策略:" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:617 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:370 msgid "" -"Indeed, in ``get_parameters`` we need to serialize the parameters we got " -"from our network using our custom ``ndarrays_to_sparse_parameters`` defined " -"above." +"Passing ``initial_parameters`` to the ``FedAvg`` strategy prevents Flower" +" from asking one of the clients for the initial parameters. If we look " +"closely, we can see that the logs do not show any calls to the " +"``FlowerClient.get_parameters`` method." msgstr "" -"事实上,在 `get_parameters` 中,我们需要使用上文定义的自定义 " -"`ndarrays_too_sparse_parameters` 序列化从网络中获取的参数。" +"向 ``FedAvg`` 策略传递 ``initial_parameters`` 可以防止 Flower " +"向其中一个客户端询问初始参数。如果我们仔细观察,就会发现日志中没有显示对 ``FlowerClient.get_parameters`` " +"方法的任何调用。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:619 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:382 +msgid "Starting with a customized strategy" +msgstr "从定制战略开始" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:384 msgid "" -"In ``fit``, we first need to deserialize the parameters coming from the " -"server using our custom ``sparse_parameters_to_ndarrays`` and then we need " -"to serialize our local results with ``ndarrays_to_sparse_parameters``." +"We've seen the function ``start_simulation`` before. It accepts a number " +"of arguments, amongst them the ``client_fn`` used to create " +"``FlowerClient`` instances, the number of clients to simulate " +"``num_clients``, the number of rounds ``num_rounds``, and the strategy." msgstr "" -"在 ``fit`` 中,我们首先需要使用自定义的 ``sparse_parameters_to_ndarrays`` 反" -"序列化来自服务器的参数,然后使用 ``ndarrays_to_sparse_parameters`` 序列化本地" -"结果。" +"我们以前见过函数 ``start_simulation``。它接受许多参数,其中包括用于创建 ``FlowerClient`` 实例的 " +"``client_fn``、要模拟的客户数量 ``num_clients``、回合数 ``num_rounds``和策略。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:621 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:386 msgid "" -"In ``evaluate``, we will only need to deserialize the global parameters with " -"our custom function." 
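To make the server-side parameter initialization described in the entries above concrete, here is a minimal sketch. It assumes Flower's 1.x Python API (``fl.common.ndarrays_to_parameters`` and ``fl.server.strategy.FedAvg``) and uses a small stand-in ``torch.nn`` model in place of the tutorial's CNN:

```python
import flwr as fl
import torch.nn as nn

# Stand-in for the tutorial's CNN; any torch.nn.Module works the same way.
net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

# Extract the initial weights as a list of NumPy ndarrays ...
ndarrays = [val.cpu().numpy() for _, val in net.state_dict().items()]

# ... and pass them to the strategy, so Flower never has to ask a random
# client for its parameters via FlowerClient.get_parameters.
strategy = fl.server.strategy.FedAvg(
    initial_parameters=fl.common.ndarrays_to_parameters(ndarrays),
)
```

With ``initial_parameters`` set, the first round starts from these weights and no ``get_parameters`` call to a client should appear in the logs.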
-msgstr "在 ``evaluate`` 中,我们只需要用自定义函数反序列化全局参数。" +"The strategy encapsulates the federated learning approach/algorithm, for " +"example, ``FedAvg`` or ``FedAdagrad``. Let's try to use a different " +"strategy this time:" +msgstr "该策略封装了联邦学习方法/算法,例如`FedAvg``或`FedAdagrad``。这次让我们尝试使用不同的策略:" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:725 -msgid "Server-side" -msgstr "服务器端" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:424 +msgid "Server-side parameter **evaluation**" +msgstr "服务器端参数**评估**" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:727 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:426 msgid "" -"For this example, we will just use ``FedAvg`` as a strategy. To change the " -"serialization and deserialization here, we only need to reimplement the " -"``evaluate`` and ``aggregate_fit`` functions of ``FedAvg``. The other " -"functions of the strategy will be inherited from the super class ``FedAvg``." -msgstr "" -"在本例中,我们将只使用 ``FedAvg`` 作为策略。要改变这里的序列化和反序列化,我" -"们只需重新实现 ``FedAvg`` 的 ``evaluate`` 和 ``aggregate_fit`` 函数。策略的其" -"他函数将从超类 ``FedAvg`` 继承。" +"Flower can evaluate the aggregated model on the server-side or on the " +"client-side. Client-side and server-side evaluation are similar in some " +"ways, but different in others." +msgstr "Flower 可以在服务器端或客户端评估聚合模型。客户端和服务器端评估在某些方面相似,但也有不同之处。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:729 -msgid "As you can see only one line as change in ``evaluate``:" -msgstr "正如你所看到的,``evaluate``中只修改了一行:" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:428 +msgid "" +"**Centralized Evaluation** (or *server-side evaluation*) is conceptually " +"simple: it works the same way that evaluation in centralized machine " +"learning does. If there is a server-side dataset that can be used for " +"evaluation purposes, then that's great. We can evaluate the newly " +"aggregated model after each round of training without having to send the " +"model to clients. We're also fortunate in the sense that our entire " +"evaluation dataset is available at all times." +msgstr "**集中评估**(或*服务器端评估*)在概念上很简单:它的工作方式与集中式机器学习中的评估方式相同。如果有一个服务器端数据集可用于评估目的,那就太好了。我们可以在每一轮训练后对新聚合的模型进行评估,而无需将模型发送给客户端。我们也很幸运,因为我们的整个评估数据集随时可用。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:735 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:430 msgid "" -"And for ``aggregate_fit``, we will first deserialize every result we " -"received:" -msgstr "而对于 ``aggregate_fit``,我们将首先反序列化收到的每个结果:" +"**Federated Evaluation** (or *client-side evaluation*) is more complex, " +"but also more powerful: it doesn't require a centralized dataset and " +"allows us to evaluate models over a larger set of data, which often " +"yields more realistic evaluation results. In fact, many scenarios require" +" us to use **Federated Evaluation** if we want to get representative " +"evaluation results at all. But this power comes at a cost: once we start " +"to evaluate on the client side, we should be aware that our evaluation " +"dataset can change over consecutive rounds of learning if those clients " +"are not always available. Moreover, the dataset held by each client can " +"also change over consecutive rounds. This can lead to evaluation results " +"that are not stable, so even if we would not change the model, we'd see " +"our evaluation results fluctuate over consecutive rounds." 
+msgstr "**联邦评估**(或*客户端评估*)更为复杂,但也更为强大:它不需要集中的数据集,允许我们在更大的数据集上对模型进行评估,这通常会产生更真实的评估结果。事实上,如果我们想得到有代表性的评估结果,很多情况下都需要使用**联邦评估**。但是,这种能力是有代价的:一旦我们开始在客户端进行评估,我们就应该意识到,如果这些客户端并不总是可用,我们的评估数据集可能会在连续几轮学习中发生变化。此外,每个客户端所拥有的数据集也可能在连续几轮学习中发生变化。这可能会导致评估结果不稳定,因此即使我们不改变模型,也会看到评估结果在连续几轮中波动。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:744 -msgid "And then serialize the aggregated result:" -msgstr "然后将汇总结果序列化:" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:433 +msgid "" +"We've seen how federated evaluation works on the client side (i.e., by " +"implementing the ``evaluate`` method in ``FlowerClient``). Now let's see " +"how we can evaluate aggregated model parameters on the server-side:" +msgstr "" +"我们已经了解了联邦评估如何在客户端工作(即通过在 ``FlowerClient`` 中实现 ``evaluate`` " +"方法)。现在让我们看看如何在服务器端评估聚合模型参数:" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:903 -msgid "We can now run our custom serialization example!" -msgstr "现在我们可以运行自定义序列化示例!" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:490 +msgid "Sending/receiving arbitrary values to/from clients" +msgstr "向/从客户端发送/接收任意值" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:934 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:492 msgid "" -"In this part of the tutorial, we've seen how we can build clients by " -"subclassing either ``NumPyClient`` or ``Client``. ``NumPyClient`` is a " -"convenience abstraction that makes it easier to work with machine learning " -"libraries that have good NumPy interoperability. ``Client`` is a more " -"flexible abstraction that allows us to do things that are not possible in " -"``NumPyClient``. In order to do so, it requires us to handle parameter " -"serialization and deserialization ourselves." -msgstr "" -"在本部分教程中,我们已经了解了如何通过子类化 ``NumPyClient`` 或 ``Client`` 来" -"构建客户端。NumPyClient \"是一个便捷的抽象类,可以让我们更容易地与具有良好" -"NumPy互操作性的机器学习库一起工作。``Client``是一个更灵活的抽象类,允许我们做" -"一些在`NumPyClient``中做不到的事情。为此,它要求我们自己处理参数序列化和反序" -"列化。" +"In some situations, we want to configure client-side execution (training," +" evaluation) from the server-side. One example for that is the server " +"asking the clients to train for a certain number of local epochs. Flower " +"provides a way to send configuration values from the server to the " +"clients using a dictionary. Let's look at an example where the clients " +"receive values from the server through the ``config`` parameter in " +"``fit`` (``config`` is also available in ``evaluate``). The ``fit`` " +"method receives the configuration dictionary through the ``config`` " +"parameter and can then read values from this dictionary. In this example," +" it reads ``server_round`` and ``local_epochs`` and uses those values to " +"improve the logging and configure the number of local training epochs:" +msgstr "" +"在某些情况下,我们希望从服务器端配置客户端的执行(训练、评估)。其中一个例子就是服务器要求客户端训练一定数量的本地遍历。Flower " +"提供了一种使用字典从服务器向客户端发送配置值的方法。让我们来看一个例子:客户端通过 ``fit`` 中的 ``config`` " +"参数从服务器接收配置值(``evaluate`` 中也有 ``config`` 参数)。``fit`` 方法通过 ``config`` " +"参数接收配置字典,然后从字典中读取值。在本例中,它读取了 ``server_round`` 和 " +"``local_epochs``,并使用这些值来改进日志记录和配置本地训练遍历的数量:" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:952 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:546 msgid "" -"This is the final part of the Flower tutorial (for now!), congratulations! " -"You're now well equipped to understand the rest of the documentation. 
There " -"are many topics we didn't cover in the tutorial, we recommend the following " -"resources:" +"So how can we send this config dictionary from server to clients? The " +"built-in Flower Strategies provide way to do this, and it works similarly" +" to the way server-side evaluation works. We provide a function to the " +"strategy, and the strategy calls this function for every round of " +"federated learning:" msgstr "" -"这暂时是 Flower 教程的最后一部分,恭喜您!您现在已经具备了理解其余文档的能" -"力。本教程还有许多内容没有涉及,我们推荐您参考以下资源:" - -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:954 -msgid "`Read Flower Docs `__" -msgstr "阅读Flower文档 `__" +"那么,如何将配置字典从服务器发送到客户端呢?内置的 \"Flower策略\"(Flower " +"Strategies)提供了这样的方法,其工作原理与服务器端评估的工作原理类似。我们为策略提供一个函数,策略会在每一轮联邦学习中调用这个函数:" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:955 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:576 msgid "" -"`Check out Flower Code Examples `__" -msgstr "" -"查看 Flower 代码示例 `__" +"Next, we'll just pass this function to the FedAvg strategy before " +"starting the simulation:" +msgstr "接下来,我们只需在开始模拟前将此函数传递给 FedAvg 策略即可:" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:956 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:613 msgid "" -"`Use Flower Baselines for your research `__" +"As we can see, the client logs now include the current round of federated" +" learning (which they read from the ``config`` dictionary). We can also " +"configure local training to run for one epoch during the first and second" +" round of federated learning, and then for two epochs during the third " +"round." msgstr "" -"使用 \"Flower Baselines \"进行研究 `__" +"我们可以看到,客户端日志现在包含了当前一轮的联邦学习(从 ``config`` " +"字典中读取)。我们还可以将本地训练配置为在第一轮和第二轮联邦学习期间运行一个遍历,然后在第三轮联邦学习期间运行两个遍历。" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:957 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:615 msgid "" -"`Watch Flower Summit 2023 videos `__" +"Clients can also return arbitrary values to the server. To do so, they " +"return a dictionary from ``fit`` and/or ``evaluate``. We have seen and " +"used this concept throughout this notebook without mentioning it " +"explicitly: our ``FlowerClient`` returns a dictionary containing a custom" +" key/value pair as the third return value in ``evaluate``." msgstr "" -"观看 2023 年Flower峰会视频 `__" - -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:9 -msgid "Get started with Flower" -msgstr "开始使用Flower" +"客户端还可以向服务器返回任意值。为此,它们会从 ``fit`` 和/或 ``evaluate`` " +"返回一个字典。我们在本笔记中看到并使用了这一概念,但并未明确提及:我们的 ``FlowerClient`` 返回一个包含自定义键/值对的字典,作为" +" ``evaluate`` 中的第三个返回值。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:11 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:11 -msgid "Welcome to the Flower federated learning tutorial!" -msgstr "欢迎阅读Flower联邦学习教程!" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:627 +msgid "Scaling federated learning" +msgstr "扩大联邦学习的规模" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:13 -#, fuzzy +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:629 msgid "" -"In this notebook, we'll build a federated learning system using Flower, " -"`Flower Datasets `__ and PyTorch. In part " -"1, we use PyTorch for the model training pipeline and data loading. In part " -"2, we continue to federate the PyTorch-based pipeline using Flower." 
-msgstr "" -"在本笔记中,我们将使用 Flower 和 PyTorch 构建一个联邦学习系统。在第一部分中," -"我们使用 PyTorch 进行模型训练和数据加载。在第二部分中,我们将继续使用 Flower " -"联邦化基于 PyTorch 的框架。" +"As a last step in this notebook, let's see how we can use Flower to " +"experiment with a large number of clients." +msgstr "作为本笔记的最后一步,让我们看看如何使用 Flower 对大量客户端进行实验。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:17 -msgid "Let's get stated!" -msgstr "让我们开始吧!" +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:651 +#, python-format +msgid "" +"We now have 1000 partitions, each holding 45 training and 5 validation " +"examples. Given that the number of training examples on each client is " +"quite small, we should probably train the model a bit longer, so we " +"configure the clients to perform 3 local training epochs. We should also " +"adjust the fraction of clients selected for training during each round " +"(we don't want all 1000 clients participating in every round), so we " +"adjust ``fraction_fit`` to ``0.05``, which means that only 5% of " +"available clients (so 50 clients) will be selected for training each " +"round:" +msgstr "" +"现在我们有 1000 个分区,每个分区有 45 个训练数据和 5 " +"个验证数据。鉴于每个客户端上的训练示例数量较少,我们可能需要对模型进行更长时间的训练,因此我们将客户端配置为执行 3 " +"个本地训练遍历。我们还应该调整每轮训练中被选中的客户端的比例(我们不希望每轮训练都有 1000 个客户端参与),因此我们将 " +"``fraction_fit`` 调整为 ``0.05``,这意味着每轮训练只选中 5%的可用客户端(即 50 个客户端):" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:31 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:699 msgid "" -"Before we begin with any actual code, let's make sure that we have " -"everything we need." -msgstr "在开始编写实际代码之前,让我们先确保我们已经准备好了所需的一切。" +"In this notebook, we've seen how we can gradually enhance our system by " +"customizing the strategy, initializing parameters on the server side, " +"choosing a different strategy, and evaluating models on the server-side. " +"That's quite a bit of flexibility with so little code, right?" +msgstr "在本笔记中,我们看到了如何通过自定义策略、在服务器端初始化参数、选择不同的策略以及在服务器端评估模型来逐步增强我们的系统。用这么少的代码就能实现这么大的灵活性,不是吗?" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:45 -#, fuzzy +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:701 msgid "" -"Next, we install the necessary packages for PyTorch (``torch`` and " -"``torchvision``), Flower Datasets (``flwr-datasets``) and Flower (``flwr``):" +"In the later sections, we've seen how we can communicate arbitrary values" +" between server and clients to fully customize client-side execution. " +"With that capability, we built a large-scale Federated Learning " +"simulation using the Flower Virtual Client Engine and ran an experiment " +"involving 1000 clients in the same workload - all in a Jupyter Notebook!" msgstr "" -"接下来,我们为 PyTorch(`torch`` 和`torchvision``)和 Flower(`flwr`)安装必" -"要的软件包:" +"在后面的章节中,我们将看到如何在服务器和客户端之间传递任意值,以完全自定义客户端执行。有了这种能力,我们使用 Flower " +"虚拟客户端引擎构建了一个大规模的联邦学习模拟,并在 Jupyter Notebook 中进行了一次实验,在相同的工作负载中运行了 1000 " +"个客户端!" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:105 -#, fuzzy +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:719 msgid "" -"It is possible to switch to a runtime that has GPU acceleration enabled (on " -"Google Colab: ``Runtime > Change runtime type > Hardware accelerator: GPU > " -"Save``). Note, however, that Google Colab is not always able to offer GPU " -"acceleration. 
If you see an error related to GPU availability in one of the " -"following sections, consider switching back to CPU-based execution by " -"setting ``DEVICE = torch.device(\"cpu\")``. If the runtime has GPU " -"acceleration enabled, you should see the output ``Training on cuda``, " -"otherwise it'll say ``Training on cpu``." +"The `Flower Federated Learning Tutorial - Part 3 " +"`__ shows how to build a fully custom ``Strategy`` from " +"scratch." msgstr "" -"可以切换到已启用 GPU 加速的运行时(在 Google Colab 上: 运行时 > 更改运行时类" -"型 > 硬件加速: GPU > 保存``)。但请注意,Google Colab 并非总能提供 GPU 加" -"速。如果在以下部分中看到与 GPU 可用性相关的错误,请考虑通过设置 ``DEVICE = " -"torch.device(\"cpu\")`` 切回基于 CPU 的执行。如果运行时已启用 GPU 加速,你应" -"该会看到输出``Training on cuda``,否则会显示``Training on cpu``。" +"`Flower 联邦学习教程 - 第 3 部分 `__ 展示了如何从头开始构建完全自定义的 \"策略\"。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:118 -msgid "Loading the data" -msgstr "加载数据" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:9 +msgid "What is Federated Learning?" +msgstr "什么是联邦学习?" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:120 -#, fuzzy +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:13 msgid "" -"Federated learning can be applied to many different types of tasks across " -"different domains. In this tutorial, we introduce federated learning by " -"training a simple convolutional neural network (CNN) on the popular CIFAR-10 " -"dataset. CIFAR-10 can be used to train image classifiers that distinguish " -"between images from ten different classes: 'airplane', 'automobile', 'bird', " -"'cat', 'deer', 'dog', 'frog', 'horse', 'ship', and 'truck'." +"In this tutorial, you will learn what federated learning is, build your " +"first system in Flower, and gradually extend it. If you work through all " +"parts of the tutorial, you will be able to build advanced federated " +"learning systems that approach the current state of the art in the field." msgstr "" -"联邦学习可应用于不同领域的多种不同类型任务。在本教程中,我们将通过在流行的 " -"CIFAR-10 数据集上训练一个简单的卷积神经网络 (CNN) 来介绍联合学习。CIFAR-10 可" -"用于训练图像分类器,以区分来自十个不同类别的图像:" - -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:131 -msgid "" -"We simulate having multiple datasets from multiple organizations (also " -"called the \"cross-silo\" setting in federated learning) by splitting the " -"original CIFAR-10 dataset into multiple partitions. Each partition will " -"represent the data from a single organization. We're doing this purely for " -"experimentation purposes, in the real world there's no need for data " -"splitting because each organization already has their own data (so the data " -"is naturally partitioned)." -msgstr "" -"我们通过将原始 CIFAR-10 数据集拆分成多个分区来模拟来自多个组织的多个数据集" -"(也称为联邦学习中的 \"跨分区 \"设置)。每个分区代表一个组织的数据。我们这样" -"做纯粹是为了实验目的,在现实世界中不需要拆分数据,因为每个组织都已经有了自己" -"的数据(所以数据是自然分区的)。" +"在本教程中,你将了解什么是联邦学习,用 Flower " +"搭建第一个系统,并逐步对其进行扩展。如果你能完成本教程的所有部分,你就能构建高级的联邦学习系统,从而接近该领域当前的技术水平。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:133 -#, fuzzy +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:15 msgid "" -"Each organization will act as a client in the federated learning system. So " -"having ten organizations participate in a federation means having ten " -"clients connected to the federated learning server." -msgstr "" -"每个组织都将充当联邦学习系统中的客户端。因此,有十个组织参与联邦学习,就意味" -"着有十个客户端连接到联邦学习服务器:" +"🧑‍🏫 This tutorial starts at zero and expects no familiarity with " +"federated learning. Only a basic understanding of data science and Python" +" programming is assumed." 
+msgstr "🧑‍🏫 本教程从零开始,不要求熟悉联邦学习。仅假定对数据科学和 Python 编程有基本了解。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:144 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:17 msgid "" -"Let's now create the Federated Dataset abstraction that from ``flwr-" -"datasets`` that partitions the CIFAR-10. We will create small training and " -"test set for each edge device and wrap each of them into a PyTorch " -"``DataLoader``:" +"`Star Flower on GitHub `__ ⭐️ and join " +"the open-source Flower community on Slack to connect, ask questions, and " +"get help: `Join Slack `__ 🌼 We'd love to " +"hear from you in the ``#introductions`` channel! And if anything is " +"unclear, head over to the ``#questions`` channel." msgstr "" +"`Star Flower on GitHub `__ ⭐️ 并加入 Slack " +"上的开源 Flower 社区,进行交流、提问并获得帮助: 加入 Slack `__ 🌼" +" 我们希望在 ``#introductions`` 频道听到您的声音!如果有任何不清楚的地方,请访问 ``#questions`` 频道。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:198 -msgid "" -"We now have a list of ten training sets and ten validation sets " -"(``trainloaders`` and ``valloaders``) representing the data of ten different " -"organizations. Each ``trainloader``/``valloader`` pair contains 4500 " -"training examples and 500 validation examples. There's also a single " -"``testloader`` (we did not split the test set). Again, this is only " -"necessary for building research or educational systems, actual federated " -"learning systems have their data naturally distributed across multiple " -"partitions." -msgstr "" -"现在,我们有一个包含十个训练集和十个验证集(`trainloaders`` 和`valloaders``)" -"的列表,代表十个不同组织的数据。每对 ``trainloader``/``valloader`` 都包含 " -"4500 个训练示例和 500 个验证数据。还有一个单独的 ``测试加载器``(我们没有拆分" -"测试集)。同样,这只有在构建研究或教育系统时才有必要,实际的联邦学习系统的数" -"据自然分布在多个分区中。" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:31 +msgid "Classic machine learning" +msgstr "经典机器学习" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:201 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:33 msgid "" -"Let's take a look at the first batch of images and labels in the first " -"training set (i.e., ``trainloaders[0]``) before we move on:" -msgstr "" -"在继续之前,让我们先看看第一个训练集中的第一批图像和标签(即 " -"``trainloaders[0]``):" +"Before we begin to discuss federated learning, let us quickly recap how " +"most machine learning works today." +msgstr "在开始讨论联邦学习之前,让我们先快速回顾一下目前大多数机器学习的工作原理。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:240 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:35 msgid "" -"The output above shows a random batch of images from the first " -"``trainloader`` in our list of ten ``trainloaders``. It also prints the " -"labels associated with each image (i.e., one of the ten possible labels " -"we've seen above). If you run the cell again, you should see another batch " -"of images." -msgstr "" -"上面的输出显示了来自十个 \"trainloader \"列表中第一个 \"trainloader \"的随机" -"图像。它还打印了与每幅图像相关的标签(即我们上面看到的十个可能标签之一)。如" -"果您再次运行该单元,应该会看到另一批图像。" - -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:252 -msgid "Step 1: Centralized Training with PyTorch" -msgstr "步骤 1:使用 PyTorch 进行集中训练" +"In machine learning, we have a model, and we have data. The model could " +"be a neural network (as depicted here), or something else, like classical" +" linear regression." 
+msgstr "在机器学习中,我们有一个模型和数据。模型可以是一个神经网络(如图所示),也可以是其他东西,比如经典的线性回归。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:263 -msgid "" -"Next, we're going to use PyTorch to define a simple convolutional neural " -"network. This introduction assumes basic familiarity with PyTorch, so it " -"doesn't cover the PyTorch-related aspects in full detail. If you want to " -"dive deeper into PyTorch, we recommend `DEEP LEARNING WITH PYTORCH: A 60 " -"MINUTE BLITZ `__." +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:41 +msgid "|d8bf04f23d9b46d8a23cc6f4887d7873|" msgstr "" -"接下来,我们将使用 PyTorch 来定义一个简单的卷积神经网络。本介绍假定您对 " -"PyTorch 有基本的了解,因此不会详细介绍与 PyTorch 相关的内容。如果你想更深入地" -"了解 PyTorch,我们推荐你阅读 `DEEP LEARNING WITH PYTORCH: a 60 minute blitz " -"`__。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:275 -msgid "Defining the model" -msgstr "定义模型" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:109 +msgid "Model and data" +msgstr "模型和数据" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:277 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:47 msgid "" -"We use the simple CNN described in the `PyTorch tutorial `__:" -msgstr "" -"我们使用` PyTorch 教程 `__ 中描述的简单 " -"CNN:" +"We train the model using the data to perform a useful task. A task could " +"be to detect objects in images, transcribe an audio recording, or play a " +"game like Go." +msgstr "我们使用数据来训练模型,以完成一项有用的任务。任务可以是检测图像中的物体、转录音频或玩围棋等游戏。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:314 -msgid "Let's continue with the usual training and test functions:" -msgstr "让我们继续进行常规的训练和测试功能:" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:53 +msgid "|5aa1711387d74d0f8b9c499e1a51627e|" +msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:374 -msgid "Training the model" -msgstr "训练模型" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:111 +msgid "Train model using data" +msgstr "使用数据训练模型" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:376 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:59 msgid "" -"We now have all the basic building blocks we need: a dataset, a model, a " -"training function, and a test function. Let's put them together to train the " -"model on the dataset of one of our organizations (``trainloaders[0]``). This " -"simulates the reality of most machine learning projects today: each " -"organization has their own data and trains models only on this internal data:" -msgstr "" -"现在我们拥有了所需的所有基本构件:数据集、模型、训练函数和测试函数。让我们把" -"它们放在一起,在我们其中一个组织的数据集(``trainloaders[0]``)上训练模型。这" -"模拟了当今大多数机器学习项目的实际情况:每个组织都有自己的数据,并且只在这些" -"内部数据上训练模型:" +"Now, in practice, the training data we work with doesn't originate on the" +" machine we train the model on. It gets created somewhere else." +msgstr "实际上,我们使用的训练数据并不来自我们训练模型的机器。它是在其他地方创建的。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:406 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:61 msgid "" -"Training the simple CNN on our CIFAR-10 split for 5 epochs should result in " -"a test set accuracy of about 41%, which is not good, but at the same time, " -"it doesn't really matter for the purposes of this tutorial. The intent was " -"just to show a simplistic centralized training pipeline that sets the stage " -"for what comes next - federated learning!" 
+"It originates on a smartphone by the user interacting with an app, a car " +"collecting sensor data, a laptop receiving input via the keyboard, or a " +"smart speaker listening to someone trying to sing a song." +msgstr "它源于智能手机上用户与应用程序的交互、汽车上传感器数据的收集、笔记本电脑上键盘输入的接收,或者智能扬声器上某人试着唱的歌。" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:67 +msgid "|2bc8e069228d4873804061ff4a95048c|" msgstr "" -"在我们的 CIFAR-10 分片上对简单 CNN 进行 5 个遍历的训练后,测试集的准确率应为 " -"41%,这并不理想,但同时对本教程而言也并不重要。我们只是想展示一个简单的集中式" -"训练流程,为接下来的联邦学习做好铺垫!" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:418 -msgid "Step 2: Federated Learning with Flower" -msgstr "步骤 2:使用 Flower 联邦学习" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:113 +msgid "Data on a phone" +msgstr "手机上的数据" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:420 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:73 msgid "" -"Step 1 demonstrated a simple centralized training pipeline. All data was in " -"one place (i.e., a single ``trainloader`` and a single ``valloader``). Next, " -"we'll simulate a situation where we have multiple datasets in multiple " -"organizations and where we train a model over these organizations using " -"federated learning." +"What's also important to mention, this \"somewhere else\" is usually not " +"just one place, it's many places. It could be several devices all running" +" the same app. But it could also be several organizations, all generating" +" data for the same task." msgstr "" -"步骤 1 演示了一个简单的集中式训练流程。所有数据都在一个地方(即一个 " -"\"trainloader \"和一个 \"valloader\")。接下来,我们将模拟在多个组织中拥有多" -"个数据集的情况,并使用联邦学习在这些组织中训练一个模型。" - -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:432 -msgid "Updating model parameters" -msgstr "更新模型参数" +"值得一提的是,这个 \"其他地方 " +"\"通常不只是一个地方,而是很多地方。它可能是多个运行同一应用程序的设备。但也可能是多个组织,都在为同一任务生成数据。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:434 -msgid "" -"In federated learning, the server sends the global model parameters to the " -"client, and the client updates the local model with the parameters received " -"from the server. It then trains the model on the local data (which changes " -"the model parameters locally) and sends the updated/changed model parameters " -"back to the server (or, alternatively, it sends just the gradients back to " -"the server, not the full model parameters)." +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:79 +msgid "|c258488766324dc9a6807f0e7c4fd5f4|" msgstr "" -"在联邦学习中,服务器将全局模型参数发送给客户端,客户端根据从服务器接收到的参" -"数更新本地模型。然后,客户端根据本地数据对模型进行训练(在本地更改模型参" -"数),并将更新/更改后的模型参数发回服务器(或者,客户端只将梯度参数发回服务" -"器,而不是全部模型参数)。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:436 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:115 +msgid "Data is on many devices" +msgstr "数据存在于多种设备中" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:85 msgid "" -"We need two helper functions to update the local model with parameters " -"received from the server and to get the updated model parameters from the " -"local model: ``set_parameters`` and ``get_parameters``. The following two " -"functions do just that for the PyTorch model above." +"So to use machine learning, or any kind of data analysis, the approach " +"that has been used in the past was to collect all data on a central " +"server. This server can be somewhere in a data center, or somewhere in " +"the cloud." 
+msgstr "因此,要使用机器学习或任何类型的数据分析,过去使用的方法是在中央服务器上收集所有数据。这个服务器可以在数据中心的某个地方,也可以在云端的某个地方。" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:91 +msgid "|d5f962c3f4ec48529efda980868c14b0|" msgstr "" -"我们需要两个辅助函数,用从服务器接收到的参数更新本地模型,并从本地模型获取更" -"新后的模型参数:`` set_parameters```和`get_parameters``。下面两个函数就是为上" -"面的 PyTorch 模型做这些工作的。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:438 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:117 +msgid "Central data collection" +msgstr "集中数据收集" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:97 msgid "" -"The details of how this works are not really important here (feel free to " -"consult the PyTorch documentation if you want to learn more). In essence, we " -"use ``state_dict`` to access PyTorch model parameter tensors. The parameter " -"tensors are then converted to/from a list of NumPy ndarray's (which Flower " -"knows how to serialize/deserialize):" +"Once all the data is collected in one place, we can finally use machine " +"learning algorithms to train our model on the data. This is the machine " +"learning approach that we've basically always relied on." +msgstr "一旦所有数据都收集到一处,我们最终就可以使用机器学习算法在数据上训练我们的模型。这就是我们基本上一直依赖的机器学习方法。" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:103 +msgid "|a5eccea18d4c43a68b54b65043cabef8|" msgstr "" -"在这里,如何工作的细节并不重要(如果你想了解更多,请随时查阅 PyTorch 文档)。" -"本质上,我们使用 ``state_dict`` 访问 PyTorch 模型参数张量。然后,参数张量会被" -"转换成/转换成 NumPy ndarray 列表(Flower 知道如何序列化/反序列化):" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:466 -msgid "Implementing a Flower client" -msgstr "实现 Flower 客户端" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:119 +msgid "Central model training" +msgstr "集中模型训练" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:468 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:130 +msgid "Challenges of classical machine learning" +msgstr "经典机器学习面临的挑战" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:132 msgid "" -"With that out of the way, let's move on to the interesting part. Federated " -"learning systems consist of a server and multiple clients. In Flower, we " -"create clients by implementing subclasses of ``flwr.client.Client`` or " -"``flwr.client.NumPyClient``. We use ``NumPyClient`` in this tutorial because " -"it is easier to implement and requires us to write less boilerplate." +"The classic machine learning approach we've just seen can be used in some" +" cases. Great examples include categorizing holiday photos, or analyzing " +"web traffic. Cases, where all the data is naturally available on a " +"centralized server." +msgstr "我们刚刚看到的经典机器学习方法可以在某些情况下使用。很好的例子包括对假日照片进行分类或分析网络流量。在这些案例中,所有数据自然都可以在中央服务器上获得。" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:138 +msgid "|f17662f7df2d42f68cac70a1fdeda8a7|" msgstr "" -"说完这些,让我们进入有趣的部分。联邦学习系统由一个服务器和多个客户端组成。在 " -"Flower 中,我们通过实现 ``flwr.client.Client`` 或 ``flwr.client." -"NumPyClient`` 的子类来创建客户端。在本教程中,我们使用``NumPyClient``,因为它" -"更容易实现,需要我们编写的模板也更少。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:470 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:173 +msgid "Centralized possible" +msgstr "可集中管理" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:144 msgid "" -"To implement the Flower client, we create a subclass of ``flwr.client." 
-"NumPyClient`` and implement the three methods ``get_parameters``, ``fit``, " -"and ``evaluate``:" +"But the approach can not be used in many other cases. Cases, where the " +"data is not available on a centralized server, or cases where the data " +"available on one server is not enough to train a good model." +msgstr "但这种方法并不适用于许多其他情况。例如,集中服务器上没有数据,或者一台服务器上的数据不足以训练出一个好的模型。" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:150 +msgid "|241fc906441a4f038c625a19d30d01b2|" msgstr "" -"为实现 Flower 客户端,我们创建了 ``flwr.client.NumPyClient`` 的子类,并实现" -"了 ``get_parameters``、``fit`` 和``evaluate`` 三个方法:" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:472 -msgid "``get_parameters``: Return the current local model parameters" -msgstr "``get_parameters``: 返回当前本地模型参数" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:175 +msgid "Centralized impossible" +msgstr "无法集中" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:473 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:156 msgid "" -"``fit``: Receive model parameters from the server, train the model " -"parameters on the local data, and return the (updated) model parameters to " -"the server" -msgstr "" -"``fit``: 从服务器接收模型参数,在本地数据上训练模型参数,并将(更新的)模型" -"参数返回服务器" +"There are many reasons why the classic centralized machine learning " +"approach does not work for a large number of highly important real-world " +"use cases. Those reasons include:" +msgstr "传统的集中式机器学习方法无法满足现实世界中大量极为重要的使用案例,原因有很多。这些原因包括:" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:474 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:158 msgid "" -"``evaluate``: Receive model parameters from the server, evaluate the model " -"parameters on the local data, and return the evaluation result to the server" +"**Regulations**: GDPR (Europe), CCPA (California), PIPEDA (Canada), LGPD " +"(Brazil), PDPL (Argentina), KVKK (Turkey), POPI (South Africa), FSS " +"(Russia), CDPR (China), PDPB (India), PIPA (Korea), APPI (Japan), PDP " +"(Indonesia), PDPA (Singapore), APP (Australia), and other regulations " +"protect sensitive data from being moved. In fact, those regulations " +"sometimes even prevent single organizations from combining their own " +"users' data for artificial intelligence training because those users live" +" in different parts of the world, and their data is governed by different" +" data protection regulations." msgstr "" -"``evaluate ``: 从服务器接收模型参数,在本地数据上评估模型参数,并将评估结果" -"返回服务器" +"**法规**: " +"GDPR(欧洲)、CCPA(加利福尼亚)、PIPEDA(加拿大)、LGPD(巴西)、PDPL(阿根廷)、KVKK(土耳其)、POPI(南非)、FSS(俄罗斯)、CDPR(中国)、PDPB(印度)、PIPA(韩国)、APPI(日本)、PDP(印度尼西亚)、PDPA(新加坡)、APP(澳大利亚)等法规保护敏感数据不被移动。事实上,这些法规有时甚至会阻止单个组织将自己的用户数据用于人工智能培训,因为这些用户生活在世界不同地区,他们的数据受不同的数据保护法规管辖。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:476 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:160 msgid "" -"We mentioned that our clients will use the previously defined PyTorch " -"components for model training and evaluation. Let's see a simple Flower " -"client implementation that brings everything together:" +"**User preference**: In addition to regulation, there are use cases where" +" users just expect that no data leaves their device, ever. If you type " +"your passwords and credit card info into the digital keyboard of your " +"phone, you don't expect those passwords to end up on the server of the " +"company that developed that keyboard, do you? 
In fact, that use case was " +"the reason federated learning was invented in the first place." msgstr "" -"我们提到,我们的客户端将使用之前定义的 PyTorch 组件进行模型训练和评估。让我们" -"来看看一个简单的 Flower 客户端实现,它将一切都整合在一起:" +"**用户偏好**: " +"除了法规之外,在一些使用案例中,用户只是希望数据永远不会离开他们的设备。如果你在手机的数字键盘上输入密码和信用卡信息,你不会希望这些密码最终出现在开发该键盘的公司的服务器上吧?事实上,这种用例正是联邦学习发明的初衷。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:513 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:161 msgid "" -"Our class ``FlowerClient`` defines how local training/evaluation will be " -"performed and allows Flower to call the local training/evaluation through " -"``fit`` and ``evaluate``. Each instance of ``FlowerClient`` represents a " -"*single client* in our federated learning system. Federated learning systems " -"have multiple clients (otherwise, there's not much to federate), so each " -"client will be represented by its own instance of ``FlowerClient``. If we " -"have, for example, three clients in our workload, then we'd have three " -"instances of ``FlowerClient``. Flower calls ``FlowerClient.fit`` on the " -"respective instance when the server selects a particular client for training " -"(and ``FlowerClient.evaluate`` for evaluation)." -msgstr "" -"我们的类 ``FlowerClient`` 定义了本地训练/评估的执行方式,并允许 Flower 通过 " -"``fit`` 和 ``evaluate`` 调用本地训练/评估。每个 ``FlowerClient`` 实例都代表联" -"邦学习系统中的*单个客户端*。联邦学习系统有多个客户端(否则就没有什么可联邦" -"的),因此每个客户端都将由自己的 ``FlowerClient`` 实例来代表。例如,如果我们" -"的工作负载中有三个客户端,那么我们就会有三个 ``FlowerClient`` 实例。当服务器" -"选择特定客户端进行训练时,Flower 会调用相应实例上的 ``FlowerClient.fit`` (评" -"估时调用 ``FlowerClient.evaluate``)。" +"**Data volume**: Some sensors, like cameras, produce such a high data " +"volume that it is neither feasible nor economic to collect all the data " +"(due to, for example, bandwidth or communication efficiency). Think about" +" a national rail service with hundreds of train stations across the " +"country. If each of these train stations is outfitted with a number of " +"security cameras, the volume of raw on-device data they produce requires " +"incredibly powerful and exceedingly expensive infrastructure to process " +"and store. And most of the data isn't even useful." +msgstr "" +"**数据量**: " +"有些传感器(如摄像头)产生的数据量很大,收集所有数据既不可行,也不经济(例如,由于带宽或通信效率的原因)。试想一下全国铁路服务,全国有数百个火车站。如果每个火车站都安装了许多安全摄像头,那么它们所产生的大量原始设备数据就需要功能强大且极其昂贵的基础设施来处理和存储。而大部分数据甚至都是无用的。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:517 -msgid "Using the Virtual Client Engine" -msgstr "使用虚拟客户端引擎" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:164 +msgid "Examples where centralized machine learning does not work include:" +msgstr "集中式机器学习不起作用的例子包括:" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:519 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:166 msgid "" -"In this notebook, we want to simulate a federated learning system with 10 " -"clients on a single machine. This means that the server and all 10 clients " -"will live on a single machine and share resources such as CPU, GPU, and " -"memory. Having 10 clients would mean having 10 instances of ``FlowerClient`` " -"in memory. Doing this on a single machine can quickly exhaust the available " -"memory resources, even if only a subset of these clients participates in a " -"single round of federated learning." 
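Pulling the client-side pieces described above together, here is a compressed sketch assuming Flower's ``NumPyClient`` and the tutorial's ``trainloaders``/``valloaders`` lists and ``train``/``test`` helpers, which are defined elsewhere in the notebook and are therefore only referenced, not reimplemented:

```python
from collections import OrderedDict
from typing import List

import flwr as fl
import numpy as np
import torch
import torch.nn as nn


def build_model() -> nn.Module:
    # Stand-in for the tutorial's CNN.
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))


def get_parameters(net: nn.Module) -> List[np.ndarray]:
    # state_dict tensors -> list of NumPy ndarrays (what NumPyClient exchanges)
    return [val.cpu().numpy() for _, val in net.state_dict().items()]


def set_parameters(net: nn.Module, parameters: List[np.ndarray]) -> None:
    # list of NumPy ndarrays -> state_dict tensors
    params_dict = zip(net.state_dict().keys(), parameters)
    state_dict = OrderedDict({k: torch.tensor(v) for k, v in params_dict})
    net.load_state_dict(state_dict, strict=True)


class FlowerClient(fl.client.NumPyClient):
    def __init__(self, net, trainloader, valloader):
        self.net, self.trainloader, self.valloader = net, trainloader, valloader

    def get_parameters(self, config):
        return get_parameters(self.net)

    def fit(self, parameters, config):
        set_parameters(self.net, parameters)
        # train(self.net, self.trainloader, epochs=1)  # tutorial's train() assumed here
        return get_parameters(self.net), len(self.trainloader.dataset), {}

    def evaluate(self, parameters, config):
        set_parameters(self.net, parameters)
        # loss, accuracy = test(self.net, self.valloader)  # tutorial's test() assumed here
        loss, accuracy = 0.0, 0.0
        return float(loss), len(self.valloader.dataset), {"accuracy": float(accuracy)}


def client_fn(cid: str) -> FlowerClient:
    # Called by the Virtual Client Engine whenever a client instance is needed;
    # `cid` selects this client's data partition (trainloaders/valloaders assumed).
    net = build_model()
    return FlowerClient(net, trainloaders[int(cid)], valloaders[int(cid)])
```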
-msgstr "" -"在本笔记中,我们要模拟一个联邦学习系统,在一台机器上有 10 个客户端。这意味着" -"服务器和所有 10 个客户端都将位于一台机器上,并共享 CPU、GPU 和内存等资源。有 " -"10 个客户端就意味着内存中有 10 个 ``FlowerClient`` 实例。在单台机器上这样做会" -"很快耗尽可用的内存资源,即使这些客户端中只有一个子集参与了一轮联邦学习。" +"Sensitive healthcare records from multiple hospitals to train cancer " +"detection models" +msgstr "用多家医院的敏感医疗记录训练癌症检测模型" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:521 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:167 msgid "" -"In addition to the regular capabilities where server and clients run on " -"multiple machines, Flower, therefore, provides special simulation " -"capabilities that create ``FlowerClient`` instances only when they are " -"actually necessary for training or evaluation. To enable the Flower " -"framework to create clients when necessary, we need to implement a function " -"called ``client_fn`` that creates a ``FlowerClient`` instance on demand. " -"Flower calls ``client_fn`` whenever it needs an instance of one particular " -"client to call ``fit`` or ``evaluate`` (those instances are usually " -"discarded after use, so they should not keep any local state). Clients are " -"identified by a client ID, or short ``cid``. The ``cid`` can be used, for " -"example, to load different local data partitions for different clients, as " -"can be seen below:" -msgstr "" -"除了服务器和客户端在多台机器上运行的常规功能外,Flower 还提供了特殊的模拟功" -"能,即只有在训练或评估实际需要时才创建 ``FlowerClient`` 实例。为了让 Flower " -"框架能在必要时创建客户端,我们需要实现一个名为 ``client_fn`` 的函数,它能按需" -"创建一个 ``FlowerClient`` 实例。每当 Flower 需要一个特定的客户端实例来调用 " -"``fit`` 或 ``evaluate`` 时,它就会调用 ``client_fn``(这些实例在使用后通常会" -"被丢弃,因此它们不应保留任何本地状态)。客户端由一个客户端 ID 或简短的 " -"``cid`` 标识。例如,可以使用 ``cid`` 为不同的客户端加载不同的本地数据分区,如" -"下所示:" +"Financial information from different organizations to detect financial " +"fraud" +msgstr "不同组织的财务信息,以侦查财务欺诈行为" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:556 -msgid "Starting the training" -msgstr "开始训练" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:168 +msgid "Location data from your electric car to make better range prediction" +msgstr "通过电动汽车的定位数据更好地预测续航里程" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:558 -msgid "" -"We now have the class ``FlowerClient`` which defines client-side training/" -"evaluation and ``client_fn`` which allows Flower to create ``FlowerClient`` " -"instances whenever it needs to call ``fit`` or ``evaluate`` on one " -"particular client. The last step is to start the actual simulation using " -"``flwr.simulation.start_simulation``." -msgstr "" -"现在我们有了定义客户端训练/评估的类 ``FlowerClient`` 和允许 Flower 在需要调用" -"某个客户端的 ``fit` 或 ``evaluate` 时创建 ``FlowerClient`` 实例的 " -"``client_fn` 类。最后一步是使用 ``flwr.simulation.start_simulation`` 启动实际" -"模拟。" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:169 +msgid "End-to-end encrypted messages to train better auto-complete models" +msgstr "端到端加密信息可训练出更好的自动完成模型" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:560 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:171 msgid "" -"The function ``start_simulation`` accepts a number of arguments, amongst " -"them the ``client_fn`` used to create ``FlowerClient`` instances, the number " -"of clients to simulate (``num_clients``), the number of federated learning " -"rounds (``num_rounds``), and the strategy. The strategy encapsulates the " -"federated learning approach/algorithm, for example, *Federated Averaging* " -"(FedAvg)." 
+"The popularity of privacy-enhancing systems like the `Brave " +"`__ browser or the `Signal `__ " +"messenger shows that users care about privacy. In fact, they choose the " +"privacy-enhancing version over other alternatives, if such an alternative" +" exists. But what can we do to apply machine learning and data science to" +" these cases to utilize private data? After all, these are all areas that" +" would benefit significantly from recent advances in AI." msgstr "" -"函数 ``start_simulation`` 接受许多参数,其中包括用于创建 ``FlowerClient`` 实" -"例的 ``client_fn``、要模拟的客户端数量(``num_clients``)、联邦学习轮数" -"(``num_rounds``)和策略。策略封装了联邦学习方法/算法,例如*联邦平均* " -"(FedAvg)。" +"像 `Brave `__浏览器或 `Signal " +"`__信息管理器这样的隐私增强系统的流行表明,用户关心隐私。事实上,他们会选择隐私性更好的产品。但是,我们能做些什么来将机器学习和数据科学应用到这些情况中,以利用隐私数据呢?毕竟,这些领域都将从人工智能的最新进展中受益匪浅。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:562 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:186 +msgid "Federated learning" +msgstr "联邦学习" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:188 msgid "" -"Flower has a number of built-in strategies, but we can also use our own " -"strategy implementations to customize nearly all aspects of the federated " -"learning approach. For this example, we use the built-in ``FedAvg`` " -"implementation and customize it using a few basic parameters. The last step " -"is the actual call to ``start_simulation`` which - you guessed it - starts " -"the simulation:" -msgstr "" -"Flower 有许多内置策略,但我们也可以使用自己的策略实现来定制联邦学习方法的几乎" -"所有方面。在本例中,我们使用内置的 ``FedAvg`` 实现,并使用一些基本参数对其进" -"行定制。最后一步是实际调用 ``start_simulation``开始模拟:" +"Federated learning simply reverses this approach. It enables machine " +"learning on distributed data by moving the training to the data, instead " +"of moving the data to the training. Here's the single-sentence " +"explanation:" +msgstr "联邦学习简单地颠覆了这种方法。它通过将训练转移到数据上,而不是将数据转移到训练上,在分布式数据上实现机器学习。下面是一句话的解释:" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:608 -msgid "Behind the scenes" -msgstr "幕后" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:190 +msgid "Central machine learning: move the data to the computation" +msgstr "集中式机器学习:将数据转移到计算中心" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:610 -msgid "So how does this work? How does Flower execute this simulation?" -msgstr "那么它是如何工作的呢?Flower 如何进行模拟?" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:191 +msgid "Federated (machine) learning: move the computation to the data" +msgstr "联邦式(机器)学习:将计算转移到数据上" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:612 -#, python-format +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:193 msgid "" -"When we call ``start_simulation``, we tell Flower that there are 10 clients " -"(``num_clients=10``). Flower then goes ahead an asks the ``FedAvg`` strategy " -"to select clients. ``FedAvg`` knows that it should select 100% of the " -"available clients (``fraction_fit=1.0``), so it goes ahead and selects 10 " -"random clients (i.e., 100% of 10)." -msgstr "" -"当我们调用 ``start_simulation`` 时,我们会告诉 Flower 有 10 个客户" -"(`num_clients=10``)。然后,Flower 会要求 ``FedAvg`` 策略选择客户。" -"``FedAvg`` 知道它应该选择 100%的可用客户(``fraction_fit=1.0``),所以它会随" -"机选择 10 个客户(即 10 的 100%)。" +"By doing so, it enables us to use machine learning (and other data " +"science approaches) in areas where it wasn't possible before. We can now " +"train excellent medical AI models by enabling different hospitals to work" +" together. 
We can solve financial fraud by training AI models on the data" +" of different financial institutions. We can build novel privacy-" +"enhancing applications (such as secure messaging) that have better built-" +"in AI than their non-privacy-enhancing alternatives. And those are just a" +" few of the examples that come to mind. As we deploy federated learning, " +"we discover more and more areas that can suddenly be reinvented because " +"they now have access to vast amounts of previously inaccessible data." +msgstr "这样,我们就能在以前不可能的领域使用机器学习(和其他数据科学方法)。现在,我们可以通过让不同的医院协同工作来训练优秀的医疗人工智能模型。我们可以通过在不同金融机构的数据上训练人工智能模型来解决金融欺诈问题。我们可以构建新颖的隐私增强型应用(如安全信息),其内置的人工智能比非隐私增强型应用更好。以上只是我想到的几个例子。随着联邦学习的部署,我们会发现越来越多的领域可以突然重获新生,因为它们现在可以访问大量以前无法访问的数据。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:614 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:196 msgid "" -"Flower then asks the selected 10 clients to train the model. When the server " -"receives the model parameter updates from the clients, it hands those " -"updates over to the strategy (*FedAvg*) for aggregation. The strategy " -"aggregates those updates and returns the new global model, which then gets " -"used in the next round of federated learning." -msgstr "" -"然后,Flower 会要求选定的 10 个客户端对模型进行训练。服务器收到客户端的模型参" -"数更新后,会将这些更新交给策略(*FedAvg*)进行聚合。策略会聚合这些更新并返回" -"新的全局模型,然后将其用于下一轮联邦学习。" +"So how does federated learning work, exactly? Let's start with an " +"intuitive explanation." +msgstr "那么,联邦学习究竟是如何运作的呢?让我们从直观的解释开始。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:626 -msgid "Where's the accuracy?" -msgstr "准确度在哪里找?" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:199 +msgid "Federated learning in five steps" +msgstr "联邦学习的五个步骤" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:628 -msgid "" -"You may have noticed that all metrics except for ``losses_distributed`` are " -"empty. Where did the ``{\"accuracy\": float(accuracy)}`` go?" -msgstr "" -"您可能已经注意到,除了 ``losses_distributed`` 以外,所有指标都是空的。{\"准确" -"度\": float(准确度)}``去哪儿了?" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:202 +msgid "Step 0: Initialize global model" +msgstr "步骤 0:初始化全局模型" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:630 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:204 msgid "" -"Flower can automatically aggregate losses returned by individual clients, " -"but it cannot do the same for metrics in the generic metrics dictionary (the " -"one with the ``accuracy`` key). Metrics dictionaries can contain very " -"different kinds of metrics and even key/value pairs that are not metrics at " -"all, so the framework does not (and can not) know how to handle these " -"automatically." +"We start by initializing the model on the server. This is exactly the " +"same in classic centralized learning: we initialize the model parameters," +" either randomly or from a previously saved checkpoint." 
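A minimal sketch of such a weighted averaging function, handed to the strategy via ``evaluate_metrics_aggregation_fn``, assuming Flower's 1.x metric-aggregation hooks:

```python
from typing import List, Tuple

import flwr as fl
from flwr.common import Metrics


def weighted_average(metrics: List[Tuple[int, Metrics]]) -> Metrics:
    # Each entry is (num_examples, metrics_dict) as returned by a client's `evaluate`.
    accuracies = [num_examples * m["accuracy"] for num_examples, m in metrics]
    examples = [num_examples for num_examples, _ in metrics]
    return {"accuracy": sum(accuracies) / sum(examples)}


strategy = fl.server.strategy.FedAvg(
    evaluate_metrics_aggregation_fn=weighted_average,  # aggregate the clients' "accuracy"
)
```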
+msgstr "我们首先在服务器上初始化模型。这与经典的集中式学习完全相同:我们随机或从先前保存的检查点初始化模型参数。" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:210 +msgid "|0aa5aa05810b44b6a835cecce28f3137|" msgstr "" -"Flower 可以自动汇总单个客户端返回的损失值,但无法对通用度量字典中的度量进行同" -"样的处理(即带有 \"准确度 \"键的度量字典)。度量值字典可以包含非常不同种类的" -"度量值,甚至包含根本不是度量值的键/值对,因此框架不知道(也无法知道)如何自动" -"处理这些度量值。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:632 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:307 +msgid "Initialize global model" +msgstr "初始化全局模型" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:217 msgid "" -"As users, we need to tell the framework how to handle/aggregate these custom " -"metrics, and we do so by passing metric aggregation functions to the " -"strategy. The strategy will then call these functions whenever it receives " -"fit or evaluate metrics from clients. The two possible functions are " -"``fit_metrics_aggregation_fn`` and ``evaluate_metrics_aggregation_fn``." -msgstr "" -"作为用户,我们需要告诉框架如何处理/聚合这些自定义指标,为此,我们将指标聚合函" -"数传递给策略。然后,只要从客户端接收到拟合或评估指标,策略就会调用这些函数。" -"两个可能的函数是 ``fit_metrics_aggregation_fn`` 和 " -"``evaluate_metrics_aggregation_fn``。" +"Step 1: Send model to a number of connected organizations/devices (client" +" nodes)" +msgstr "第 1 步:将模型发送到多个连接的组织/设备(客户节点)" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:634 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:219 msgid "" -"Let's create a simple weighted averaging function to aggregate the " -"``accuracy`` metric we return from ``evaluate``:" +"Next, we send the parameters of the global model to the connected client " +"nodes (think: edge devices like smartphones or servers belonging to " +"organizations). This is to ensure that each participating node starts " +"their local training using the same model parameters. We often use only a" +" few of the connected nodes instead of all nodes. The reason for this is " +"that selecting more and more client nodes has diminishing returns." +msgstr "接下来,我们会将全局模型的参数发送到连接的客户端节点(如智能手机等边缘设备或企业的服务器)。这是为了确保每个参与节点都使用相同的模型参数开始本地训练。我们通常只使用几个连接节点,而不是所有节点。这样做的原因是,选择越来越多的客户端节点会导致收益递减。" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:225 +msgid "|c742940dd4bf4de09d8d0d5e8d179638|" msgstr "" -"让我们创建一个简单的加权平均函数来汇总从 ``evaluate`` 返回的 ``accuracy`` 指" -"标:" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:660 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:309 +msgid "Send global model" +msgstr "发送全局模型" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:232 msgid "" -"The only thing left to do is to tell the strategy to call this function " -"whenever it receives evaluation metric dictionaries from the clients:" -msgstr "" -"剩下要做的就是告诉策略,每当它从客户端接收到评估度量字典时,都要调用这个函" -"数:" +"Step 2: Train model locally on the data of each organization/device " +"(client node)" +msgstr "步骤 2:在本地对每个机构/设备(客户端节点)的数据进行模型训练" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:697 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:234 msgid "" -"We now have a full system that performs federated training and federated " -"evaluation. It uses the ``weighted_average`` function to aggregate custom " -"evaluation metrics and calculates a single ``accuracy`` metric across all " -"clients on the server side." +"Now that all (selected) client nodes have the latest version of the " +"global model parameters, they start the local training. 
They use their " +"own local dataset to train their own local model. They don't train the " +"model until full convergence, but they only train for a little while. " +"This could be as little as one epoch on the local data, or even just a " +"few steps (mini-batches)." msgstr "" -"我们现在有了一个完整的系统,可以执行联邦训练和联邦评估。它使用 " -"``weighted_average`` 函数汇总自定义评估指标,并在服务器端计算所有客户端的单" -"一 ``accuracy`` 指标。" +"现在,所有(选定的)客户端节点都有了最新版本的全局模型参数,它们开始进行本地训练。它们使用自己的本地数据集来训练自己的本地模型。它们不会一直训练到模型完全收敛为止,而只是训练一小段时间。这可能只是本地数据上的一个遍历,甚至只是几个步骤" +"(mini-batches)。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:699 -msgid "" -"The other two categories of metrics (``losses_centralized`` and " -"``metrics_centralized``) are still empty because they only apply when " -"centralized evaluation is being used. Part two of the Flower tutorial will " -"cover centralized evaluation." +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:240 +msgid "|1f169ab4601a47e1a226f1628f4ebddb|" msgstr "" -"其他两类指标(`losses_centralized`` 和 `metrics_centralized`)仍然是空的,因" -"为它们只适用于集中评估。Flower 教程的第二部分将介绍集中式评估。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:711 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:351 -msgid "Final remarks" -msgstr "结束语" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:311 +msgid "Train on local data" +msgstr "根据本地数据进行训练" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:713 -msgid "" -"Congratulations, you just trained a convolutional neural network, federated " -"over 10 clients! With that, you understand the basics of federated learning " -"with Flower. The same approach you've seen can be used with other machine " -"learning frameworks (not just PyTorch) and tasks (not just CIFAR-10 images " -"classification), for example NLP with Hugging Face Transformers or speech " -"with SpeechBrain." -msgstr "" -"恭喜您,你刚刚训练了一个由 10 个客户端组成的卷积神经网络!这样,你就了解了使" -"用 Flower 进行联邦学习的基础知识。你所看到的方法同样适用于其他机器学习框架" -"(不只是 PyTorch)和任务(不只是 CIFAR-10 图像分类),例如使用 Hugging Face " -"Transformers 的 NLP 或使用 SpeechBrain 的语音。" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:247 +msgid "Step 3: Return model updates back to the server" +msgstr "步骤 3:将模型参数更新返回服务器" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:715 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:249 msgid "" -"In the next notebook, we're going to cover some more advanced concepts. Want " -"to customize your strategy? Initialize parameters on the server side? Or " -"evaluate the aggregated model on the server side? We'll cover all this and " -"more in the next tutorial." -msgstr "" -"在下一个笔记中,我们将介绍一些更先进的概念。想定制你的策略吗?在服务器端初始" -"化参数?或者在服务器端评估聚合模型?我们将在下一个教程中介绍所有这些内容以及" -"更多。" +"After local training, each client node has a slightly different version " +"of the model parameters they originally received. The parameters are all " +"different because each client node has different examples in its local " +"dataset. The client nodes then send those model updates back to the " +"server. The model updates they send can either be the full model " +"parameters or just the gradients that were accumulated during local " +"training." 
+msgstr "经过本地训练后,每个客户节点最初收到的模型参数都会略有不同。参数之所以不同,是因为每个客户端节点的本地数据集中都有不同的数据。然后,客户端节点将这些模型更新发回服务器。它们发送的模型更新既可以是完整的模型参数,也可以只是本地训练过程中积累的梯度。" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:733 -msgid "" -"The `Flower Federated Learning Tutorial - Part 2 `__ goes " -"into more depth about strategies and all the advanced things you can build " -"with them." +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:255 +msgid "|12cfa9cde14440ecb8c8f6c1d7185bec|" msgstr "" -"`Flower 联邦学习教程 - 第 2 部分 `__ 更深入地介绍了策略以及可以" -"使用策略构建的所有高级功能。" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:9 -msgid "Use a federated learning strategy" -msgstr "使用联邦学习策略" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:313 +msgid "Send model updates" +msgstr "发送模型参数更新" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:11 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:262 +msgid "Step 4: Aggregate model updates into a new global model" +msgstr "步骤 4:将模型更新聚合到新的全局模型中" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:264 msgid "" -"Welcome to the next part of the federated learning tutorial. In previous " -"parts of this tutorial, we introduced federated learning with PyTorch and " -"Flower (`part 1 `__)." +"The server receives model updates from the selected client nodes. If it " +"selected 100 client nodes, it now has 100 slightly different versions of " +"the original global model, each trained on the local data of one client. " +"But didn't we want to have one model that contains the learnings from the" +" data of all 100 client nodes?" msgstr "" -"欢迎来到联邦学习教程的下一部分。在本教程的前几部分,我们介绍了使用 PyTorch " -"和 Flower 进行联邦学习(`第 1 部分 `___)。" +"服务器从选定的客户端节点接收模型更新。如果服务器选择了 100 个客户端节点,那么它现在就拥有 100 " +"个略有不同的原始全局模型版本,每个版本都是根据一个客户端的本地数据训练出来的。难道我们不希望有一个包含所有 100 个客户节点数据的模型吗?" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:13 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:266 msgid "" -"In this notebook, we'll begin to customize the federated learning system we " -"built in the introductory notebook (again, using `Flower `__ and `PyTorch `__)." +"In order to get one single model, we have to combine all the model " +"updates we received from the client nodes. This process is called " +"*aggregation*, and there are many different ways to do it. The most basic" +" way to do it is called *Federated Averaging* (`McMahan et al., 2016 " +"`__), often abbreviated as *FedAvg*. " +"*FedAvg* takes the 100 model updates and, as the name suggests, averages " +"them. To be more precise, it takes the *weighted average* of the model " +"updates, weighted by the number of examples each client used for " +"training. The weighting is important to make sure that each data example " +"has the same \"influence\" on the resulting global model. If one client " +"has 10 examples, and another client has 100 examples, then - without " +"weighting - each of the 10 examples would influence the global model ten " +"times as much as each of the 100 examples." 
+msgstr "" +"为了得到一个单一的模型,我们必须将从客户端节点收到的所有模型更新合并起来。这个过程称为*聚合*,有许多不同的方法。最基本的方法称为 " +"*Federated Averaging* (`McMahan等人,2016 " +"`__),通常缩写为*FedAvg*。*FedAvg* 可以把100 " +"个模型更新进行平均。更准确地说,它取的是模型更新的*加权平均值*,根据每个客户端用于训练的数据数量进行加权。加权对于确保每个数据示例对生成的全局模型具有相同的" +" \"影响 \"非常重要。如果一个客户端有 10 个数据点,而另一个客户有 100 个数据点,那么在不加权的情况下,10 个示例对全局模型的影响是" +" 100 个示例的 10 倍。" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:273 +msgid "|72939caf6e294b0986fee6dde96614d7|" msgstr "" -"在本笔记中,我们将开始定制在入门笔记中构建的联邦学习系统(再次使用 `Flower " -"`__ 和 `PyTorch `__)。" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:17 -msgid "Let's move beyond FedAvg with Flower strategies!" -msgstr "让我们超越 FedAvg,采用Flower策略!" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:315 +msgid "Aggregate model updates" +msgstr "聚合模型参数更新" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:309 -msgid "Strategy customization" -msgstr "策略定制" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:280 +msgid "Step 5: Repeat steps 1 to 4 until the model converges" +msgstr "步骤 5:重复步骤 1 至 4,直至模型收敛" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:311 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:282 msgid "" -"So far, everything should look familiar if you've worked through the " -"introductory notebook. With that, we're ready to introduce a number of new " -"features." +"Steps 1 to 4 are what we call a single round of federated learning. The " +"global model parameters get sent to the participating client nodes (step " +"1), the client nodes train on their local data (step 2), they send their " +"updated models to the server (step 3), and the server then aggregates the" +" model updates to get a new version of the global model (step 4)." msgstr "" -"到目前为止,如果您已经阅读过入门笔记本,那么一切都应该很熟悉了。接下来,我们" -"将介绍一些新功能。" +"步骤 1 至 4 就是我们所说的单轮联邦学习。全局模型参数被发送到参与的客户端节点(第 1 步),客户端节点对其本地数据进行训练(第 2 " +"步),然后将更新后的模型发送到服务器(第 3 步),服务器汇总模型更新,得到新版本的全局模型(第 4 步)。" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:323 -msgid "Server-side parameter **initialization**" -msgstr "服务器端参数 **初始化**" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:284 +msgid "" +"During a single round, each client node that participates in that " +"iteration only trains for a little while. This means that after the " +"aggregation step (step 4), we have a model that has been trained on all " +"the data of all participating client nodes, but only for a little while. " +"We then have to repeat this training process over and over again to " +"eventually arrive at a fully trained model that performs well across the " +"data of all client nodes." +msgstr "" +"在一轮迭代中,每个参与迭代的客户节点只训练一小段时间。这意味着,在聚合步骤(步骤 " +"4)之后,我们的模型已经在所有参与的客户节点的所有数据上训练过了,但只训练了一小会儿。然后,我们必须一次又一次地重复这一训练过程,最终得到一个经过全面训练的模型,该模型在所有客户节点的数据中都表现良好。" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:325 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:289 msgid "" -"Flower, by default, initializes the global model by asking one random client " -"for the initial parameters. In many cases, we want more control over " -"parameter initialization though. Flower therefore allows you to directly " -"pass the initial parameters to the Strategy:" +"Congratulations, you now understand the basics of federated learning. " +"There's a lot more to discuss, of course, but that was federated learning" +" in a nutshell. 
In later parts of this tutorial, we will go into more " +"detail. Interesting questions include: How can we select the best client " +"nodes that should participate in the next round? What's the best way to " +"aggregate model updates? How can we handle failing client nodes " +"(stragglers)?" msgstr "" -"默认情况下,Flower 会通过向一个随机客户端询问初始参数来初始化全局模型。但在许" -"多情况下,我们需要对参数初始化进行更多控制。因此,Flower 允许您直接将初始参数" -"传递给策略:" +"恭喜您,现在您已经了解了联邦学习的基础知识。当然,要讨论的内容还有很多,但这只是联邦学习的一个缩影。在本教程的后半部分,我们将进行更详细的介绍。有趣的问题包括" +" 我们如何选择最好的客户端节点参与下一轮学习?聚合模型更新的最佳方法是什么?如何处理失败的客户端节点(落伍者)?" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:370 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:294 msgid "" -"Passing ``initial_parameters`` to the ``FedAvg`` strategy prevents Flower " -"from asking one of the clients for the initial parameters. If we look " -"closely, we can see that the logs do not show any calls to the " -"``FlowerClient.get_parameters`` method." +"Just like we can train a model on the decentralized data of different " +"client nodes, we can also evaluate the model on that data to receive " +"valuable metrics. This is called federated evaluation, sometimes " +"abbreviated as FE. In fact, federated evaluation is an integral part of " +"most federated learning systems." msgstr "" -"向 ``FedAvg`` 策略传递 ``initial_parameters`` 可以防止 Flower 向其中一个客户" -"端询问初始参数。如果我们仔细观察,就会发现日志中没有显示对 ``FlowerClient." -"get_parameters`` 方法的任何调用。" +"就像我们可以在不同客户节点的分散数据上训练一个模型一样,我们也可以在这些数据上对模型进行评估,以获得有价值的指标。这就是所谓的联邦评估,有时简称为" +" FE。事实上,联邦评估是大多数联邦学习系统不可或缺的一部分。" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:382 -msgid "Starting with a customized strategy" -msgstr "从定制战略开始" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:297 +msgid "Federated analytics" +msgstr "联邦分析" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:384 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:299 msgid "" -"We've seen the function ``start_simulation`` before. It accepts a number of " -"arguments, amongst them the ``client_fn`` used to create ``FlowerClient`` " -"instances, the number of clients to simulate ``num_clients``, the number of " -"rounds ``num_rounds``, and the strategy." -msgstr "" -"我们以前见过函数 ``start_simulation``。它接受许多参数,其中包括用于创建 " -"``FlowerClient`` 实例的 ``client_fn``、要模拟的客户数量 ``num_clients``、回合" -"数 ``num_rounds``和策略。" +"In many cases, machine learning isn't necessary to derive value from " +"data. Data analysis can yield valuable insights, but again, there's often" +" not enough data to get a clear answer. What's the average age at which " +"people develop a certain type of health condition? Federated analytics " +"enables such queries over multiple client nodes. It is usually used in " +"conjunction with other privacy-enhancing technologies like secure " +"aggregation to prevent the server from seeing the results submitted by " +"individual client nodes." +msgstr "在很多情况下,机器学习并不是从数据中获取价值的必要条件。数据分析可以产生有价值的见解,但同样,往往没有足够的数据来获得明确的答案。人们患某种健康疾病的平均年龄是多少?联邦分析可以通过多个客户端节点进行此类查询。它通常与安全聚合等其他隐私增强技术结合使用,以防止服务器看到单个客户端节点提交的结果。" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:386 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:305 msgid "" -"The strategy encapsulates the federated learning approach/algorithm, for " -"example, ``FedAvg`` or ``FedAdagrad``. 
Let's try to use a different strategy " -"this time:" +"Differential privacy (DP) is often mentioned in the context of Federated " +"Learning. It is a privacy-preserving method used when analyzing and " +"sharing statistical data, ensuring the privacy of individual " +"participants. DP achieves this by adding statistical noise to the model " +"updates, ensuring any individual participants’ information cannot be " +"distinguished or re-identified. This technique can be considered an " +"optimization that provides a quantifiable privacy protection measure." msgstr "" -"该策略封装了联邦学习方法/算法,例如`FedAvg``或`FedAdagrad``。这次让我们尝试使" -"用不同的策略:" +"差分隐私(DP)经常在联邦学习中被提及。这是一种在分析和共享统计数据时使用的隐私保护方法,可确保单个参与者的隐私。DP " +"通过在模型更新中添加统计噪声来实现这一目的,确保任何个体参与者的信息都无法被区分或重新识别。这种技术可被视为一种优化,提供了一种可量化的隐私保护措施。" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:424 -msgid "Server-side parameter **evaluation**" -msgstr "服务器端参数**评估**" +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:326 +msgid "Flower" +msgstr "Flower" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:426 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:328 msgid "" -"Flower can evaluate the aggregated model on the server-side or on the client-" -"side. Client-side and server-side evaluation are similar in some ways, but " -"different in others." +"Federated learning, federated evaluation, and federated analytics require" +" infrastructure to move machine learning models back and forth, train and" +" evaluate them on local data, and then aggregate the updated models. " +"Flower provides the infrastructure to do exactly that in an easy, " +"scalable, and secure way. In short, Flower presents a unified approach to" +" federated learning, analytics, and evaluation. It allows the user to " +"federate any workload, any ML framework, and any programming language." msgstr "" -"Flower 可以在服务器端或客户端评估聚合模型。客户端和服务器端评估在某些方面相" -"似,但也有不同之处。" +"联邦学习、联邦评估和联邦分析需要基础框架来来回移动机器学习模型,在本地数据上对其进行训练和评估,然后汇总更新的模型。Flower " +"提供的基础架构正是以简单、可扩展和安全的方式实现这些目标的。简而言之,Flower " +"为联邦学习、分析和评估提供了一种统一的方法。它允许用户联邦化任何工作负载、任何 ML 框架和任何编程语言。" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:428 -msgid "" -"**Centralized Evaluation** (or *server-side evaluation*) is conceptually " -"simple: it works the same way that evaluation in centralized machine " -"learning does. If there is a server-side dataset that can be used for " -"evaluation purposes, then that's great. We can evaluate the newly aggregated " -"model after each round of training without having to send the model to " -"clients. We're also fortunate in the sense that our entire evaluation " -"dataset is available at all times." +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:334 +msgid "|83a8daee45da4a98b8d6f24ae098fc50|" msgstr "" -"**集中评估**(或*服务器端评估*)在概念上很简单:它的工作方式与集中式机器学习" -"中的评估方式相同。如果有一个服务器端数据集可用于评估目的,那就太好了。我们可" -"以在每一轮训练后对新聚合的模型进行评估,而无需将模型发送给客户端。我们也很幸" -"运,因为我们的整个评估数据集随时可用。" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:430 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:340 msgid "" -"**Federated Evaluation** (or *client-side evaluation*) is more complex, but " -"also more powerful: it doesn't require a centralized dataset and allows us " -"to evaluate models over a larger set of data, which often yields more " -"realistic evaluation results. 
In fact, many scenarios require us to use " -"**Federated Evaluation** if we want to get representative evaluation results " -"at all. But this power comes at a cost: once we start to evaluate on the " -"client side, we should be aware that our evaluation dataset can change over " -"consecutive rounds of learning if those clients are not always available. " -"Moreover, the dataset held by each client can also change over consecutive " -"rounds. This can lead to evaluation results that are not stable, so even if " -"we would not change the model, we'd see our evaluation results fluctuate " -"over consecutive rounds." -msgstr "" -"**联邦评估**(或*客户端评估*)更为复杂,但也更为强大:它不需要集中的数据集," -"允许我们在更大的数据集上对模型进行评估,这通常会产生更真实的评估结果。事实" -"上,如果我们想得到有代表性的评估结果,很多情况下都需要使用**联邦评估**。但" -"是,这种能力是有代价的:一旦我们开始在客户端进行评估,我们就应该意识到,如果" -"这些客户端并不总是可用,我们的评估数据集可能会在连续几轮学习中发生变化。此" -"外,每个客户端所拥有的数据集也可能在连续几轮学习中发生变化。这可能会导致评估" -"结果不稳定,因此即使我们不改变模型,也会看到评估结果在连续几轮中波动。" +"Flower federated learning server and client nodes (car, scooter, personal" +" computer, roomba, and phone)" +msgstr "Flower联邦学习服务器和客户端节点(汽车、滑板车、个人电脑、roomba 和电话)" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:433 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:353 msgid "" -"We've seen how federated evaluation works on the client side (i.e., by " -"implementing the ``evaluate`` method in ``FlowerClient``). Now let's see how " -"we can evaluate aggregated model parameters on the server-side:" -msgstr "" -"我们已经了解了联邦评估如何在客户端工作(即通过在 ``FlowerClient`` 中实现 " -"``evaluate`` 方法)。现在让我们看看如何在服务器端评估聚合模型参数:" - -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:490 -msgid "Sending/receiving arbitrary values to/from clients" -msgstr "向/从客户端发送/接收任意值" +"Congratulations, you just learned the basics of federated learning and " +"how it relates to the classic (centralized) machine learning!" +msgstr "恭喜您,您刚刚了解了联邦学习的基础知识,以及它与传统(集中式)机器学习的关系!" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:492 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:355 msgid "" -"In some situations, we want to configure client-side execution (training, " -"evaluation) from the server-side. One example for that is the server asking " -"the clients to train for a certain number of local epochs. Flower provides a " -"way to send configuration values from the server to the clients using a " -"dictionary. Let's look at an example where the clients receive values from " -"the server through the ``config`` parameter in ``fit`` (``config`` is also " -"available in ``evaluate``). The ``fit`` method receives the configuration " -"dictionary through the ``config`` parameter and can then read values from " -"this dictionary. In this example, it reads ``server_round`` and " -"``local_epochs`` and uses those values to improve the logging and configure " -"the number of local training epochs:" -msgstr "" -"在某些情况下,我们希望从服务器端配置客户端的执行(训练、评估)。其中一个例子" -"就是服务器要求客户端训练一定数量的本地遍历。Flower 提供了一种使用字典从服务器" -"向客户端发送配置值的方法。让我们来看一个例子:客户端通过 ``fit`` 中的 " -"``config`` 参数从服务器接收配置值(``evaluate`` 中也有 ``config`` 参数)。" -"``fit`` 方法通过 ``config`` 参数接收配置字典,然后从字典中读取值。在本例中," -"它读取了 ``server_round`` 和 ``local_epochs``,并使用这些值来改进日志记录和配" -"置本地训练遍历的数量:" +"In the next part of this tutorial, we are going to build a first " +"federated learning system with Flower." 
+msgstr "在本教程的下一部分,我们将用 Flower 建立第一个联邦学习系统。" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:546 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:373 msgid "" -"So how can we send this config dictionary from server to clients? The built-" -"in Flower Strategies provide way to do this, and it works similarly to the " -"way server-side evaluation works. We provide a function to the strategy, and " -"the strategy calls this function for every round of federated learning:" +"The `Flower Federated Learning Tutorial - Part 1 " +"`__ shows how to build a simple federated learning system " +"with PyTorch and Flower." msgstr "" -"那么,如何将配置字典从服务器发送到客户端呢?内置的 \"Flower策略\"(Flower " -"Strategies)提供了这样的方法,其工作原理与服务器端评估的工作原理类似。我们为" -"策略提供一个函数,策略会在每一轮联邦学习中调用这个函数:" +"`Flower 联邦学习教程 - 第 1 部分 `__ 展示了如何使用 PyTorch 和 Flower " +"构建一个简单的联邦学习系统。" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:576 -msgid "" -"Next, we'll just pass this function to the FedAvg strategy before starting " -"the simulation:" -msgstr "接下来,我们只需在开始模拟前将此函数传递给 FedAvg 策略即可:" +#~ msgid "Before the release" +#~ msgstr "发布前" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:613 -msgid "" -"As we can see, the client logs now include the current round of federated " -"learning (which they read from the ``config`` dictionary). We can also " -"configure local training to run for one epoch during the first and second " -"round of federated learning, and then for two epochs during the third round." -msgstr "" -"我们可以看到,客户端日志现在包含了当前一轮的联邦学习(从 ``config`` 字典中读" -"取)。我们还可以将本地训练配置为在第一轮和第二轮联邦学习期间运行一个遍历,然" -"后在第三轮联邦学习期间运行两个遍历。" +#~ msgid "" +#~ "Update the changelog (``changelog.md``) with" +#~ " all relevant changes that happened " +#~ "after the last release. If the " +#~ "last release was tagged ``v1.2.0``, you" +#~ " can use the following URL to " +#~ "see all commits that got merged " +#~ "into ``main`` since then:" +#~ msgstr "" +#~ "更新更新日志 (``changelog.md``),加入上次发布后发生的所有相关变更。如果上次发布的版本被标记为 " +#~ "``v1.2.0``,则可以使用以下 URL 查看此后合并到 ``main`` 的所有提交:" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:615 -msgid "" -"Clients can also return arbitrary values to the server. To do so, they " -"return a dictionary from ``fit`` and/or ``evaluate``. We have seen and used " -"this concept throughout this notebook without mentioning it explicitly: our " -"``FlowerClient`` returns a dictionary containing a custom key/value pair as " -"the third return value in ``evaluate``." -msgstr "" -"客户端还可以向服务器返回任意值。为此,它们会从 ``fit`` 和/或 ``evaluate`` 返" -"回一个字典。我们在本笔记中看到并使用了这一概念,但并未明确提及:我们的 " -"``FlowerClient`` 返回一个包含自定义键/值对的字典,作为 ``evaluate`` 中的第三" -"个返回值。" +#~ msgid "" +#~ "`GitHub: Compare v1.2.0...main " +#~ "`_" +#~ msgstr "" +#~ "`GitHub: Compare v1.2.0...main " +#~ "`_" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:627 -msgid "Scaling federated learning" -msgstr "扩大联邦学习的规模" +#~ msgid "" +#~ "Thank the authors who contributed since" +#~ " the last release. This can be " +#~ "done by running the ``./dev/add-" +#~ "shortlog.sh`` convenience script (it can " +#~ "be ran multiple times and will " +#~ "update the names in the list if" +#~ " new contributors were added in the" +#~ " meantime)." 
+#~ msgstr "" +#~ "感谢自上次发布以来做出贡献的作者。可以通过运行 ``./dev/add-shortlog.sh`` " +#~ "方便脚本来完成(可以多次运行,如果在此期间有新的贡献者加入,则会更新列表中的名字)。" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:629 -msgid "" -"As a last step in this notebook, let's see how we can use Flower to " -"experiment with a large number of clients." -msgstr "作为本笔记的最后一步,让我们看看如何使用 Flower 对大量客户端进行实验。" +#~ msgid "" +#~ "Update the ``changelog.md`` section header " +#~ "``Unreleased`` to contain the version " +#~ "number and date for the release " +#~ "you are building. Create a pull " +#~ "request with the change." +#~ msgstr "" +#~ "更新 ``changelog.md`` 部分的标题 ``Unreleased`` " +#~ "以包含你正在构建的版本的版本号和日期。创建一个包含更改的拉取请求。" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:651 -#, python-format -msgid "" -"We now have 1000 partitions, each holding 45 training and 5 validation " -"examples. Given that the number of training examples on each client is quite " -"small, we should probably train the model a bit longer, so we configure the " -"clients to perform 3 local training epochs. We should also adjust the " -"fraction of clients selected for training during each round (we don't want " -"all 1000 clients participating in every round), so we adjust " -"``fraction_fit`` to ``0.05``, which means that only 5% of available clients " -"(so 50 clients) will be selected for training each round:" -msgstr "" -"现在我们有 1000 个分区,每个分区有 45 个训练数据和 5 个验证数据。鉴于每个客户" -"端上的训练示例数量较少,我们可能需要对模型进行更长时间的训练,因此我们将客户" -"端配置为执行 3 个本地训练遍历。我们还应该调整每轮训练中被选中的客户端的比例" -"(我们不希望每轮训练都有 1000 个客户端参与),因此我们将 ``fraction_fit`` 调" -"整为 ``0.05``,这意味着每轮训练只选中 5%的可用客户端(即 50 个客户端):" +#~ msgid "" +#~ "Second, create a virtual environment " +#~ "(and activate it). If you chose to" +#~ " use :code:`pyenv` (with the :code" +#~ ":`pyenv-virtualenv` plugin) and already " +#~ "have it installed , you can use" +#~ " the following convenience script (by " +#~ "default it will use :code:`Python " +#~ "3.8.17`, but you can change it by" +#~ " providing a specific :code:``)::" +#~ msgstr "" +#~ "其次,创建虚拟环境(并激活它)。如果您选择使用 :code:`pyenv`(使用 :code:`pyenv-" +#~ "virtualenv`插件),并且已经安装了该插件,则可以使用下面的便捷脚本(默认情况下使用 " +#~ ":code:`Python3.8.17`,但您可以通过提供特定的 :code:`<版本>`来更改)::" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:699 -msgid "" -"In this notebook, we've seen how we can gradually enhance our system by " -"customizing the strategy, initializing parameters on the server side, " -"choosing a different strategy, and evaluating models on the server-side. " -"That's quite a bit of flexibility with so little code, right?" -msgstr "" -"在本笔记中,我们看到了如何通过自定义策略、在服务器端初始化参数、选择不同的策" -"略以及在服务器端评估模型来逐步增强我们的系统。用这么少的代码就能实现这么大的" -"灵活性,不是吗?" +#~ msgid "flwr (Python API reference)" +#~ msgstr "flwr(Python API 参考)" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:701 -msgid "" -"In the later sections, we've seen how we can communicate arbitrary values " -"between server and clients to fully customize client-side execution. With " -"that capability, we built a large-scale Federated Learning simulation using " -"the Flower Virtual Client Engine and ran an experiment involving 1000 " -"clients in the same workload - all in a Jupyter Notebook!" -msgstr "" -"在后面的章节中,我们将看到如何在服务器和客户端之间传递任意值,以完全自定义客" -"户端执行。有了这种能力,我们使用 Flower 虚拟客户端引擎构建了一个大规模的联邦" -"学习模拟,并在 Jupyter Notebook 中进行了一次实验,在相同的工作负载中运行了 " -"1000 个客户端!" +#~ msgid "..." +#~ msgstr "..." 
-#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:719 -msgid "" -"The `Flower Federated Learning Tutorial - Part 3 `__ shows how " -"to build a fully custom ``Strategy`` from scratch." -msgstr "" -"`Flower 联邦学习教程 - 第 3 部分 `__ 展示了如何从头开始构建完全自定" -"义的 \"策略\"。" +#~ msgid "Starting a client with an insecure server connection:" +#~ msgstr "使用不安全的服务器连接启动客户端:" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:9 -msgid "What is Federated Learning?" -msgstr "什么是联邦学习?" +#~ msgid "server.strategy.FedAvg" +#~ msgstr "server.strategy.FedAvg" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:13 -msgid "" -"In this tutorial, you will learn what federated learning is, build your " -"first system in Flower, and gradually extend it. If you work through all " -"parts of the tutorial, you will be able to build advanced federated learning " -"systems that approach the current state of the art in the field." -msgstr "" -"在本教程中,你将了解什么是联邦学习,用 Flower 搭建第一个系统,并逐步对其进行" -"扩展。如果你能完成本教程的所有部分,你就能构建高级的联邦学习系统,从而接近该" -"领域当前的技术水平。" +#~ msgid "server.strategy.FedAvgM" +#~ msgstr "server.strategy.FedAvgM" + +#~ msgid "Configurable FedAvg with Momentum strategy implementation." +#~ msgstr "可配置的 FedAvg 动量策略实施。" + +#~ msgid "Fraction of clients used during training. Defaults to 0.1." +#~ msgstr "训练期间使用客户的比例。默认为 0.1。" + +#~ msgid "Fraction of clients used during validation. Defaults to 0.1." +#~ msgstr "验证过程中使用的客户端比例。默认为 0.1。" + +#~ msgid "server.strategy.FedMedian" +#~ msgstr "server.strategy.FedMedian" + +#~ msgid "server.strategy.QFedAvg" +#~ msgstr "server.strategy.QFedAvg" + +#~ msgid "server.strategy.FedOpt" +#~ msgstr "server.strategy.FedOpt" + +#~ msgid "Configurable FedAdagrad strategy implementation." +#~ msgstr "可配置的 FedAdagrad 策略实施。" + +#~ msgid "Federated Optim strategy interface." +#~ msgstr "Federated Optim 策略界面。" + +#~ msgid "server.strategy.FedProx" +#~ msgstr "server.strategy.FedProx" + +#~ msgid "Configurable FedProx strategy implementation." +#~ msgstr "可配置的 FedProx 策略实施。" + +#~ msgid "server.strategy.FedAdagrad" +#~ msgstr "server.strategy.FedAdagrad" + +#~ msgid "Paper: https://arxiv.org/abs/2003.00295" +#~ msgstr "论文: https://arxiv.org/abs/2003.00295" + +#~ msgid "Federated learning strategy using Adagrad on server-side." +#~ msgstr "在服务器端使用 Adagrad 的联邦学习策略。" + +#~ msgid "server.strategy.FedAdam" +#~ msgstr "server.strategy.FedAdam" + +#~ msgid "server.strategy.FedYogi" +#~ msgstr "server.strategy.FedYogi" + +#~ msgid "Adaptive Federated Optimization using Yogi." +#~ msgstr "使用 Yogi 的自适应联合优化。" + +#~ msgid "Federated learning strategy using Yogi on server-side." +#~ msgstr "在服务器端使用 Yogi 的联邦学习策略。" + +#~ msgid "Paper: https://arxiv.org/abs/1803.01498" +#~ msgstr "论文:https://arxiv.org/abs/1803.01498" + +#~ msgid "server.strategy.Krum" +#~ msgstr "server.strategy.Krum" + +#~ msgid "Configurable Krum strategy implementation." +#~ msgstr "可配置的 Krum 策略实施。" + +#~ msgid "server.strategy.Bulyan" +#~ msgstr "server.strategy.Bulyan" + +#~ msgid "Bulyan strategy implementation." +#~ msgstr "Bulyan策略的实施。" + +#~ msgid "server.strategy.FedXgbNnAvg" +#~ msgstr "server.strategy.FedXgbNnAvg" + +#~ msgid "Federated XGBoost [Ma et al., 2023] strategy." 
+#~ msgstr "Federated XGBoost [Ma 等人,2023] 策略。" + +#~ msgid "server.strategy.DPFedAvgAdaptive" +#~ msgstr "server.strategy.DPFedAvgAdaptive" + +#~ msgid "" +#~ "**Fix the incorrect return types of " +#~ "Strategy** " +#~ "([#2432](https://github.com/adap/flower/pull/2432/files))" +#~ msgstr "" +#~ "**修复策略的错误返回类型** " +#~ "([#2432](https://github.com/adap/flower/pull/2432/files))" + +#~ msgid "" +#~ "The types of the return values in" +#~ " the docstrings in two methods " +#~ "(`aggregate_fit` and `aggregate_evaluate`) now " +#~ "match the hint types in the code." +#~ msgstr "" +#~ "两个方法(\"aggregate_fit \"和 " +#~ "\"aggregate_evaluate\")的文档说明中的返回值类型现在与代码中的提示类型一致。" + +#~ msgid "" +#~ "**Update Flower Examples** " +#~ "([#2384](https://github.com/adap/flower/pull/2384),[#2425](https://github.com/adap/flower/pull/2425)," +#~ " [#2526](https://github.com/adap/flower/pull/2526))" +#~ msgstr "" +#~ "** 更新 Flower Examples** " +#~ "([#2384](https://github.com/adap/flower/pull/2384),[#2425](https://github.com/adap/flower/pull/2425)," +#~ " [#2526](https://github.com/adap/flower/pull/2526))" + +#~ msgid "" +#~ "That's it for the client. We only" +#~ " have to implement :code:`Client` or " +#~ ":code:`NumPyClient` and call " +#~ ":code:`fl.client.start_client()`. The string " +#~ ":code:`\"0.0.0.0:8080\"` tells the client " +#~ "which server to connect to. In our" +#~ " case we can run the server and" +#~ " the client on the same machine, " +#~ "therefore we use :code:`\"0.0.0.0:8080\"`. If" +#~ " we run a truly federated workload" +#~ " with the server and clients running" +#~ " on different machines, all that " +#~ "needs to change is the " +#~ ":code:`server_address` we pass to the " +#~ "client." +#~ msgstr "" +#~ "对于客户端就需要做这么多。我们仅需要实现 " +#~ ":code:`Client`或者:code:`NumPyClient`然后调用:code:`fl.client.start_client()`。字符串" +#~ " :code:`\"0.0.0.0:8080\"` " +#~ "告诉客户端要连接到哪个服务器。在我们的例子中,我们可以在同一台机器上运行服务器和客户端,因此我们使用:code:`\"0.0.0.0:8080\"`。如果我们运行真正联邦学习的工作负载,服务器和客户端在不同的机器上运行,则需要更改的只是我们传递给客户端的" +#~ " server_address 。" + +#~ msgid "" +#~ "That's it for the client. We only" +#~ " have to implement :code:`Client` or " +#~ ":code:`NumPyClient` and call " +#~ ":code:`fl.client.start_client()`. The string " +#~ ":code:`\"[::]:8080\"` tells the client which" +#~ " server to connect to. In our " +#~ "case we can run the server and " +#~ "the client on the same machine, " +#~ "therefore we use :code:`\"[::]:8080\"`. If " +#~ "we run a truly federated workload " +#~ "with the server and clients running " +#~ "on different machines, all that needs" +#~ " to change is the :code:`server_address`" +#~ " we point the client at." 
+#~ msgstr "" +#~ "对于客户来说就是这样了。我们只需实现 :code:`Client` 或 " +#~ ":code:`NumPyClient` 并调用:code:`fl.client.start_client()` " +#~ "即可。字符串 :code:`\"[::]:8080\"` " +#~ "告诉客户端要连接到哪个服务器。在我们的例子中,我们可以在同一台机器上运行服务器和客户端,因此我们使用 " +#~ ":code:`\"[::]:8080\"`。如果我们运行真正联邦的工作负载,服务器和客户端运行在不同的机器上,则需要更改的只是我们指向客户端的" +#~ " server_address 。" + +#~ msgid "" +#~ "Let's now load the CIFAR-10 training " +#~ "and test set, partition them into " +#~ "ten smaller datasets (each split into" +#~ " training and validation set), and " +#~ "wrap the resulting partitions by " +#~ "creating a PyTorch ``DataLoader`` for " +#~ "each of them:" +#~ msgstr "" +#~ "现在,让我们加载 CIFAR-10 训练集和测试集,将它们分割成 10 " +#~ "个较小的数据集(每个数据集又分为训练集和验证集),并通过为每个数据集创建 PyTorch " +#~ "``DataLoader`` 来包装由此产生的分割集:" + +#~ msgid "|e1dd4b4129b040bea23a894266227080|" +#~ msgstr "|e1dd4b4129b040bea23a894266227080|" + +#~ msgid "|c0d4cc6a442948dca8da40d2440068d9|" +#~ msgstr "|c0d4cc6a442948dca8da40d2440068d9|" + +#~ msgid "|174e1e4fa1f149a19bfbc8bc1126f46a|" +#~ msgstr "|174e1e4fa1f149a19bfbc8bc1126f46a|" + +#~ msgid "|4e021a3dc08249d2a89daa3ab03c2714|" +#~ msgstr "|4e021a3dc08249d2a89daa3ab03c2714|" + +#~ msgid "|e74a1d5ce7eb49688651f2167a59065b|" +#~ msgstr "|e74a1d5ce7eb49688651f2167a59065b|" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:15 -msgid "" -"🧑‍🏫 This tutorial starts at zero and expects no familiarity with federated " -"learning. Only a basic understanding of data science and Python programming " -"is assumed." -msgstr "" -"🧑‍🏫 本教程从零开始,不要求熟悉联邦学习。仅假定对数据科学和 Python 编程有基本" -"了解。" +#~ msgid "|eb29ec4c7aef4e93976795ed72df647e|" +#~ msgstr "|eb29ec4c7aef4e93976795ed72df647e|" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:17 -msgid "" -"`Star Flower on GitHub `__ ⭐️ and join the " -"open-source Flower community on Slack to connect, ask questions, and get " -"help: `Join Slack `__ 🌼 We'd love to hear " -"from you in the ``#introductions`` channel! And if anything is unclear, head " -"over to the ``#questions`` channel." -msgstr "" -"`Star Flower on GitHub `__ ⭐️ 并加入 Slack 上" -"的开源 Flower 社区,进行交流、提问并获得帮助: 加入 Slack `__ 🌼 我们希望在 ``#introductions`` 频道听到您的声音!如果有任何" -"不清楚的地方,请访问 ``#questions`` 频道。" +#~ msgid "|c2f699d8ac484f5081721a6f1511f70d|" +#~ msgstr "|c2f699d8ac484f5081721a6f1511f70d|" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:19 -msgid "Let's get started!" -msgstr "让我们开始吧!" +#~ msgid "|cf42accdacbf4e5eb4fa0503108ba7a7|" +#~ msgstr "|cf42accdacbf4e5eb4fa0503108ba7a7|" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:31 -msgid "Classic machine learning" -msgstr "经典机器学习" +#~ msgid "|5ec8356bc2564fa09178b1ceed5beccc|" +#~ msgstr "|5ec8356bc2564fa09178b1ceed5beccc|" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:33 -msgid "" -"Before we begin to discuss federated learning, let us quickly recap how most " -"machine learning works today." -msgstr "" -"在开始讨论联邦学习之前,让我们先快速回顾一下目前大多数机器学习的工作原理。" +#~ msgid "|7c9329e97bd0430bad335ab605a897a7|" +#~ msgstr "|7c9329e97bd0430bad335ab605a897a7|" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:35 -msgid "" -"In machine learning, we have a model, and we have data. The model could be a " -"neural network (as depicted here), or something else, like classical linear " -"regression." 
-msgstr "" -"在机器学习中,我们有一个模型和数据。模型可以是一个神经网络(如图所示),也可" -"以是其他东西,比如经典的线性回归。" +#~ msgid "|88002bbce1094ba1a83c9151df18f707|" +#~ msgstr "|88002bbce1094ba1a83c9151df18f707|" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:41 -msgid "|2b5c62c529f6416f840c594cce062fbb|" -msgstr "" +#~ msgid "|391766aee87c482c834c93f7c22225e2|" +#~ msgstr "|391766aee87c482c834c93f7c22225e2|" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:109 -msgid "Model and data" -msgstr "模型和数据" +#~ msgid "|93b9a15bd27f4e91b40f642c253dfaac|" +#~ msgstr "|93b9a15bd27f4e91b40f642c253dfaac|" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:47 -msgid "" -"We train the model using the data to perform a useful task. A task could be " -"to detect objects in images, transcribe an audio recording, or play a game " -"like Go." -msgstr "" -"我们使用数据来训练模型,以完成一项有用的任务。任务可以是检测图像中的物体、转" -"录音频或玩围棋等游戏。" +#~ msgid "|a23d9638f96342ef9d25209951e2d564|" +#~ msgstr "|a23d9638f96342ef9d25209951e2d564|" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:53 -msgid "|90b334680cb7467d9a04d39b8e8dca9f|" -msgstr "" +#~ msgid "Upload the whl (e.g., ``flwr-1.6.0-py3-none-any.whl``)" +#~ msgstr "上传 whl(例如 ``flwr-1.6.0-py3-none-any.whl``)" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:111 -msgid "Train model using data" -msgstr "使用数据训练模型" +#~ msgid "" +#~ "Change ``!pip install -q 'flwr[simulation]'" +#~ " torch torchvision matplotlib`` to ``!pip" +#~ " install -q 'flwr-1.6.0-py3-none-" +#~ "any.whl[simulation]' torch torchvision matplotlib``" +#~ msgstr "" +#~ "将``!pip install -q 'flwr[simulation]' torch" +#~ " torchvision matplotlib``更改为``!pip install -q " +#~ "'flwr-1.6.0-py3-none-any.whl[simulation]' torch " +#~ "torch torchvision matplotlib``" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:59 -msgid "" -"Now, in practice, the training data we work with doesn't originate on the " -"machine we train the model on. It gets created somewhere else." -msgstr "" -"实际上,我们使用的训练数据并不来自我们训练模型的机器。它是在其他地方创建的。" +#~ msgid "" +#~ "All that's left to do it to " +#~ "define a function that loads both " +#~ "model and data, creates a " +#~ ":code:`CifarClient`, and starts this client." +#~ " You load your data and model " +#~ "by using :code:`cifar.py`. Start " +#~ ":code:`CifarClient` with the function " +#~ ":code:`fl.client.start_numpy_client()` by pointing " +#~ "it at the same IP address we " +#~ "used in :code:`server.py`:" +#~ msgstr "" +#~ "剩下要做的就是定义一个加载模型和数据的函数,创建一个 :code:`CifarClient` 并启动该客户端。使用" +#~ " :code:`cifar.py` 加载数据和模型。使用函数 " +#~ ":code:`fl.client.start_numpy_client()` 启动 " +#~ ":code:`CifarClient`,将其指向我们在 :code:`server.py` 中使用的相同 " +#~ "IP 地址:" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:61 -msgid "" -"It originates on a smartphone by the user interacting with an app, a car " -"collecting sensor data, a laptop receiving input via the keyboard, or a " -"smart speaker listening to someone trying to sing a song." -msgstr "" -"它源于智能手机上用户与应用程序的交互、汽车上传感器数据的收集、笔记本电脑上键" -"盘输入的接收,或者智能扬声器上某人试着唱的歌。" +#~ msgid "" +#~ "The :code:`VirtualClientEngine` schedules, launches" +#~ " and manages `virtual` clients. These " +#~ "clients are identical to `non-virtual`" +#~ " clients (i.e. 
the ones you launch" +#~ " via the command `flwr.client.start_numpy_client" +#~ " `_)" +#~ " in the sense that they can be" +#~ " configure by creating a class " +#~ "inheriting, for example, from " +#~ "`flwr.client.NumPyClient `_ and therefore " +#~ "behave in an identical way. In " +#~ "addition to that, clients managed by " +#~ "the :code:`VirtualClientEngine` are:" +#~ msgstr "" +#~ "代码:`VirtualClientEngine`调度、启动和管理`虚拟`客户端。这些客户端与 \"非虚拟 " +#~ "\"客户端(即通过命令 `flwr.client.start_numpy_client `_启动的客户端)完全相同,它们可以通过创建一个继承自 \"flwr.client.NumPyClient " +#~ "`_\" " +#~ "的类来配置,因此行为方式也完全相同。除此之外,由 :code:`VirtualClientEngine` " +#~ "管理的客户端还包括:" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:67 -msgid "|65764ceee89f4335bfd93fd0b115e831|" -msgstr "" +#~ msgid "Example: Walk-Through PyTorch & MNIST" +#~ msgstr "实例: PyTorch 和 MNIST 的演练" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:113 -msgid "Data on a phone" -msgstr "手机上的数据" +#~ msgid "" +#~ "In this tutorial we will learn, " +#~ "how to train a Convolutional Neural " +#~ "Network on MNIST using Flower and " +#~ "PyTorch." +#~ msgstr "在本教程中,我们将学习如何使用 Flower 和 PyTorch 在 MNIST 上训练卷积神经网络。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:73 -msgid "" -"What's also important to mention, this \"somewhere else\" is usually not " -"just one place, it's many places. It could be several devices all running " -"the same app. But it could also be several organizations, all generating " -"data for the same task." -msgstr "" -"值得一提的是,这个 \"其他地方 \"通常不只是一个地方,而是很多地方。它可能是多" -"个运行同一应用程序的设备。但也可能是多个组织,都在为同一任务生成数据。" +#~ msgid "" +#~ "Since we want to use PyTorch to" +#~ " solve a computer vision task, let's" +#~ " go ahead an install PyTorch and " +#~ "the **torchvision** library:" +#~ msgstr "我们想用 PyTorch 来做计算机视觉任务,需要先安装 PyTorch 和 **torchvision** 库:" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:79 -msgid "|d97319ec28bb407ea0ab9705e38f3bcf|" -msgstr "" +#~ msgid "Ready... Set... Train!" +#~ msgstr "准备...设置...训练!" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:115 -msgid "Data is on many devices" -msgstr "数据存在于多种设备中" +#~ msgid "" +#~ "Now that we have all our " +#~ "dependencies installed, let's run a " +#~ "simple distributed training with two " +#~ "clients and one server. Our training " +#~ "procedure and network architecture are " +#~ "based on PyTorch's `Basic MNIST Example" +#~ " `_. " +#~ "This will allow you see how easy" +#~ " it is to wrap your code with" +#~ " Flower and begin training in a " +#~ "federated way. We provide you with " +#~ "two helper scripts, namely *run-" +#~ "server.sh*, and *run-clients.sh*. Don't " +#~ "be afraid to look inside, they are" +#~ " simple enough =)." +#~ msgstr "" +#~ "现在我们已经安装了所有的依赖包,让我们用两个客户端和一个服务器来运行一个简单的分布式训练。我们的训练过程和网络架构基于 " +#~ "PyTorch 的 `Basic MNIST Example " +#~ "`_。您会发现用 " +#~ "Flower 来封装您的代码并进行联邦学习训练是多么容易。我们为您提供了两个辅助脚本,即 *run-" +#~ "server.sh* 和 *run-clients.sh*。别害怕,它们很简单 =)。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:85 -msgid "" -"So to use machine learning, or any kind of data analysis, the approach that " -"has been used in the past was to collect all data on a central server. This " -"server can be somewhere in a data center, or somewhere in the cloud." 
-msgstr "" -"因此,要使用机器学习或任何类型的数据分析,过去使用的方法是在中央服务器上收集" -"所有数据。这个服务器可以在数据中心的某个地方,也可以在云端的某个地方。" +#~ msgid "" +#~ "Go ahead and launch on a terminal" +#~ " the *run-server.sh* script first as" +#~ " follows:" +#~ msgstr "首先在终端上启动 *run-server.sh* 脚本,如下所示:" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:91 -msgid "|11e95ac83a8548d8b3505b4663187d07|" -msgstr "" +#~ msgid "Now that the server is up and running, go ahead and launch the clients." +#~ msgstr "现在服务器已经启动并运行,请继续启动客户端。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:117 -msgid "Central data collection" -msgstr "集中数据收集" +#~ msgid "" +#~ "Et voilà! You should be seeing the" +#~ " training procedure and, after a few" +#~ " iterations, the test accuracy for " +#~ "each client." +#~ msgstr "然后就可以了!您应该能看到训练过程,以及经过几次反复后,每个客户端的测试准确率。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:97 -msgid "" -"Once all the data is collected in one place, we can finally use machine " -"learning algorithms to train our model on the data. This is the machine " -"learning approach that we've basically always relied on." -msgstr "" -"一旦所有数据都收集到一处,我们最终就可以使用机器学习算法在数据上训练我们的模" -"型。这就是我们基本上一直依赖的机器学习方法。" +#~ msgid "Now, let's see what is really happening inside." +#~ msgstr "现在,让我们看看里面到底发生了什么。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:103 -msgid "|1dab2f3a23674abc8a6731f20fa10730|" -msgstr "" +#~ msgid "" +#~ "Inside the server helper script *run-" +#~ "server.sh* you will find the following" +#~ " code that basically runs the " +#~ ":code:`server.py`" +#~ msgstr "在服务器辅助脚本 *run-server.sh* 中,你可以找到以下代码,这些代码基本上都是运行 :code:`server.py` 的代码" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:119 -msgid "Central model training" -msgstr "集中模型训练" +#~ msgid "" +#~ "We can go a bit deeper and " +#~ "see that :code:`server.py` simply launches " +#~ "a server that will coordinate three " +#~ "rounds of training. Flower Servers are" +#~ " very customizable, but for simple " +#~ "workloads, we can start a server " +#~ "using the :ref:`start_server ` function and leave " +#~ "all the configuration possibilities at " +#~ "their default values, as seen below." +#~ msgstr "" +#~ "我们可以再深入一点,:code:`server.py` 只是启动了一个服务器,该服务器将协调三轮训练。Flower " +#~ "服务器是非常容易修改的,但对于简单的工作,我们可以使用 :ref:`start_server `函数启动服务器,并将所有可能的配置保留为默认值,如下所示。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:130 -msgid "Challenges of classical machine learning" -msgstr "经典机器学习面临的挑战" +#~ msgid "" +#~ "Next, let's take a look at the " +#~ "*run-clients.sh* file. You will see " +#~ "that it contains the main loop " +#~ "that starts a set of *clients*." +#~ msgstr "接下来,让我们看看 *run-clients.sh* 文件。您会看到它包含了用来启动多个 *客户端* 的代码。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:132 -msgid "" -"The classic machine learning approach we've just seen can be used in some " -"cases. Great examples include categorizing holiday photos, or analyzing web " -"traffic. Cases, where all the data is naturally available on a centralized " -"server." -msgstr "" -"我们刚刚看到的经典机器学习方法可以在某些情况下使用。很好的例子包括对假日照片" -"进行分类或分析网络流量。在这些案例中,所有数据自然都可以在中央服务器上获得。" +#~ msgid "" +#~ "**cid**: is the client ID. It is" +#~ " an integer that uniquely identifies " +#~ "client identifier." +#~ msgstr "**cid**:是客户 ID。它是一个整数,可唯一标识客户标识符。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:138 -msgid "|7f0ee162da38450788493a21627306f7|" -msgstr "" +#~ msgid "**sever_address**: String that identifies IP and port of the server." 
+#~ msgstr "**sever_address**: 标识服务器 IP 和端口的字符串。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:173 -msgid "Centralized possible" -msgstr "可集中管理" +#~ msgid "" +#~ "**nb_clients**: This defines the number " +#~ "of clients being created. This piece " +#~ "of information is not required by " +#~ "the client, but it helps us " +#~ "partition the original MNIST dataset to" +#~ " make sure that every client is " +#~ "working on unique subsets of both " +#~ "*training* and *test* sets." +#~ msgstr "" +#~ "**nb_clients**: 这定义了正在创建的客户端数量。客户端并不需要这一信息,但它有助于我们对原始 " +#~ "MNIST 数据集进行划分,以确保每个客户端都在 *training* 和 *test*" +#~ " 数据集上有独立的数据。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:144 -msgid "" -"But the approach can not be used in many other cases. Cases, where the data " -"is not available on a centralized server, or cases where the data available " -"on one server is not enough to train a good model." -msgstr "" -"但这种方法并不适用于许多其他情况。例如,集中服务器上没有数据,或者一台服务器" -"上的数据不足以训练出一个好的模型。" +#~ msgid "" +#~ "Again, we can go deeper and look" +#~ " inside :code:`flwr_example/quickstart-" +#~ "pytorch/client.py`. After going through the" +#~ " argument parsing code at the " +#~ "beginning of our :code:`main` function, " +#~ "you will find a call to " +#~ ":code:`mnist.load_data`. This function is " +#~ "responsible for partitioning the original " +#~ "MNIST datasets (*training* and *test*) " +#~ "and returning a :code:`torch.utils.data.DataLoader`" +#~ " s for each of them. We then" +#~ " instantiate a :code:`PytorchMNISTClient` object" +#~ " with our client ID, our DataLoaders," +#~ " the number of epochs in each " +#~ "round, and which device we want to" +#~ " use for training (CPU or GPU)." +#~ msgstr "" +#~ "我们可以深入看一下 :code:`flwr_example/quickstart-" +#~ "pytorch/client.py`。查看 :code:`main` 函数开头的参数解析代码后,你会发现一个对" +#~ " :code:`mnist.load_data` 的调用。该函数负责分割原始 MNIST " +#~ "数据集(*training* 和 *test*),并为每个数据集返回一个 " +#~ ":code:`torch.utils.data.DataLoader` 。然后,我们实例化一个 " +#~ ":code:`PytorchMNISTClient` 对象,其中包含我们的客户端 ID、 " +#~ "DataLoader、每一轮中的遍历数,以及我们希望用于训练的设备(CPU 或 GPU)。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:150 -msgid "|296a1fb72c514b23b3d8905ff0ff98c6|" -msgstr "" +#~ msgid "" +#~ "The :code:`PytorchMNISTClient` object when " +#~ "finally passed to :code:`fl.client.start_client` " +#~ "along with the server's address as " +#~ "the training process begins." +#~ msgstr "" +#~ "当训练过程开始时,:code:`PytorchMNISTClient` 对象会连同服务器地址一起传递给 " +#~ ":code:`fl.client.start_client`。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:175 -msgid "Centralized impossible" -msgstr "无法集中" +#~ msgid "A Closer Look" +#~ msgstr "仔细看一下" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:156 -msgid "" -"There are many reasons why the classic centralized machine learning approach " -"does not work for a large number of highly important real-world use cases. 
" -"Those reasons include:" -msgstr "" -"传统的集中式机器学习方法无法满足现实世界中大量极为重要的使用案例,原因有很" -"多。这些原因包括:" +#~ msgid "" +#~ "Now, let's look closely into the " +#~ ":code:`PytorchMNISTClient` inside :code:`flwr_example" +#~ ".quickstart-pytorch.mnist` and see what it" +#~ " is doing:" +#~ msgstr "" +#~ "现在,让我们仔细研究一下 :code:`flwr_example.quickstart-pytorch.mnist`" +#~ " 中的 :code:`PytorchMNISTClient`,看看它在做什么:" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:158 -msgid "" -"**Regulations**: GDPR (Europe), CCPA (California), PIPEDA (Canada), LGPD " -"(Brazil), PDPL (Argentina), KVKK (Turkey), POPI (South Africa), FSS " -"(Russia), CDPR (China), PDPB (India), PIPA (Korea), APPI (Japan), PDP " -"(Indonesia), PDPA (Singapore), APP (Australia), and other regulations " -"protect sensitive data from being moved. In fact, those regulations " -"sometimes even prevent single organizations from combining their own users' " -"data for artificial intelligence training because those users live in " -"different parts of the world, and their data is governed by different data " -"protection regulations." -msgstr "" -"**法规**: GDPR(欧洲)、CCPA(加利福尼亚)、PIPEDA(加拿大)、LGPD(巴西)、" -"PDPL(阿根廷)、KVKK(土耳其)、POPI(南非)、FSS(俄罗斯)、CDPR(中国)、" -"PDPB(印度)、PIPA(韩国)、APPI(日本)、PDP(印度尼西亚)、PDPA(新加坡)、" -"APP(澳大利亚)等法规保护敏感数据不被移动。事实上,这些法规有时甚至会阻止单个" -"组织将自己的用户数据用于人工智能培训,因为这些用户生活在世界不同地区,他们的" -"数据受不同的数据保护法规管辖。" +#~ msgid "" +#~ "The first thing to notice is that" +#~ " :code:`PytorchMNISTClient` instantiates a CNN" +#~ " model inside its constructor" +#~ msgstr "首先要注意的是 :code:`PytorchMNISTClient` 在其构造函数中实例化了一个 CNN 模型" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:160 -msgid "" -"**User preference**: In addition to regulation, there are use cases where " -"users just expect that no data leaves their device, ever. If you type your " -"passwords and credit card info into the digital keyboard of your phone, you " -"don't expect those passwords to end up on the server of the company that " -"developed that keyboard, do you? In fact, that use case was the reason " -"federated learning was invented in the first place." -msgstr "" -"**用户偏好**: 除了法规之外,在一些使用案例中,用户只是希望数据永远不会离开他" -"们的设备。如果你在手机的数字键盘上输入密码和信用卡信息,你不会希望这些密码最" -"终出现在开发该键盘的公司的服务器上吧?事实上,这种用例正是联邦学习发明的初" -"衷。" +#~ msgid "" +#~ "The code for the CNN is available" +#~ " under :code:`quickstart-pytorch.mnist` and " +#~ "it is reproduced below. It is the" +#~ " same network found in `Basic MNIST" +#~ " Example " +#~ "`_." +#~ msgstr "" +#~ "CNN 的代码可在 :code:`quickstart-pytorch.mnist` " +#~ "下找到,现复制如下。它与 `Basic MNIST Example " +#~ "`_中的网络相同。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:161 -msgid "" -"**Data volume**: Some sensors, like cameras, produce such a high data volume " -"that it is neither feasible nor economic to collect all the data (due to, " -"for example, bandwidth or communication efficiency). Think about a national " -"rail service with hundreds of train stations across the country. If each of " -"these train stations is outfitted with a number of security cameras, the " -"volume of raw on-device data they produce requires incredibly powerful and " -"exceedingly expensive infrastructure to process and store. And most of the " -"data isn't even useful." 
-msgstr "" -"**数据量**: 有些传感器(如摄像头)产生的数据量很大,收集所有数据既不可行,也" -"不经济(例如,由于带宽或通信效率的原因)。试想一下全国铁路服务,全国有数百个" -"火车站。如果每个火车站都安装了许多安全摄像头,那么它们所产生的大量原始设备数" -"据就需要功能强大且极其昂贵的基础设施来处理和存储。而大部分数据甚至都是无用" -"的。" +#~ msgid "" +#~ "The second thing to notice is that" +#~ " :code:`PytorchMNISTClient` class inherits from" +#~ " the :code:`fl.client.Client`, and hence it" +#~ " must implement the following methods:" +#~ msgstr "" +#~ "第二件要注意的事是 :code:`PytorchMNISTClient` 类继承自 " +#~ ":code:`fl.client.Client`,因此它必须实现以下方法:" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:164 -msgid "Examples where centralized machine learning does not work include:" -msgstr "集中式机器学习不起作用的例子包括:" +#~ msgid "" +#~ "When comparing the abstract class to " +#~ "its derived class :code:`PytorchMNISTClient` " +#~ "you will notice that :code:`fit` calls" +#~ " a :code:`train` function and that " +#~ ":code:`evaluate` calls a :code:`test`: " +#~ "function." +#~ msgstr "" +#~ "将抽象类与其派生类 :code:`PytorchMNISTClient` 进行比较时,您会发现 " +#~ ":code:`fit` 调用了一个 :code:`train` 函数,而 " +#~ ":code:`evaluate` 则调用了一个 :code:`test`: 函数。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:166 -msgid "" -"Sensitive healthcare records from multiple hospitals to train cancer " -"detection models" -msgstr "用多家医院的敏感医疗记录训练癌症检测模型" +#~ msgid "" +#~ "These functions can both be found " +#~ "inside the same :code:`quickstart-" +#~ "pytorch.mnist` module:" +#~ msgstr "这些函数都可以在同一个 :code:`quickstart-pytorch.mnist` 模块中找到:" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:167 -msgid "" -"Financial information from different organizations to detect financial fraud" -msgstr "不同组织的财务信息,以侦查财务欺诈行为" +#~ msgid "" +#~ "Observe that these functions encapsulate " +#~ "regular training and test loops and " +#~ "provide :code:`fit` and :code:`evaluate` with" +#~ " final statistics for each round. You" +#~ " could substitute them with your " +#~ "custom train and test loops and " +#~ "change the network architecture, and the" +#~ " entire example would still work " +#~ "flawlessly. As a matter of fact, " +#~ "why not try and modify the code" +#~ " to an example of your liking?" +#~ msgstr "" +#~ "请注意,这些函数封装了常规的训练和测试循环,并为 :code:`fit` 和 " +#~ ":code:`evaluate` " +#~ "提供了每轮的最终统计数据。您可以用自定义的训练和测试循环来替代它们,并改变网络结构,整个示例仍然可以完美运行。事实上,为什么不按照自己的喜好修改代码呢?" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:168 -msgid "Location data from your electric car to make better range prediction" -msgstr "通过电动汽车的定位数据更好地预测续航里程" +#~ msgid "Give It a Try" +#~ msgstr "试试看" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:169 -msgid "End-to-end encrypted messages to train better auto-complete models" -msgstr "端到端加密信息可训练出更好的自动完成模型" +#~ msgid "" +#~ "Looking through the quickstart code " +#~ "description above will have given a " +#~ "good understanding of how *clients* and" +#~ " *servers* work in Flower, how to " +#~ "run a simple experiment, and the " +#~ "internals of a client wrapper. Here " +#~ "are a few things you could try " +#~ "on your own and get more " +#~ "experience with Flower:" +#~ msgstr "" +#~ "通过上面的快速入门代码描述,你将对 Flower " +#~ "中*客户端*和*服务器*的工作方式、如何运行一个简单的实验以及客户端封装器的内部结构有一个很好的了解。您可以自己尝试以下内容,以获得更多使用" +#~ " Flower 的经验:" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:171 -msgid "" -"The popularity of privacy-enhancing systems like the `Brave `__ browser or the `Signal `__ messenger shows " -"that users care about privacy. 
In fact, they choose the privacy-enhancing " -"version over other alternatives, if such an alternative exists. But what can " -"we do to apply machine learning and data science to these cases to utilize " -"private data? After all, these are all areas that would benefit " -"significantly from recent advances in AI." -msgstr "" -"像 `Brave `__浏览器或 `Signal `__信" -"息管理器这样的隐私增强系统的流行表明,用户关心隐私。事实上,他们会选择隐私性" -"更好的产品。但是,我们能做些什么来将机器学习和数据科学应用到这些情况中,以利" -"用隐私数据呢?毕竟,这些领域都将从人工智能的最新进展中受益匪浅。" +#~ msgid "" +#~ "Try and change :code:`PytorchMNISTClient` so" +#~ " it can accept different architectures." +#~ msgstr "尝试修改 :code:`PytorchMNISTClient`,使其可以接受不同的架构。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:186 -msgid "Federated learning" -msgstr "联邦学习" +#~ msgid "" +#~ "Modify the :code:`train` function so " +#~ "that it accepts different optimizers" +#~ msgstr "修改 :code:`train` 函数,使其接受不同的优化器" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:188 -msgid "" -"Federated learning simply reverses this approach. It enables machine " -"learning on distributed data by moving the training to the data, instead of " -"moving the data to the training. Here's the single-sentence explanation:" -msgstr "" -"联邦学习简单地颠覆了这种方法。它通过将训练转移到数据上,而不是将数据转移到训" -"练上,在分布式数据上实现机器学习。下面是一句话的解释:" +#~ msgid "" +#~ "Modify the :code:`test` function so that" +#~ " it proves not only the top-1 " +#~ "(regular accuracy) but also the top-5" +#~ " accuracy?" +#~ msgstr "修改 :code:`test` 函数,使其不仅能输出前 1 名(常规精确度),还能证明前 5 名的精确度?" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:190 -msgid "Central machine learning: move the data to the computation" -msgstr "集中式机器学习:将数据转移到计算中心" +#~ msgid "" +#~ "Go larger! Try to adapt the code" +#~ " to larger images and datasets. Why" +#~ " not try training on ImageNet with" +#~ " a ResNet-50?" +#~ msgstr "让我们尝试让代码适应更大的图像和数据集。为什么不尝试使用 ResNet-50 在 ImageNet 上进行训练呢?" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:191 -msgid "Federated (machine) learning: move the computation to the data" -msgstr "联邦式(机器)学习:将计算转移到数据上" +#~ msgid "You are ready now. Enjoy learning in a federated way!" +#~ msgstr "您现在已经准备就绪。尽情享受联邦学习的乐趣吧!" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:193 -msgid "" -"By doing so, it enables us to use machine learning (and other data science " -"approaches) in areas where it wasn't possible before. We can now train " -"excellent medical AI models by enabling different hospitals to work " -"together. We can solve financial fraud by training AI models on the data of " -"different financial institutions. We can build novel privacy-enhancing " -"applications (such as secure messaging) that have better built-in AI than " -"their non-privacy-enhancing alternatives. And those are just a few of the " -"examples that come to mind. As we deploy federated learning, we discover " -"more and more areas that can suddenly be reinvented because they now have " -"access to vast amounts of previously inaccessible data." -msgstr "" -"这样,我们就能在以前不可能的领域使用机器学习(和其他数据科学方法)。现在,我" -"们可以通过让不同的医院协同工作来训练优秀的医疗人工智能模型。我们可以通过在不" -"同金融机构的数据上训练人工智能模型来解决金融欺诈问题。我们可以构建新颖的隐私" -"增强型应用(如安全信息),其内置的人工智能比非隐私增强型应用更好。以上只是我" -"想到的几个例子。随着联邦学习的部署,我们会发现越来越多的领域可以突然重获新" -"生,因为它们现在可以访问大量以前无法访问的数据。" +#~ msgid "Differential privacy" +#~ msgstr "差别隐私" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:196 -msgid "" -"So how does federated learning work, exactly? Let's start with an intuitive " -"explanation." 
-msgstr "那么,联邦学习究竟是如何运作的呢?让我们从直观的解释开始。" +#~ msgid "" +#~ "Flower provides differential privacy (DP) " +#~ "wrapper classes for the easy integration" +#~ " of the central DP guarantees " +#~ "provided by DP-FedAvg into training " +#~ "pipelines defined in any of the " +#~ "various ML frameworks that Flower is " +#~ "compatible with." +#~ msgstr "" +#~ "Flower 提供了差分隐私 (DP) 封装类,可将 DP-FedAvg " +#~ "提供的核心 DP 轻松集成到 Flower 兼容的各种 ML " +#~ "框架中定义的训练模式中。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:199 -msgid "Federated learning in five steps" -msgstr "联邦学习的五个步骤" +#~ msgid "" +#~ "Please note that these components are" +#~ " still experimental; the correct " +#~ "configuration of DP for a specific " +#~ "task is still an unsolved problem." +#~ msgstr "请注意,这些组件仍处于试验阶段,如何为特定任务正确配置 DP 仍是一个尚未解决的问题。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:202 -msgid "Step 0: Initialize global model" -msgstr "步骤 0:初始化全局模型" +#~ msgid "" +#~ "The name DP-FedAvg is misleading " +#~ "since it can be applied on top " +#~ "of any FL algorithm that conforms " +#~ "to the general structure prescribed by" +#~ " the FedOpt family of algorithms." +#~ msgstr "DP-FedAvg 这个名称容易引起误解,因为它可以应用于任何符合 FedOpt 系列算法规定的一般结构的 FL 算法之上。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:204 -msgid "" -"We start by initializing the model on the server. This is exactly the same " -"in classic centralized learning: we initialize the model parameters, either " -"randomly or from a previously saved checkpoint." -msgstr "" -"我们首先在服务器上初始化模型。这与经典的集中式学习完全相同:我们随机或从先前" -"保存的检查点初始化模型参数。" +#~ msgid "DP-FedAvg" +#~ msgstr "DP-FedAvg" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:210 -msgid "|5b1408eec0d746cdb91162a9107b6089|" -msgstr "" +#~ msgid "" +#~ "DP-FedAvg, originally proposed by " +#~ "McMahan et al. [mcmahan]_ and extended" +#~ " by Andrew et al. [andrew]_, is " +#~ "essentially FedAvg with the following " +#~ "modifications." +#~ msgstr "DP-FedAvg 最初由McMahan等人提出,并由Andrew等人加以扩展。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:307 -msgid "Initialize global model" -msgstr "初始化全局模型" +#~ msgid "" +#~ "**Clipping** : The influence of each " +#~ "client's update is bounded by clipping" +#~ " it. This is achieved by enforcing" +#~ " a cap on the L2 norm of " +#~ "the update, scaling it down if " +#~ "needed." +#~ msgstr "**裁剪** : 裁剪会影响到每个客户端的模型参数。具体做法是对参数的 L2 准则设置上限,必要时将其缩减。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:217 -msgid "" -"Step 1: Send model to a number of connected organizations/devices (client " -"nodes)" -msgstr "第 1 步:将模型发送到多个连接的组织/设备(客户节点)" +#~ msgid "" +#~ "**Noising** : Gaussian noise, calibrated " +#~ "to the clipping threshold, is added " +#~ "to the average computed at the " +#~ "server." +#~ msgstr "**噪声** : 在服务器计算出的平均值中加入高斯噪声,该噪声根据剪切阈值进行校准。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:219 -msgid "" -"Next, we send the parameters of the global model to the connected client " -"nodes (think: edge devices like smartphones or servers belonging to " -"organizations). This is to ensure that each participating node starts their " -"local training using the same model parameters. We often use only a few of " -"the connected nodes instead of all nodes. The reason for this is that " -"selecting more and more client nodes has diminishing returns." 
-msgstr "" -"接下来,我们会将全局模型的参数发送到连接的客户端节点(如智能手机等边缘设备或" -"企业的服务器)。这是为了确保每个参与节点都使用相同的模型参数开始本地训练。我" -"们通常只使用几个连接节点,而不是所有节点。这样做的原因是,选择越来越多的客户" -"端节点会导致收益递减。" +#~ msgid "" +#~ "The distribution of the update norm " +#~ "has been shown to vary from " +#~ "task-to-task and to evolve as " +#~ "training progresses. This variability is " +#~ "crucial in understanding its impact on" +#~ " differential privacy guarantees, emphasizing " +#~ "the need for an adaptive approach " +#~ "[andrew]_ that continuously adjusts the " +#~ "clipping threshold to track a " +#~ "prespecified quantile of the update norm" +#~ " distribution." +#~ msgstr "事实证明,参数更新准则的分布会随着任务的不同而变化,并随着训练的进展而演变。因此,我们采用了一种自适应方法,该方法会不断调整剪切阈值,以跟踪参数更新准则分布的预设量化值。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:225 -msgid "|aef19f4b122c4e8d9f4c57f99bcd5dd2|" -msgstr "" +#~ msgid "Simplifying Assumptions" +#~ msgstr "简化假设" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:309 -msgid "Send global model" -msgstr "发送全局模型" +#~ msgid "" +#~ "We make (and attempt to enforce) a" +#~ " number of assumptions that must be" +#~ " satisfied to ensure that the " +#~ "training process actually realizes the " +#~ ":math:`(\\epsilon, \\delta)` guarantees the " +#~ "user has in mind when configuring " +#~ "the setup." +#~ msgstr "" +#~ "我们提出(并试图执行)了一系列必须满足的假设,以确保训练过程真正实现用户在配置设置时所定的 " +#~ ":math:`(\\epsilon,\\delta)` 。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:232 -msgid "" -"Step 2: Train model locally on the data of each organization/device (client " -"node)" -msgstr "步骤 2:在本地对每个机构/设备(客户端节点)的数据进行模型训练" +#~ msgid "" +#~ "**Fixed-size subsampling** :Fixed-size " +#~ "subsamples of the clients must be " +#~ "taken at each round, as opposed to" +#~ " variable-sized Poisson subsamples." +#~ msgstr "** 固定大小的子样本** :与可变大小的泊松分布子样本相比,每轮必须抽取固定大小的客户端子样本。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:234 -msgid "" -"Now that all (selected) client nodes have the latest version of the global " -"model parameters, they start the local training. They use their own local " -"dataset to train their own local model. They don't train the model until " -"full convergence, but they only train for a little while. This could be as " -"little as one epoch on the local data, or even just a few steps (mini-" -"batches)." -msgstr "" -"现在,所有(选定的)客户端节点都有了最新版本的全局模型参数,它们开始进行本地" -"训练。它们使用自己的本地数据集来训练自己的本地模型。它们不会一直训练到模型完" -"全收敛为止,而只是训练一小段时间。这可能只是本地数据上的一个遍历,甚至只是几" -"个步骤(mini-batches)。" +#~ msgid "" +#~ "**Unweighted averaging** : The contributions" +#~ " from all the clients must weighted" +#~ " equally in the aggregate to " +#~ "eliminate the requirement for the server" +#~ " to know in advance the sum of" +#~ " the weights of all clients available" +#~ " for selection." +#~ msgstr "**非加权平均**: 所有客户端的贡献必须加权相等,这样服务器就不需要事先知道所有客户的权重总和。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:240 -msgid "|2881a86d8fc54ba29d96b29fc2819f4a|" -msgstr "" +#~ msgid "" +#~ "**No client failures** : The set " +#~ "of available clients must stay constant" +#~ " across all rounds of training. In" +#~ " other words, clients cannot drop out" +#~ " or fail." 
+#~ msgstr "**没有失败的客户端** : 在各轮训练中,可用客户端的数量必须保持不变。换句话说,客户端不能退出或失败。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:311 -msgid "Train on local data" -msgstr "根据本地数据进行训练" +#~ msgid "" +#~ "The first two are useful for " +#~ "eliminating a multitude of complications " +#~ "associated with calibrating the noise to" +#~ " the clipping threshold, while the " +#~ "third one is required to comply " +#~ "with the assumptions of the privacy " +#~ "analysis." +#~ msgstr "前两种方法有助于消除将噪声校准为削波阈值所带来的诸多复杂问题,而第三种方法则需要符合隐私分析的假设。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:247 -msgid "Step 3: Return model updates back to the server" -msgstr "步骤 3:将模型参数更新返回服务器" +#~ msgid "" +#~ "These restrictions are in line with " +#~ "constraints imposed by Andrew et al. " +#~ "[andrew]_." +#~ msgstr "这些限制与 Andrew 等人所施加的限制一致。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:249 -msgid "" -"After local training, each client node has a slightly different version of " -"the model parameters they originally received. The parameters are all " -"different because each client node has different examples in its local " -"dataset. The client nodes then send those model updates back to the server. " -"The model updates they send can either be the full model parameters or just " -"the gradients that were accumulated during local training." -msgstr "" -"经过本地训练后,每个客户节点最初收到的模型参数都会略有不同。参数之所以不同," -"是因为每个客户端节点的本地数据集中都有不同的数据。然后,客户端节点将这些模型" -"更新发回服务器。它们发送的模型更新既可以是完整的模型参数,也可以只是本地训练" -"过程中积累的梯度。" +#~ msgid "Customizable Responsibility for Noise injection" +#~ msgstr "可定制的噪声注入" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:255 -msgid "|ec1fe880237247e0975f52766775ab84|" -msgstr "" +#~ msgid "" +#~ "In contrast to other implementations " +#~ "where the addition of noise is " +#~ "performed at the server, you can " +#~ "configure the site of noise injection" +#~ " to better match your threat model." +#~ " We provide users with the " +#~ "flexibility to set up the training " +#~ "such that each client independently adds" +#~ " a small amount of noise to the" +#~ " clipped update, with the result that" +#~ " simply aggregating the noisy updates " +#~ "is equivalent to the explicit addition" +#~ " of noise to the non-noisy " +#~ "aggregate at the server." +#~ msgstr "与其他在服务器上添加噪声的实现方法不同,您可以配置噪声注入的位置,以便更好地匹配您的威胁模型。我们为用户提供了设置训练的灵活性,使每个客户端都能独立地为剪切参数更新添加少量噪声,这样,只需聚合噪声更新,就相当于在服务器上为非噪声聚合添加噪声了。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:313 -msgid "Send model updates" -msgstr "发送模型参数更新" +#~ msgid "" +#~ "To be precise, if we let :math:`m`" +#~ " be the number of clients sampled " +#~ "each round and :math:`\\sigma_\\Delta` be " +#~ "the scale of the total Gaussian " +#~ "noise that needs to be added to" +#~ " the sum of the model updates, " +#~ "we can use simple maths to show" +#~ " that this is equivalent to each " +#~ "client adding noise with scale " +#~ ":math:`\\sigma_\\Delta/\\sqrt{m}`." +#~ msgstr "" +#~ "准确地说,我们假设每轮采样的客户端数量为:math:`m`,:math:`\\sigma_\\Delta` " +#~ "为需要添加到模型更新总和中的总高斯噪声的规模,我们就可以用简单的数学方法证明了,这相当于每个客户端都添加了规模为 " +#~ ":math:`\\sigma_\\Delta/\\sqrt{m}` 的噪声。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:262 -msgid "Step 4: Aggregate model updates into a new global model" -msgstr "步骤 4:将模型更新聚合到新的全局模型中" +#~ msgid "Wrapper-based approach" +#~ msgstr "基于封装的方法" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:264 -msgid "" -"The server receives model updates from the selected client nodes. 
If it " -"selected 100 client nodes, it now has 100 slightly different versions of the " -"original global model, each trained on the local data of one client. But " -"didn't we want to have one model that contains the learnings from the data " -"of all 100 client nodes?" -msgstr "" -"服务器从选定的客户端节点接收模型更新。如果服务器选择了 100 个客户端节点,那么" -"它现在就拥有 100 个略有不同的原始全局模型版本,每个版本都是根据一个客户端的本" -"地数据训练出来的。难道我们不希望有一个包含所有 100 个客户节点数据的模型吗?" +#~ msgid "" +#~ "Introducing DP to an existing workload" +#~ " can be thought of as adding an" +#~ " extra layer of security around it." +#~ " This inspired us to provide the " +#~ "additional server and client-side logic" +#~ " needed to make the training process" +#~ " differentially private as wrappers for " +#~ "instances of the :code:`Strategy` and " +#~ ":code:`NumPyClient` abstract classes respectively." +#~ " This wrapper-based approach has the" +#~ " advantage of being easily composable " +#~ "with other wrappers that someone might" +#~ " contribute to the Flower library in" +#~ " the future, e.g., for secure " +#~ "aggregation. Using Inheritance instead can " +#~ "be tedious because that would require" +#~ " the creation of new sub- classes " +#~ "every time a new class implementing " +#~ ":code:`Strategy` or :code:`NumPyClient` is " +#~ "defined." +#~ msgstr "" +#~ "在现有工作负载中引入 DP " +#~ "可以被认为是在其周围增加了一层额外的安全性。受此启发,我们提供了额外的服务器端和客户端逻辑,分别作为 " +#~ ":code:`Strategy` 和 :code:`NumPyClient` " +#~ "抽象类实例的封装器,使训练过程具有不同的隐私性。这种基于封装器的方法的优点是可以很容易地与将来有人贡献给 Flower " +#~ "的其他封装器(例如用于安全聚合的封装器)进行组合。使用继承可能会比较繁琐,因为每次定义实现 :code:`Strategy`" +#~ " 或 :code:`NumPyClient` 的新类时,都需要创建新的子类。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:266 -msgid "" -"In order to get one single model, we have to combine all the model updates " -"we received from the client nodes. This process is called *aggregation*, and " -"there are many different ways to do it. The most basic way to do it is " -"called *Federated Averaging* (`McMahan et al., 2016 `__), often abbreviated as *FedAvg*. *FedAvg* takes the 100 " -"model updates and, as the name suggests, averages them. To be more precise, " -"it takes the *weighted average* of the model updates, weighted by the number " -"of examples each client used for training. The weighting is important to " -"make sure that each data example has the same \"influence\" on the resulting " -"global model. If one client has 10 examples, and another client has 100 " -"examples, then - without weighting - each of the 10 examples would influence " -"the global model ten times as much as each of the 100 examples." -msgstr "" -"为了得到一个单一的模型,我们必须将从客户端节点收到的所有模型更新合并起来。这" -"个过程称为*聚合*,有许多不同的方法。最基本的方法称为 *Federated Averaging* " -"(`McMahan等人,2016 `__),通常缩写为" -"*FedAvg*。*FedAvg* 可以把100 个模型更新进行平均。更准确地说,它取的是模型更新" -"的*加权平均值*,根据每个客户端用于训练的数据数量进行加权。加权对于确保每个数" -"据示例对生成的全局模型具有相同的 \"影响 \"非常重要。如果一个客户端有 10 个数" -"据点,而另一个客户有 100 个数据点,那么在不加权的情况下,10 个示例对全局模型" -"的影响是 100 个示例的 10 倍。" +#~ msgid "" +#~ "The first version of our solution " +#~ "was to define a decorator whose " +#~ "constructor accepted, among other things, " +#~ "a boolean-valued variable indicating " +#~ "whether adaptive clipping was to be " +#~ "enabled or not. We quickly realized " +#~ "that this would clutter its " +#~ ":code:`__init__()` function with variables " +#~ "corresponding to hyperparameters of adaptive" +#~ " clipping that would remain unused " +#~ "when it was disabled. 
A cleaner " +#~ "implementation could be achieved by " +#~ "splitting the functionality into two " +#~ "decorators, :code:`DPFedAvgFixed` and " +#~ ":code:`DPFedAvgAdaptive`, with the latter sub-" +#~ " classing the former. The constructors " +#~ "for both classes accept a boolean " +#~ "parameter :code:`server_side_noising`, which, as " +#~ "the name suggests, determines where " +#~ "noising is to be performed." +#~ msgstr "" +#~ "我们的第一版解决方案是定义一个装饰器,其构造函数接受一个布尔值变量,表示是否启用自适应剪裁。我们很快意识到,这样会使其 " +#~ ":code:`__init__()` " +#~ "函数中与自适应裁剪超参数相对应的变量变得杂乱无章,而这些变量在自适应裁剪被禁用时将保持未使用状态。要实现更简洁的功能,可以将该功能拆分为两个装饰器,即" +#~ " :code:`DPFedAvgFixed` 和 " +#~ ":code:`DPFedAvgAdaptive`,后者是前者的子类。这两个类的构造函数都接受一个布尔参数 " +#~ ":code:`server_side_noising`,顾名思义,它决定了在哪里加噪声。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:273 -msgid "|9fdf048ed58d4467b2718cdf4aaf1ec3|" -msgstr "" +#~ msgid "" +#~ "The server-side capabilities required " +#~ "for the original version of DP-" +#~ "FedAvg, i.e., the one which performed" +#~ " fixed clipping, can be completely " +#~ "captured with the help of wrapper " +#~ "logic for just the following two " +#~ "methods of the :code:`Strategy` abstract " +#~ "class." +#~ msgstr "" +#~ "只需对 :code:`Strategy` 抽象类的以下两个方法进行封装,就能完全捕获 DP-" +#~ "FedAvg 原始版本(即执行固定剪裁的版本)所需的服务器端功能。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:315 -msgid "Aggregate model updates" -msgstr "聚合模型参数更新" +#~ msgid "" +#~ ":code:`configure_fit()` : The config " +#~ "dictionary being sent by the wrapped " +#~ ":code:`Strategy` to each client needs to" +#~ " be augmented with an additional " +#~ "value equal to the clipping threshold" +#~ " (keyed under :code:`dpfedavg_clip_norm`) and," +#~ " if :code:`server_side_noising=true`, another one" +#~ " equal to the scale of the " +#~ "Gaussian noise that needs to be " +#~ "added at the client (keyed under " +#~ ":code:`dpfedavg_noise_stddev`). This entails " +#~ "*post*-processing of the results returned " +#~ "by the wrappee's implementation of " +#~ ":code:`configure_fit()`." +#~ msgstr "" +#~ ":code:`configure_fit()` :由封装的 :code:`Strategy` " +#~ "发送到每个客户端的配置字典需要使用等于裁剪阈值的附加值(在 :code:`dpfedavg_clip_norm` " +#~ "下键入)进行扩充。并且,如果 " +#~ "server_side_noising=true,则另一个值等于需要在客户端添加的高斯噪声的大小(在 " +#~ "dpfedavg_noise_stddev 下键入)。这需要对封装后的configure_fit() " +#~ "所返回的结果进行后处理。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:280 -msgid "Step 5: Repeat steps 1 to 4 until the model converges" -msgstr "步骤 5:重复步骤 1 至 4,直至模型收敛" +#~ msgid "" +#~ ":code:`aggregate_fit()`: We check whether any" +#~ " of the sampled clients dropped out" +#~ " or failed to upload an update " +#~ "before the round timed out. In " +#~ "that case, we need to abort the" +#~ " current round, discarding any successful" +#~ " updates that were received, and move" +#~ " on to the next one. On the " +#~ "other hand, if all clients responded " +#~ "successfully, we must force the " +#~ "averaging of the updates to happen " +#~ "in an unweighted manner by intercepting" +#~ " the :code:`parameters` field of " +#~ ":code:`FitRes` for each received update " +#~ "and setting it to 1. Furthermore, " +#~ "if :code:`server_side_noising=true`, each update " +#~ "is perturbed with an amount of " +#~ "noise equal to what it would have" +#~ " been subjected to had client-side" +#~ " noising being enabled. This entails " +#~ "*pre*-processing of the arguments to " +#~ "this method before passing them on " +#~ "to the wrappee's implementation of " +#~ ":code:`aggregate_fit()`." 
+#~ msgstr "" +#~ ":code:`aggregate_fit()`: " +#~ "我们会检查是否有任何客户端在本轮超时前退出或未能上传参数更新。在这种情况下,我们需要中止当前一轮,丢弃已收到的所有参数更新,然后继续下一轮。另一方面,如果所有客户端都成功响应,我们就必须通过拦截" +#~ " :code:`FitRes` 的 :code:`parameters` 字段并将其设置为 " +#~ "1,强制以不加权的方式平均更新。此外,如果 " +#~ ":code:`server_side_noising=true`,每次更新都会受到一定量的噪声扰动,其扰动量相当于启用客户端噪声时的扰动量。" +#~ " 这就需要在将本方法的参数传递给封装的 :code:`aggregate_fit()` " +#~ "之前,对参数进行*预*处理。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:282 -msgid "" -"Steps 1 to 4 are what we call a single round of federated learning. The " -"global model parameters get sent to the participating client nodes (step 1), " -"the client nodes train on their local data (step 2), they send their updated " -"models to the server (step 3), and the server then aggregates the model " -"updates to get a new version of the global model (step 4)." -msgstr "" -"步骤 1 至 4 就是我们所说的单轮联邦学习。全局模型参数被发送到参与的客户端节点" -"(第 1 步),客户端节点对其本地数据进行训练(第 2 步),然后将更新后的模型发" -"送到服务器(第 3 步),服务器汇总模型更新,得到新版本的全局模型(第 4 步)。" +#~ msgid "" +#~ "We can't directly change the aggregation" +#~ " function of the wrapped strategy to" +#~ " force it to add noise to the" +#~ " aggregate, hence we simulate client-" +#~ "side noising to implement server-side" +#~ " noising." +#~ msgstr "我们无法直接改变封装策略的聚合函数,迫使它在聚合中添加噪声,因此我们模拟客户端噪声来实现服务器端噪声。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:284 -msgid "" -"During a single round, each client node that participates in that iteration " -"only trains for a little while. This means that after the aggregation step " -"(step 4), we have a model that has been trained on all the data of all " -"participating client nodes, but only for a little while. We then have to " -"repeat this training process over and over again to eventually arrive at a " -"fully trained model that performs well across the data of all client nodes." -msgstr "" -"在一轮迭代中,每个参与迭代的客户节点只训练一小段时间。这意味着,在聚合步骤" -"(步骤 4)之后,我们的模型已经在所有参与的客户节点的所有数据上训练过了,但只" -"训练了一小会儿。然后,我们必须一次又一次地重复这一训练过程,最终得到一个经过" -"全面训练的模型,该模型在所有客户节点的数据中都表现良好。" +#~ msgid "" +#~ "These changes have been put together " +#~ "into a class called :code:`DPFedAvgFixed`, " +#~ "whose constructor accepts the strategy " +#~ "being decorated, the clipping threshold " +#~ "and the number of clients sampled " +#~ "every round as compulsory arguments. The" +#~ " user is expected to specify the " +#~ "clipping threshold since the order of" +#~ " magnitude of the update norms is " +#~ "highly dependent on the model being " +#~ "trained and providing a default value" +#~ " would be misleading. The number of" +#~ " clients sampled at every round is" +#~ " required to calculate the amount of" +#~ " noise that must be added to " +#~ "each individual update, either by the" +#~ " server or the clients." +#~ msgstr "" +#~ "这些变化被整合到一个名为 :code:`DPFedAvgFixed` " +#~ "的类中,其构造函数接受被装饰的策略、剪切阈值和每轮采样的客户数作为必选参数。用户需要指定剪切阈值,因为参数更新规范的数量级在很大程度上取决于正在训练的模型,提供默认值会产生误导。每轮采样的客户端数量是计算服务器或客户在每次参数更新时添加的噪音量所必需的。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:289 -msgid "" -"Congratulations, you now understand the basics of federated learning. " -"There's a lot more to discuss, of course, but that was federated learning in " -"a nutshell. In later parts of this tutorial, we will go into more detail. " -"Interesting questions include: How can we select the best client nodes that " -"should participate in the next round? What's the best way to aggregate model " -"updates? How can we handle failing client nodes (stragglers)?" 
-msgstr "" -"恭喜您,现在您已经了解了联邦学习的基础知识。当然,要讨论的内容还有很多,但这" -"只是联邦学习的一个缩影。在本教程的后半部分,我们将进行更详细的介绍。有趣的问" -"题包括 我们如何选择最好的客户端节点参与下一轮学习?聚合模型更新的最佳方法是什" -"么?如何处理失败的客户端节点(落伍者)?" +#~ msgid "" +#~ "The additional functionality required to " +#~ "facilitate adaptive clipping has been " +#~ "provided in :code:`DPFedAvgAdaptive`, a " +#~ "subclass of :code:`DPFedAvgFixed`. It " +#~ "overrides the above-mentioned methods to" +#~ " do the following." +#~ msgstr "" +#~ "自适应剪裁所需的附加功能在 :code:`DPFedAvgAdaptive` 中提供,其是 " +#~ ":code:`DPFedAvgFixed` 的子类。它重写了上述方法,以实现以下功能。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:294 -msgid "" -"Just like we can train a model on the decentralized data of different client " -"nodes, we can also evaluate the model on that data to receive valuable " -"metrics. This is called federated evaluation, sometimes abbreviated as FE. " -"In fact, federated evaluation is an integral part of most federated learning " -"systems." -msgstr "" -"就像我们可以在不同客户节点的分散数据上训练一个模型一样,我们也可以在这些数据" -"上对模型进行评估,以获得有价值的指标。这就是所谓的联邦评估,有时简称为 FE。事" -"实上,联邦评估是大多数联邦学习系统不可或缺的一部分。" +#~ msgid "" +#~ ":code:`configure_fit()` : It intercepts the" +#~ " config dict returned by " +#~ ":code:`super.configure_fit()` to add the " +#~ "key-value pair " +#~ ":code:`dpfedavg_adaptive_clip_enabled:True` to it, " +#~ "which the client interprets as an " +#~ "instruction to include an indicator bit" +#~ " (1 if update norm <= clipping " +#~ "threshold, 0 otherwise) in the results" +#~ " returned by it." +#~ msgstr "" +#~ ":code:`configure_fit()`:它截取由 :code:`super.configure_fit()` " +#~ "返回的 config 字典,并在其中添加键-值对 " +#~ ":code:`dpfedavg_adaptive_clip_enabled:True\",客户端将其解释为在返回结果中包含一个指示位(如果参数更新范式" +#~ " <= 剪裁阈值,则为 1,否则为 0)的指令。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:297 -msgid "Federated analytics" -msgstr "联邦分析" +#~ msgid "" +#~ ":code:`aggregate_fit()` : It follows a " +#~ "call to :code:`super.aggregate_fit()` with one" +#~ " to :code:`__update_clip_norm__()`, a procedure" +#~ " which adjusts the clipping threshold " +#~ "on the basis of the indicator bits" +#~ " received from the sampled clients." +#~ msgstr ":code:`aggregate_fit()`:在调用:code:`super.aggregate_fit()`后,再调用:code:`__update_clip_norm__()`,该过程根据从采样客户端接收到的指示位调整裁剪阈值。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:299 -msgid "" -"In many cases, machine learning isn't necessary to derive value from data. " -"Data analysis can yield valuable insights, but again, there's often not " -"enough data to get a clear answer. What's the average age at which people " -"develop a certain type of health condition? Federated analytics enables such " -"queries over multiple client nodes. It is usually used in conjunction with " -"other privacy-enhancing technologies like secure aggregation to prevent the " -"server from seeing the results submitted by individual client nodes." -msgstr "" -"在很多情况下,机器学习并不是从数据中获取价值的必要条件。数据分析可以产生有价" -"值的见解,但同样,往往没有足够的数据来获得明确的答案。人们患某种健康疾病的平" -"均年龄是多少?联邦分析可以通过多个客户端节点进行此类查询。它通常与安全聚合等" -"其他隐私增强技术结合使用,以防止服务器看到单个客户端节点提交的结果。" +#~ msgid "" +#~ "The client-side capabilities required " +#~ "can be completely captured through " +#~ "wrapper logic for just the :code:`fit()`" +#~ " method of the :code:`NumPyClient` abstract" +#~ " class. To be precise, we need " +#~ "to *post-process* the update computed" +#~ " by the wrapped client to clip " +#~ "it, if necessary, to the threshold " +#~ "value supplied by the server as " +#~ "part of the config dictionary. 
In " +#~ "addition to this, it may need to" +#~ " perform some extra work if either" +#~ " (or both) of the following keys " +#~ "are also present in the dict." +#~ msgstr "" +#~ "客户端所需的功能完全可以通过 :code:`NumPyClient` 抽象类的 " +#~ ":code:`fit()` " +#~ "方法的封装逻辑来实现。准确地说,我们需要对封装客户端计算的参数更新进行处理,以便在必要时将其剪切到服务器作为配置字典的一部分提供的阈值。除此之外,如果配置字典中还存在以下任一(或两个)键,客户端可能还需要执行一些额外的工作。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:305 -msgid "" -"Differential privacy (DP) is often mentioned in the context of Federated " -"Learning. It is a privacy-preserving method used when analyzing and sharing " -"statistical data, ensuring the privacy of individual participants. DP " -"achieves this by adding statistical noise to the model updates, ensuring any " -"individual participants’ information cannot be distinguished or re-" -"identified. This technique can be considered an optimization that provides a " -"quantifiable privacy protection measure." -msgstr "" -"差分隐私(DP)经常在联邦学习中被提及。这是一种在分析和共享统计数据时使用的隐" -"私保护方法,可确保单个参与者的隐私。DP 通过在模型更新中添加统计噪声来实现这一" -"目的,确保任何个体参与者的信息都无法被区分或重新识别。这种技术可被视为一种优" -"化,提供了一种可量化的隐私保护措施。" +#~ msgid "" +#~ ":code:`dpfedavg_noise_stddev` : Generate and " +#~ "add the specified amount of noise " +#~ "to the clipped update." +#~ msgstr "code:`dpfedavg_noise_stddev`:生成并在剪切参数更新中添加指定数量的噪声。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:326 -msgid "Flower" -msgstr "Flower" +#~ msgid "" +#~ ":code:`dpfedavg_adaptive_clip_enabled` : Augment the" +#~ " metrics dict in the :code:`FitRes` " +#~ "object being returned to the server " +#~ "with an indicator bit, calculated as " +#~ "described earlier." +#~ msgstr "" +#~ ":code:`dpfedavg_adaptive_clip_enabled`:在返回给服务器的 :code:`FitRes`" +#~ " 对象中的度量值字典中增加一个指标位,计算方法如前所述。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:328 -msgid "" -"Federated learning, federated evaluation, and federated analytics require " -"infrastructure to move machine learning models back and forth, train and " -"evaluate them on local data, and then aggregate the updated models. Flower " -"provides the infrastructure to do exactly that in an easy, scalable, and " -"secure way. In short, Flower presents a unified approach to federated " -"learning, analytics, and evaluation. It allows the user to federate any " -"workload, any ML framework, and any programming language." -msgstr "" -"联邦学习、联邦评估和联邦分析需要基础框架来来回移动机器学习模型,在本地数据上" -"对其进行训练和评估,然后汇总更新的模型。Flower 提供的基础架构正是以简单、可扩" -"展和安全的方式实现这些目标的。简而言之,Flower 为联邦学习、分析和评估提供了一" -"种统一的方法。它允许用户联邦化任何工作负载、任何 ML 框架和任何编程语言。" +#~ msgid "Performing the :math:`(\\epsilon, \\delta)` analysis" +#~ msgstr "进行 :math:`(epsilon, \\delta)` 分析" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:334 -msgid "|ff726bc5505e432388ee2fdd6ef420b9|" -msgstr "" +#~ msgid "" +#~ "Assume you have trained for :math:`n`" +#~ " rounds with sampling fraction :math:`q`" +#~ " and noise multiplier :math:`z`. In " +#~ "order to calculate the :math:`\\epsilon` " +#~ "value this would result in for a" +#~ " particular :math:`\\delta`, the following " +#~ "script may be used." +#~ msgstr "" +#~ "假设您已经训练了 :math:`n` 轮,采样比例为 :math:`q`,噪声乘数为 " +#~ ":math:`z`。为了计算特定 :math:`\\delta` 的 :math:`epsilon`" +#~ " 值,可以使用下面的脚本。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:340 -msgid "" -"Flower federated learning server and client nodes (car, scooter, personal " -"computer, roomba, and phone)" -msgstr "" -"Flower联邦学习服务器和客户端节点(汽车、滑板车、个人电脑、roomba 和电话)" +#~ msgid "Flower driver SDK." 
+#~ msgstr "Flower 服务器。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:353 -msgid "" -"Congratulations, you just learned the basics of federated learning and how " -"it relates to the classic (centralized) machine learning!" -msgstr "" -"恭喜您,您刚刚了解了联邦学习的基础知识,以及它与传统(集中式)机器学习的关" -"系!" +#~ msgid "driver" +#~ msgstr "服务器" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:355 -msgid "" -"In the next part of this tutorial, we are going to build a first federated " -"learning system with Flower." -msgstr "在本教程的下一部分,我们将用 Flower 建立第一个联邦学习系统。" +#~ msgid "Get task results." +#~ msgstr "汇总训练结果。" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:373 -msgid "" -"The `Flower Federated Learning Tutorial - Part 1 `__ shows how to " -"build a simple federated learning system with PyTorch and Flower." -msgstr "" -"`Flower 联邦学习教程 - 第 1 部分 `__ 展示了如何使用 PyTorch 和 Flower 构" -"建一个简单的联邦学习系统。" +#~ msgid "Request for run ID." +#~ msgstr "Flower 基线申请" -#~ msgid "Before the release" -#~ msgstr "发布前" +#~ msgid "Get client IDs." +#~ msgstr "返回客户端(本身)。" #~ msgid "" -#~ "Update the changelog (``changelog.md``) with all relevant changes that " -#~ "happened after the last release. If the last release was tagged " -#~ "``v1.2.0``, you can use the following URL to see all commits that got " -#~ "merged into ``main`` since then:" +#~ "Flower usage examples used to be " +#~ "bundled with Flower in a package " +#~ "called ``flwr_example``. We are migrating " +#~ "those examples to standalone projects to" +#~ " make them easier to use. All " +#~ "new examples are based in the " +#~ "directory `examples " +#~ "`_." #~ msgstr "" -#~ "更新更新日志 (``changelog.md``),加入上次发布后发生的所有相关变更。如果上" -#~ "次发布的版本被标记为 ``v1.2.0``,则可以使用以下 URL 查看此后合并到 " -#~ "``main`` 的所有提交:" +#~ "Flower 的使用示例曾与 Flower 捆绑在一个名为 ``flwr_example``" +#~ " 的软件包中。我们正在将这些示例迁移到独立项目中,以使它们更易于使用。所有新示例都位于目录 `examples " +#~ "`_。" + +#~ msgid "Quickstart TensorFlow/Keras" +#~ msgstr "快速入门 TensorFlow/Keras" + +#~ msgid "Legacy Examples (`flwr_example`)" +#~ msgstr "传统示例 (`flwr_example`)" #~ msgid "" -#~ "`GitHub: Compare v1.2.0...main `_" +#~ "The useage examples in `flwr_example` " +#~ "are deprecated and will be removed " +#~ "in the future. New examples are " +#~ "provided as standalone projects in " +#~ "`examples `_." #~ msgstr "" -#~ "`GitHub: Compare v1.2.0...main `_" +#~ "在 `flwr_example` 中的使用示例已被弃用,今后将被移除。新示例将作为独立项目在 " +#~ "`examples `_" +#~ " 中提供。" + +#~ msgid "Extra Dependencies" +#~ msgstr "额外依赖" #~ msgid "" -#~ "Thank the authors who contributed since the last release. This can be " -#~ "done by running the ``./dev/add-shortlog.sh`` convenience script (it can " -#~ "be ran multiple times and will update the names in the list if new " -#~ "contributors were added in the meantime)." +#~ "The core Flower framework keeps a " +#~ "minimal set of dependencies. The " +#~ "examples demonstrate Flower in the " +#~ "context of different machine learning " +#~ "frameworks, so additional dependencies need" +#~ " to be installed before an example" +#~ " can be run." 
#~ msgstr "" -#~ "感谢自上次发布以来做出贡献的作者。可以通过运行 ``./dev/add-shortlog.sh`` " -#~ "方便脚本来完成(可以多次运行,如果在此期间有新的贡献者加入,则会更新列表中" -#~ "的名字)。" +#~ "Flower 核心框架只保留了最低限度的依赖项。这些示例在不同机器学习框架的背景下演示了 " +#~ "Flower,因此在运行示例之前需要安装额外的依赖项。" + +#~ msgid "For PyTorch examples::" +#~ msgstr "PyTorch 示例::" + +#~ msgid "For TensorFlow examples::" +#~ msgstr "TensorFlow 示例::" + +#~ msgid "For both PyTorch and TensorFlow examples::" +#~ msgstr "PyTorch 和 TensorFlow 示例::" #~ msgid "" -#~ "Update the ``changelog.md`` section header ``Unreleased`` to contain the " -#~ "version number and date for the release you are building. Create a pull " -#~ "request with the change." +#~ "Please consult :code:`pyproject.toml` for a" +#~ " full list of possible extras " +#~ "(section :code:`[tool.poetry.extras]`)." #~ msgstr "" -#~ "更新 ``changelog.md`` 部分的标题 ``Unreleased`` 以包含你正在构建的版本的版" -#~ "本号和日期。创建一个包含更改的拉取请求。" +#~ "请参阅 :code:`pyproject.toml`,了解可能的 extras 的完整列表(章节 " +#~ ":code:`[tool.poems.extras]`)。" + +#~ msgid "PyTorch Examples" +#~ msgstr "PyTorch 示例" + +#~ msgid "" +#~ "Our PyTorch examples are based on " +#~ "PyTorch 1.7. They should work with " +#~ "other releases as well. So far, we" +#~ " provide the following examples." +#~ msgstr "我们的 PyTorch 示例基于 PyTorch 1.7。它们应该也能在其他版本中使用。到目前为止,我们提供了以下示例。" + +#~ msgid "CIFAR-10 Image Classification" +#~ msgstr "CIFAR-10 图像分类" #~ msgid "" -#~ "Second, create a virtual environment (and activate it). If you chose to " -#~ "use :code:`pyenv` (with the :code:`pyenv-virtualenv` plugin) and already " -#~ "have it installed , you can use the following convenience script (by " -#~ "default it will use :code:`Python 3.8.17`, but you can change it by " -#~ "providing a specific :code:``)::" +#~ "`CIFAR-10 and CIFAR-100 " +#~ "`_ are " +#~ "popular RGB image datasets. The Flower" +#~ " CIFAR-10 example uses PyTorch to " +#~ "train a simple CNN classifier in a" +#~ " federated learning setup with two " +#~ "clients." #~ msgstr "" -#~ "其次,创建虚拟环境(并激活它)。如果您选择使用 :code:`pyenv`(使用 :code:" -#~ "`pyenv-virtualenv`插件),并且已经安装了该插件,则可以使用下面的便捷脚本" -#~ "(默认情况下使用 :code:`Python3.8.17`,但您可以通过提供特定的 :code:`<版本" -#~ ">`来更改)::" +#~ "CIFAR-10 和 CIFAR-100 " +#~ "``_ 是流行的 RGB" +#~ " 图像数据集。Flower CIFAR-10 示例使用 PyTorch " +#~ "在有两个客户端的联邦学习设置中训练一个简单的 CNN 分类器。" -#~ msgid "flwr (Python API reference)" -#~ msgstr "flwr(Python API 参考)" +#~ msgid "First, start a Flower server:" +#~ msgstr "首先,启动 Flower 服务器:" -#~ msgid "..." -#~ msgstr "..." +#~ msgid "$ ./src/py/flwr_example/pytorch_cifar/run-server.sh" +#~ msgstr "$ ./src/py/flwr_example/pytorch_cifar/run-server.sh" -#~ msgid "Starting a client with an insecure server connection:" -#~ msgstr "使用不安全的服务器连接启动客户端:" +#~ msgid "Then, start the two clients in a new terminal window:" +#~ msgstr "然后,在新的终端窗口中启动两个客户端:" -#~ msgid "server.strategy.FedAvg" -#~ msgstr "server.strategy.FedAvg" +#~ msgid "$ ./src/py/flwr_example/pytorch_cifar/run-clients.sh" +#~ msgstr "$ ./src/py/flwr_example/pytorch_cifar/run-clients.sh" -#~ msgid "server.strategy.FedAvgM" -#~ msgstr "server.strategy.FedAvgM" +#~ msgid "For more details, see :code:`src/py/flwr_example/pytorch_cifar`." +#~ msgstr "更多详情,请参阅 :code:`src/py/flwr_example/pytorch_cifar`。" -#~ msgid "Configurable FedAvg with Momentum strategy implementation." -#~ msgstr "可配置的 FedAvg 动量策略实施。" +#~ msgid "ImageNet-2012 Image Classification" +#~ msgstr "ImageNet-2012 图像分类" -#~ msgid "Fraction of clients used during training. Defaults to 0.1." 
-#~ msgstr "训练期间使用客户的比例。默认为 0.1。" +#~ msgid "" +#~ "`ImageNet-2012 `_ is " +#~ "one of the major computer vision " +#~ "datasets. The Flower ImageNet example " +#~ "uses PyTorch to train a ResNet-18 " +#~ "classifier in a federated learning setup" +#~ " with ten clients." +#~ msgstr "" +#~ "ImageNet-2012 `_ " +#~ "是主要的计算机视觉数据集之一。Flower ImageNet 示例使用 PyTorch " +#~ "在有十个客户端的联邦学习设置中训练 ResNet-18 分类器。" -#~ msgid "Fraction of clients used during validation. Defaults to 0.1." -#~ msgstr "验证过程中使用的客户端比例。默认为 0.1。" +#~ msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-server.sh" +#~ msgstr "$ ./src/py/flwr_example/pytorch_imagenet/run-server.sh" -#~ msgid "server.strategy.FedMedian" -#~ msgstr "server.strategy.FedMedian" +#~ msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-clients.sh" +#~ msgstr "$ ./src/py/flwr_example/pytorch_imagenet/run-clients.sh" -#~ msgid "server.strategy.QFedAvg" -#~ msgstr "server.strategy.QFedAvg" +#~ msgid "For more details, see :code:`src/py/flwr_example/pytorch_imagenet`." +#~ msgstr "更多详情,请参阅 :code:`src/py/flwr_example/pytorch_imagenet`。" -#~ msgid "server.strategy.FedOpt" -#~ msgstr "server.strategy.FedOpt" +#~ msgid "TensorFlow Examples" +#~ msgstr "TensorFlow 示例" -#~ msgid "Configurable FedAdagrad strategy implementation." -#~ msgstr "可配置的 FedAdagrad 策略实施。" +#~ msgid "" +#~ "Our TensorFlow examples are based on " +#~ "TensorFlow 2.0 or newer. So far, " +#~ "we provide the following examples." +#~ msgstr "我们的 TensorFlow 示例基于 TensorFlow 2.0 或更新版本。到目前为止,我们提供了以下示例。" -#~ msgid "Federated Optim strategy interface." -#~ msgstr "Federated Optim 策略界面。" +#~ msgid "Fashion-MNIST Image Classification" +#~ msgstr "Fashion-MNIST 图像分类" -#~ msgid "server.strategy.FedProx" -#~ msgstr "server.strategy.FedProx" +#~ msgid "" +#~ "`Fashion-MNIST `_ is often used as " +#~ "the \"Hello, world!\" of machine " +#~ "learning. We follow this tradition and" +#~ " provide an example which samples " +#~ "random local datasets from Fashion-MNIST" +#~ " and trains a simple image " +#~ "classification model over those partitions." +#~ msgstr "" +#~ "`Fashion-MNIST `_ 经常被用作机器学习的 \"你好,世界!\"。我们遵循这一传统" +#~ ",提供了一个从Fashion-MNIST 中随机抽样本地数据集的示例,并在这些分区上训练一个简单的图像分类模型。" -#~ msgid "Configurable FedProx strategy implementation." -#~ msgstr "可配置的 FedProx 策略实施。" +#~ msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-server.sh" +#~ msgstr "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-server.sh" + +#~ msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-clients.sh" +#~ msgstr "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-clients.sh" + +#~ msgid "" +#~ "For more details, see " +#~ ":code:`src/py/flwr_example/tensorflow_fashion_mnist`." +#~ msgstr "更多详情,请参阅 :code:`src/py/flwr_example/tensorflow_fashion_mnist`。" + +#~ msgid "``BASE_IMAGE_TAG``" +#~ msgstr "基本图像标签" -#~ msgid "server.strategy.FedAdagrad" -#~ msgstr "server.strategy.FedAdagrad" +#~ msgid "The image tag of the base image." +#~ msgstr "基础图像的图像标记。" -#~ msgid "Paper: https://arxiv.org/abs/2003.00295" -#~ msgstr "论文: https://arxiv.org/abs/2003.00295" +#~ msgid "" +#~ "It is important to follow the " +#~ "instructions described in comments. For " +#~ "instance, in order to not break " +#~ "how our changelog system works, you " +#~ "should read the information above the" +#~ " ``Changelog entry`` section carefully. You" +#~ " can also checkout some examples and" +#~ " details in the :ref:`changelogentry` " +#~ "appendix." 
+#~ msgstr "" +#~ "请务必遵守注释中的说明。例如,为了不破坏我们的更新日志系统,你应该仔细阅读\"`更新日志条目``\"部分上面的信息。您还可以查看 " +#~ ":ref:`changelogentry` 附录中的一些示例和细节。" -#~ msgid "Federated learning strategy using Adagrad on server-side." -#~ msgstr "在服务器端使用 Adagrad 的联邦学习策略。" +#~ msgid "Open a PR (as shown above)" +#~ msgstr "打开 PR(如上图所示)" -#~ msgid "server.strategy.FedAdam" -#~ msgstr "server.strategy.FedAdam" +#~ msgid "How to write a good PR title" +#~ msgstr "如何撰写好的公关标题" -#~ msgid "server.strategy.FedYogi" -#~ msgstr "server.strategy.FedYogi" +#~ msgid "" +#~ "A well-crafted PR title helps team" +#~ " members quickly understand the purpose " +#~ "and scope of the changes being " +#~ "proposed. Here's a guide to help " +#~ "you write a good GitHub PR title:" +#~ msgstr "一个精心撰写的公关标题能帮助团队成员迅速了解所提修改的目的和范围。以下指南可帮助您撰写一个好的 GitHub PR 标题:" -#~ msgid "Adaptive Federated Optimization using Yogi." -#~ msgstr "使用 Yogi 的自适应联合优化。" +#~ msgid "" +#~ "1. Be Clear and Concise: Provide a" +#~ " clear summary of the changes in " +#~ "a concise manner. 1. Use Actionable " +#~ "Verbs: Start with verbs like \"Add,\"" +#~ " \"Update,\" or \"Fix\" to indicate " +#~ "the purpose. 1. Include Relevant " +#~ "Information: Mention the affected feature " +#~ "or module for context. 1. Keep it" +#~ " Short: Avoid lengthy titles for easy" +#~ " readability. 1. Use Proper Capitalization" +#~ " and Punctuation: Follow grammar rules " +#~ "for clarity." +#~ msgstr "" +#~ "1. 简明扼要: 以简明扼要的方式清楚地概述变化。1. 使用可操作的动词: 使用 " +#~ "\"添加\"、\"更新 \"或 \"修复 \"等动词来表明目的。1. 包含相关信息: " +#~ "提及受影响的功能或模块以了解上下文。1. 简短:避免冗长的标题,以方便阅读。1. 使用正确的大小写和标点符号:" +#~ " 遵守语法规则,以确保清晰。" -#~ msgid "Federated learning strategy using Yogi on server-side." -#~ msgstr "在服务器端使用 Yogi 的联邦学习策略。" +#~ msgid "" +#~ "Let's start with a few examples " +#~ "for titles that should be avoided " +#~ "because they do not provide meaningful" +#~ " information:" +#~ msgstr "让我们先举例说明几个应该避免使用的标题,因为它们不能提供有意义的信息:" -#~ msgid "Paper: https://arxiv.org/abs/1803.01498" -#~ msgstr "论文:https://arxiv.org/abs/1803.01498" +#~ msgid "Implement Algorithm" +#~ msgstr "执行算法" -#~ msgid "server.strategy.Krum" -#~ msgstr "server.strategy.Krum" +#~ msgid "Add my_new_file.py to codebase" +#~ msgstr "在代码库中添加 my_new_file.py" -#~ msgid "Configurable Krum strategy implementation." -#~ msgstr "可配置的 Krum 策略实施。" +#~ msgid "Improve code in module" +#~ msgstr "改进模块中的代码" -#~ msgid "server.strategy.Bulyan" -#~ msgstr "server.strategy.Bulyan" +#~ msgid "Change SomeModule" +#~ msgstr "更改 SomeModule" -#~ msgid "Bulyan strategy implementation." -#~ msgstr "Bulyan策略的实施。" +#~ msgid "" +#~ "Here are a few positive examples " +#~ "which provide helpful information without " +#~ "repeating how they do it, as that" +#~ " is already visible in the \"Files" +#~ " changed\" section of the PR:" +#~ msgstr "这里有几个正面的例子,提供了有用的信息,但没有重复他们是如何做的,因为在 PR 的 \"已更改文件 \"部分已经可以看到:" -#~ msgid "server.strategy.FedXgbNnAvg" -#~ msgstr "server.strategy.FedXgbNnAvg" +#~ msgid "Update docs banner to mention Flower Summit 2023" +#~ msgstr "更新文件横幅,提及 2023 年 Flower 峰会" -#~ msgid "Federated XGBoost [Ma et al., 2023] strategy." -#~ msgstr "Federated XGBoost [Ma 等人,2023] 策略。" +#~ msgid "Remove unnecessary XGBoost dependency" +#~ msgstr "移除不必要的 XGBoost 依赖性" -#~ msgid "server.strategy.DPFedAvgAdaptive" -#~ msgstr "server.strategy.DPFedAvgAdaptive" +#~ msgid "Remove redundant attributes in strategies subclassing FedAvg" +#~ msgstr "删除 FedAvg 子类化策略中的多余属性" #~ msgid "" -#~ "**Fix the incorrect return types of Strategy** ([#2432](https://github." 
-#~ "com/adap/flower/pull/2432/files))" -#~ msgstr "" -#~ "**修复策略的错误返回类型** ([#2432](https://github.com/adap/flower/" -#~ "pull/2432/files))" +#~ "Add CI job to deploy the staging" +#~ " system when the ``main`` branch " +#~ "changes" +#~ msgstr "添加 CI 作业,以便在 \"主 \"分支发生变化时部署暂存系统" #~ msgid "" -#~ "The types of the return values in the docstrings in two methods " -#~ "(`aggregate_fit` and `aggregate_evaluate`) now match the hint types in " -#~ "the code." -#~ msgstr "" -#~ "两个方法(\"aggregate_fit \"和 \"aggregate_evaluate\")的文档说明中的返回" -#~ "值类型现在与代码中的提示类型一致。" +#~ "Add new amazing library which will " +#~ "be used to improve the simulation " +#~ "engine" +#~ msgstr "添加新的惊人库,用于改进模拟引擎" -#~ msgid "" -#~ "**Update Flower Examples** ([#2384](https://github.com/adap/flower/" -#~ "pull/2384),[#2425](https://github.com/adap/flower/pull/2425), [#2526]" -#~ "(https://github.com/adap/flower/pull/2526))" -#~ msgstr "" -#~ "** 更新 Flower Examples** ([#2384](https://github.com/adap/flower/" -#~ "pull/2384),[#2425](https://github.com/adap/flower/pull/2425), [#2526]" -#~ "(https://github.com/adap/flower/pull/2526))" +#~ msgid "Changelog entry" +#~ msgstr "更新日志" #~ msgid "" -#~ "That's it for the client. We only have to implement :code:`Client` or :" -#~ "code:`NumPyClient` and call :code:`fl.client.start_client()`. The string :" -#~ "code:`\"0.0.0.0:8080\"` tells the client which server to connect to. In " -#~ "our case we can run the server and the client on the same machine, " -#~ "therefore we use :code:`\"0.0.0.0:8080\"`. If we run a truly federated " -#~ "workload with the server and clients running on different machines, all " -#~ "that needs to change is the :code:`server_address` we pass to the client." -#~ msgstr "" -#~ "对于客户端就需要做这么多。我们仅需要实现 :code:`Client`或者:code:" -#~ "`NumPyClient`然后调用:code:`fl.client.start_client()`。字符串 :code:" -#~ "`\"0.0.0.0:8080\"` 告诉客户端要连接到哪个服务器。在我们的例子中,我们可以" -#~ "在同一台机器上运行服务器和客户端,因此我们使用:code:`\"0.0.0.0:8080\"`。如" -#~ "果我们运行真正联邦学习的工作负载,服务器和客户端在不同的机器上运行,则需要" -#~ "更改的只是我们传递给客户端的 server_address 。" +#~ "When opening a new PR, inside its" +#~ " description, there should be a " +#~ "``Changelog entry`` header." +#~ msgstr "打开一个新 PR 时,在其描述中应有一个 ``Changelog entry`` 标头。" #~ msgid "" -#~ "That's it for the client. We only have to implement :code:`Client` or :" -#~ "code:`NumPyClient` and call :code:`fl.client.start_client()`. The string :" -#~ "code:`\"[::]:8080\"` tells the client which server to connect to. In our " -#~ "case we can run the server and the client on the same machine, therefore " -#~ "we use :code:`\"[::]:8080\"`. If we run a truly federated workload with " -#~ "the server and clients running on different machines, all that needs to " -#~ "change is the :code:`server_address` we point the client at." 
-#~ msgstr "" -#~ "对于客户来说就是这样了。我们只需实现 :code:`Client` 或 :code:" -#~ "`NumPyClient` 并调用:code:`fl.client.start_client()` 即可。字符串 :code:" -#~ "`\"[::]:8080\"` 告诉客户端要连接到哪个服务器。在我们的例子中,我们可以在同" -#~ "一台机器上运行服务器和客户端,因此我们使用 :code:`\"[::]:8080\"`。如果我们" -#~ "运行真正联邦的工作负载,服务器和客户端运行在不同的机器上,则需要更改的只是" -#~ "我们指向客户端的 server_address 。" +#~ "Above this header you should see " +#~ "the following comment that explains how" +#~ " to write your changelog entry:" +#~ msgstr "在页眉上方,你会看到以下注释,说明如何编写更新日志条目:" #~ msgid "" -#~ "Let's now load the CIFAR-10 training and test set, partition them into " -#~ "ten smaller datasets (each split into training and validation set), and " -#~ "wrap the resulting partitions by creating a PyTorch ``DataLoader`` for " -#~ "each of them:" -#~ msgstr "" -#~ "现在,让我们加载 CIFAR-10 训练集和测试集,将它们分割成 10 个较小的数据集" -#~ "(每个数据集又分为训练集和验证集),并通过为每个数据集创建 PyTorch " -#~ "``DataLoader`` 来包装由此产生的分割集:" - -#~ msgid "|e1dd4b4129b040bea23a894266227080|" -#~ msgstr "|e1dd4b4129b040bea23a894266227080|" +#~ "Inside the following 'Changelog entry' " +#~ "section, you should put the description" +#~ " of your changes that will be " +#~ "added to the changelog alongside your" +#~ " PR title." +#~ msgstr "在下面的 \"更新日志条目 \"部分中,您应该在 PR 标题旁边写上将添加到更新日志中的更改描述。" -#~ msgid "|c0d4cc6a442948dca8da40d2440068d9|" -#~ msgstr "|c0d4cc6a442948dca8da40d2440068d9|" +#~ msgid "" +#~ "If the section is completely empty " +#~ "(without any token) or non-existent, " +#~ "the changelog will just contain the " +#~ "title of the PR for the changelog" +#~ " entry, without any description." +#~ msgstr "如果该部分完全为空(没有任何标记)或不存在,更新日志将只包含更新日志条目的 PR 标题,而不包含任何描述。" -#~ msgid "|174e1e4fa1f149a19bfbc8bc1126f46a|" -#~ msgstr "|174e1e4fa1f149a19bfbc8bc1126f46a|" +#~ msgid "" +#~ "If the section contains some text " +#~ "other than tokens, it will use it" +#~ " to add a description to the " +#~ "change." +#~ msgstr "如果该部分包含标记以外的文本,它将使用这些文本为更改添加说明。" -#~ msgid "|4e021a3dc08249d2a89daa3ab03c2714|" -#~ msgstr "|4e021a3dc08249d2a89daa3ab03c2714|" +#~ msgid "" +#~ "If the section contains one of the" +#~ " following tokens it will ignore any" +#~ " other text and put the PR " +#~ "under the corresponding section of the" +#~ " changelog:" +#~ msgstr "如果该部分包含以下标记之一,它将忽略任何其他文本,并将 PR 放在更新日志的相应部分下:" -#~ msgid "|e74a1d5ce7eb49688651f2167a59065b|" -#~ msgstr "|e74a1d5ce7eb49688651f2167a59065b|" +#~ msgid " is for classifying a PR as a general improvement." +#~ msgstr " 用于将 PR 划分为一般改进。" -#~ msgid "|eb29ec4c7aef4e93976795ed72df647e|" -#~ msgstr "|eb29ec4c7aef4e93976795ed72df647e|" +#~ msgid " is to not add the PR to the changelog" +#~ msgstr "表示不将 PR 添加到更新日志中" -#~ msgid "|c2f699d8ac484f5081721a6f1511f70d|" -#~ msgstr "|c2f699d8ac484f5081721a6f1511f70d|" +#~ msgid " is to add a general baselines change to the PR" +#~ msgstr " 是指在 PR 中添加一般基线更改" -#~ msgid "|cf42accdacbf4e5eb4fa0503108ba7a7|" -#~ msgstr "|cf42accdacbf4e5eb4fa0503108ba7a7|" +#~ msgid " is to add a general examples change to the PR" +#~ msgstr " 是在 PR 中添加对一般示例的修改" -#~ msgid "|5ec8356bc2564fa09178b1ceed5beccc|" -#~ msgstr "|5ec8356bc2564fa09178b1ceed5beccc|" +#~ msgid " is to add a general sdk change to the PR" +#~ msgstr " 是指在 PR 中添加一般的 sdk 更改" -#~ msgid "|7c9329e97bd0430bad335ab605a897a7|" -#~ msgstr "|7c9329e97bd0430bad335ab605a897a7|" +#~ msgid " is to add a general simulations change to the PR" +#~ msgstr "(模拟)是在 PR 中添加一般模拟变更" -#~ msgid "|88002bbce1094ba1a83c9151df18f707|" -#~ msgstr "|88002bbce1094ba1a83c9151df18f707|" +#~ msgid "Note that only one token should be used." 
+#~ msgstr "请注意,只能使用一个标记。" -#~ msgid "|391766aee87c482c834c93f7c22225e2|" -#~ msgstr "|391766aee87c482c834c93f7c22225e2|" +#~ msgid "" +#~ "Its content must have a specific " +#~ "format. We will break down what " +#~ "each possibility does:" +#~ msgstr "其内容必须有特定的格式。我们将分析每种可能性的作用:" -#~ msgid "|93b9a15bd27f4e91b40f642c253dfaac|" -#~ msgstr "|93b9a15bd27f4e91b40f642c253dfaac|" +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains nothing or doesn't exist, " +#~ "the following text will be added " +#~ "to the changelog::" +#~ msgstr "如果 ``#### Changelog entry`` 部分不包含任何内容或不存在,则会在更新日志中添加以下文本::" -#~ msgid "|a23d9638f96342ef9d25209951e2d564|" -#~ msgstr "|a23d9638f96342ef9d25209951e2d564|" +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains a description (and no " +#~ "token), the following text will be " +#~ "added to the changelog::" +#~ msgstr "如果 ``#### Changelog entry`` 部分包含描述(但没有标记),则会在更新日志中添加以下文本::" -#~ msgid "Upload the whl (e.g., ``flwr-1.6.0-py3-none-any.whl``)" -#~ msgstr "上传 whl(例如 ``flwr-1.6.0-py3-none-any.whl``)" +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, nothing will change" +#~ " in the changelog." +#~ msgstr "如果 ``#### Changelog entry`` 部分包含 ````,更新日志中将不会有任何更改。" #~ msgid "" -#~ "Change ``!pip install -q 'flwr[simulation]' torch torchvision " -#~ "matplotlib`` to ``!pip install -q 'flwr-1.6.0-py3-none-any." -#~ "whl[simulation]' torch torchvision matplotlib``" -#~ msgstr "" -#~ "将``!pip install -q 'flwr[simulation]' torch torchvision matplotlib``更改" -#~ "为``!pip install -q 'flwr-1.6.0-py3-none-any.whl[simulation]' torch torch " -#~ "torchvision matplotlib``" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following text" +#~ " will be added to the changelog::" +#~ msgstr "如果 ``### Changelog entry`` 部分包含 ````,则会在更新日志中添加以下文本::" #~ msgid "" -#~ "All that's left to do it to define a function that loads both model and " -#~ "data, creates a :code:`CifarClient`, and starts this client. You load " -#~ "your data and model by using :code:`cifar.py`. Start :code:`CifarClient` " -#~ "with the function :code:`fl.client.start_numpy_client()` by pointing it " -#~ "at the same IP address we used in :code:`server.py`:" -#~ msgstr "" -#~ "剩下要做的就是定义一个加载模型和数据的函数,创建一个 :code:`CifarClient` " -#~ "并启动该客户端。使用 :code:`cifar.py` 加载数据和模型。使用函数 :code:`fl." -#~ "client.start_numpy_client()` 启动 :code:`CifarClient`,将其指向我们在 :" -#~ "code:`server.py` 中使用的相同 IP 地址:" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following " +#~ "text will be added to the " +#~ "changelog::" +#~ msgstr "如果``### 更新日志条目``部分包含``<基准线>``,则会在更新日志中添加以下文本::" #~ msgid "" -#~ "The :code:`VirtualClientEngine` schedules, launches and manages `virtual` " -#~ "clients. These clients are identical to `non-virtual` clients (i.e. the " -#~ "ones you launch via the command `flwr.client.start_numpy_client `_) in the sense that they can be configure " -#~ "by creating a class inheriting, for example, from `flwr.client." -#~ "NumPyClient `_ and therefore " -#~ "behave in an identical way. 
In addition to that, clients managed by the :" -#~ "code:`VirtualClientEngine` are:" -#~ msgstr "" -#~ "代码:`VirtualClientEngine`调度、启动和管理`虚拟`客户端。这些客户端与 \"非" -#~ "虚拟 \"客户端(即通过命令 `flwr.client.start_numpy_client `_启动的客户端)完全相同,它们可以通过创建一个继承" -#~ "自 \"flwr.client.NumPyClient `_\" 的类来配置,因此行为方式也完全相同。除此之外,由 :code:" -#~ "`VirtualClientEngine` 管理的客户端还包括:" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following " +#~ "text will be added to the " +#~ "changelog::" +#~ msgstr "如果``### 更新日志条目``部分包含``<示例>``,则会在更新日志中添加以下文本::" -#~ msgid "Example: Walk-Through PyTorch & MNIST" -#~ msgstr "实例: PyTorch 和 MNIST 的演练" +#~ msgid "" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following text " +#~ "will be added to the changelog::" +#~ msgstr "如果``### 更新日志条目``部分包含````,则会在更新日志中添加以下文本::" #~ msgid "" -#~ "In this tutorial we will learn, how to train a Convolutional Neural " -#~ "Network on MNIST using Flower and PyTorch." -#~ msgstr "" -#~ "在本教程中,我们将学习如何使用 Flower 和 PyTorch 在 MNIST 上训练卷积神经网" -#~ "络。" +#~ "If the ``### Changelog entry`` section" +#~ " contains ````, the following " +#~ "text will be added to the " +#~ "changelog::" +#~ msgstr "如果 ``### Changelog entry`` 部分包含 ````,则会在更新日志中添加以下文本::" #~ msgid "" -#~ "Since we want to use PyTorch to solve a computer vision task, let's go " -#~ "ahead an install PyTorch and the **torchvision** library:" -#~ msgstr "" -#~ "我们想用 PyTorch 来做计算机视觉任务,需要先安装 PyTorch 和 " -#~ "**torchvision** 库:" +#~ "Note that only one token must be" +#~ " provided, otherwise, only the first " +#~ "action (in the order listed above), " +#~ "will be performed." +#~ msgstr "请注意,必须只提供一个标记,否则将只执行第一个操作(按上述顺序)。" -#~ msgid "Ready... Set... Train!" -#~ msgstr "准备...设置...训练!" +#~ msgid "Example: MXNet - Run MXNet Federated" +#~ msgstr "示例: MXNet - 运行联邦式 MXNet" #~ msgid "" -#~ "Now that we have all our dependencies installed, let's run a simple " -#~ "distributed training with two clients and one server. Our training " -#~ "procedure and network architecture are based on PyTorch's `Basic MNIST " -#~ "Example `_. This " -#~ "will allow you see how easy it is to wrap your code with Flower and begin " -#~ "training in a federated way. We provide you with two helper scripts, " -#~ "namely *run-server.sh*, and *run-clients.sh*. Don't be afraid to look " -#~ "inside, they are simple enough =)." +#~ "This tutorial will show you how to" +#~ " use Flower to build a federated " +#~ "version of an existing MXNet workload." +#~ " We are using MXNet to train a" +#~ " Sequential model on the MNIST " +#~ "dataset. We will structure the example" +#~ " similar to our `PyTorch - From " +#~ "Centralized To Federated " +#~ "`_ walkthrough. " +#~ "MXNet and PyTorch are very similar " +#~ "and a very good comparison between " +#~ "MXNet and PyTorch is given `here " +#~ "`_. First, " +#~ "we build a centralized training approach" +#~ " based on the `Handwritten Digit " +#~ "Recognition " +#~ "`_" +#~ " tutorial. Then, we build upon the" +#~ " centralized training code to run the" +#~ " training in a federated fashion." 
#~ msgstr "" -#~ "现在我们已经安装了所有的依赖包,让我们用两个客户端和一个服务器来运行一个简" -#~ "单的分布式训练。我们的训练过程和网络架构基于 PyTorch 的 `Basic MNIST " -#~ "Example `_。您会发" -#~ "现用 Flower 来封装您的代码并进行联邦学习训练是多么容易。我们为您提供了两个" -#~ "辅助脚本,即 *run-server.sh* 和 *run-clients.sh*。别害怕,它们很简单 =)。" +#~ "本教程将向您展示如何使用 Flower 构建现有 MXNet 的联学习版本。我们将使用" +#~ " MXNet 在 MNIST 数据集上训练一个序列模型。另外,我们将采用与我们的 " +#~ "`PyTorch - 从集中式到联邦式 " +#~ "`_ 教程类似的示例结构。MXNet" +#~ " 和 PyTorch 非常相似,参考 `此处 " +#~ "`_对 MXNet " +#~ "和 PyTorch 进行了详细的比较。首先,我们根据 `手写数字识别 " +#~ "`" +#~ " 教程 建立了集中式训练方法。然后,我们在集中式训练代码的基础上,以联邦方式运行训练。" #~ msgid "" -#~ "Go ahead and launch on a terminal the *run-server.sh* script first as " -#~ "follows:" -#~ msgstr "首先在终端上启动 *run-server.sh* 脚本,如下所示:" +#~ "Before we start setting up our " +#~ "MXNet example, we install the " +#~ ":code:`mxnet` and :code:`flwr` packages:" +#~ msgstr "在开始设置 MXNet 示例之前,我们先安装 :code:`mxnet` 和 :code:`flwr` 软件包:" -#~ msgid "" -#~ "Now that the server is up and running, go ahead and launch the clients." -#~ msgstr "现在服务器已经启动并运行,请继续启动客户端。" +#~ msgid "MNIST Training with MXNet" +#~ msgstr "使用 MXNet 进行 MNIST 训练" #~ msgid "" -#~ "Et voilà! You should be seeing the training procedure and, after a few " -#~ "iterations, the test accuracy for each client." +#~ "We begin with a brief description " +#~ "of the centralized training code based" +#~ " on a :code:`Sequential` model. If " +#~ "you want a more in-depth " +#~ "explanation of what's going on then " +#~ "have a look at the official `MXNet" +#~ " tutorial " +#~ "`_." #~ msgstr "" -#~ "然后就可以了!您应该能看到训练过程,以及经过几次反复后,每个客户端的测试准" -#~ "确率。" - -#~ msgid "Now, let's see what is really happening inside." -#~ msgstr "现在,让我们看看里面到底发生了什么。" +#~ "首先,我们将简要介绍基于 :code:`Sequential` " +#~ "模型的集中式训练代码。如果您想获得更深入的解释,请参阅官方的 `MXNet教程 " +#~ "`_。" #~ msgid "" -#~ "Inside the server helper script *run-server.sh* you will find the " -#~ "following code that basically runs the :code:`server.py`" +#~ "Let's create a new file " +#~ "called:code:`mxnet_mnist.py` with all the " +#~ "components required for a traditional " +#~ "(centralized) MNIST training. First, the " +#~ "MXNet package :code:`mxnet` needs to be" +#~ " imported. You can see that we " +#~ "do not yet import the :code:`flwr` " +#~ "package for federated learning. This " +#~ "will be done later." #~ msgstr "" -#~ "在服务器辅助脚本 *run-server.sh* 中,你可以找到以下代码,这些代码基本上都" -#~ "是运行 :code:`server.py` 的代码" +#~ "让我们创建一个名为:code:`mxnet_mnist.py`的新文件,其中包含传统(集中式)MNIST " +#~ "训练所需的所有组件。首先,需要导入 MXNet 包 " +#~ ":code:`mxnet`。您可以看到,我们尚未导入用于联合学习的 :code:`flwr` 包,这将在稍后完成。" #~ msgid "" -#~ "We can go a bit deeper and see that :code:`server.py` simply launches a " -#~ "server that will coordinate three rounds of training. Flower Servers are " -#~ "very customizable, but for simple workloads, we can start a server using " -#~ "the :ref:`start_server ` function and " -#~ "leave all the configuration possibilities at their default values, as " -#~ "seen below." -#~ msgstr "" -#~ "我们可以再深入一点,:code:`server.py` 只是启动了一个服务器,该服务器将协调" -#~ "三轮训练。Flower 服务器是非常容易修改的,但对于简单的工作,我们可以使用 :" -#~ "ref:`start_server `函数启动服务器,并将所" -#~ "有可能的配置保留为默认值,如下所示。" +#~ "The :code:`load_data()` function loads the " +#~ "MNIST training and test sets." +#~ msgstr ":code:`load_data()` 函数加载 MNIST 训练集和测试集。" #~ msgid "" -#~ "Next, let's take a look at the *run-clients.sh* file. You will see that " -#~ "it contains the main loop that starts a set of *clients*." +#~ "As already mentioned, we will use " +#~ "the MNIST dataset for this machine " +#~ "learning workload. 
The model architecture " +#~ "(a very simple :code:`Sequential` model) " +#~ "is defined in :code:`model()`." #~ msgstr "" -#~ "接下来,让我们看看 *run-clients.sh* 文件。您会看到它包含了用来启动多个 *客" -#~ "户端* 的代码。" +#~ "如前所述,我们将使用 MNIST 数据集进行机器学习。模型架构(一个非常简单的 " +#~ ":code:`Sequential` 模型)在 :code:`model()` 中定义。" #~ msgid "" -#~ "**cid**: is the client ID. It is an integer that uniquely identifies " -#~ "client identifier." -#~ msgstr "**cid**:是客户 ID。它是一个整数,可唯一标识客户标识符。" - -#~ msgid "**sever_address**: String that identifies IP and port of the server." -#~ msgstr "**sever_address**: 标识服务器 IP 和端口的字符串。" +#~ "We now need to define the training" +#~ " (function :code:`train()`) which loops " +#~ "over the training set and measures " +#~ "the loss for each batch of " +#~ "training examples." +#~ msgstr "现在,我们需要定义训练函数( :code:`train()`),该函数在训练集上循环训练,并计算每批训练示例的损失值。" #~ msgid "" -#~ "**nb_clients**: This defines the number of clients being created. This " -#~ "piece of information is not required by the client, but it helps us " -#~ "partition the original MNIST dataset to make sure that every client is " -#~ "working on unique subsets of both *training* and *test* sets." -#~ msgstr "" -#~ "**nb_clients**: 这定义了正在创建的客户端数量。客户端并不需要这一信息,但" -#~ "它有助于我们对原始 MNIST 数据集进行划分,以确保每个客户端都在 *training* " -#~ "和 *test* 数据集上有独立的数据。" +#~ "The evaluation of the model is " +#~ "defined in function :code:`test()`. The " +#~ "function loops over all test samples " +#~ "and measures the loss and accuracy " +#~ "of the model based on the test " +#~ "dataset." +#~ msgstr "模型的评估在函数 :code:`test()` 中定义。该函数循环遍历所有测试样本,并根据测试数据集计算模型的损失值和准确度。" #~ msgid "" -#~ "Again, we can go deeper and look inside :code:`flwr_example/quickstart-" -#~ "pytorch/client.py`. After going through the argument parsing code at the " -#~ "beginning of our :code:`main` function, you will find a call to :code:" -#~ "`mnist.load_data`. This function is responsible for partitioning the " -#~ "original MNIST datasets (*training* and *test*) and returning a :code:" -#~ "`torch.utils.data.DataLoader` s for each of them. We then instantiate a :" -#~ "code:`PytorchMNISTClient` object with our client ID, our DataLoaders, the " -#~ "number of epochs in each round, and which device we want to use for " -#~ "training (CPU or GPU)." +#~ "Having defined the data loading, model" +#~ " architecture, training, and evaluation we" +#~ " can put everything together and " +#~ "train our model on MNIST. Note " +#~ "that the GPU/CPU device for the " +#~ "training and testing is defined within" +#~ " the :code:`ctx` (context)." #~ msgstr "" -#~ "我们可以深入看一下 :code:`flwr_example/quickstart-pytorch/client.py`。查" -#~ "看 :code:`main` 函数开头的参数解析代码后,你会发现一个对 :code:`mnist." -#~ "load_data` 的调用。该函数负责分割原始 MNIST 数据集(*training* 和 " -#~ "*test*),并为每个数据集返回一个 :code:`torch.utils.data.DataLoader` 。然" -#~ "后,我们实例化一个 :code:`PytorchMNISTClient` 对象,其中包含我们的客户端 " -#~ "ID、 DataLoader、每一轮中的遍历数,以及我们希望用于训练的设备(CPU 或 " -#~ "GPU)。" +#~ "在定义了数据加载、模型架构、训练和评估之后,我们就可以把所有放在一起,在 MNIST " +#~ "上训练我们的模型了。请注意,用于训练和测试的 GPU/CPU 设备是在 :code:`ctx`中定义的。" + +#~ msgid "You can now run your (centralized) MXNet machine learning workload:" +#~ msgstr "现在,您可以运行(集中式)MXNet 机器学习工作:" #~ msgid "" -#~ "The :code:`PytorchMNISTClient` object when finally passed to :code:`fl." -#~ "client.start_client` along with the server's address as the training " -#~ "process begins." +#~ "So far this should all look fairly" +#~ " familiar if you've used MXNet (or" +#~ " even PyTorch) before. 
Let's take the" +#~ " next step and use what we've " +#~ "built to create a simple federated " +#~ "learning system consisting of one server" +#~ " and two clients." #~ msgstr "" -#~ "当训练过程开始时,:code:`PytorchMNISTClient` 对象会连同服务器地址一起传递" -#~ "给 :code:`fl.client.start_client`。" +#~ "到目前为止,如果你以前使用过 MXNet(甚至 " +#~ "PyTorch),这一切看起来应该相当熟悉。下一步,让我们利用已构建的内容创建一个简单联邦学习系统(由一个服务器和两个客户端组成)。" -#~ msgid "A Closer Look" -#~ msgstr "仔细看一下" +#~ msgid "MXNet meets Flower" +#~ msgstr "MXNet 结合 Flower" #~ msgid "" -#~ "Now, let's look closely into the :code:`PytorchMNISTClient` inside :code:" -#~ "`flwr_example.quickstart-pytorch.mnist` and see what it is doing:" +#~ "So far, it was not easily possible" +#~ " to use MXNet workloads for federated" +#~ " learning because federated learning is " +#~ "not supported in MXNet. Since Flower " +#~ "is fully agnostic towards the underlying" +#~ " machine learning framework, it can " +#~ "be used to federated arbitrary machine" +#~ " learning workloads. This section will " +#~ "show you how Flower can be used" +#~ " to federate our centralized MXNet " +#~ "workload." #~ msgstr "" -#~ "现在,让我们仔细研究一下 :code:`flwr_example.quickstart-pytorch.mnist` 中" -#~ "的 :code:`PytorchMNISTClient`,看看它在做什么:" +#~ "由于 MXNet 目前不支持联邦学习,因此无法轻松地直接将 MXNet " +#~ "用于联邦学习之中。Flower 与底层机器学习框架完全无关,因此它可用于任意联邦式机器学习工作。本节将向你展示如何使用 " +#~ "Flower 将我们的集中式 MXNet 改为联邦式训练。" #~ msgid "" -#~ "The first thing to notice is that :code:`PytorchMNISTClient` instantiates " -#~ "a CNN model inside its constructor" +#~ "The concept to federate an existing " +#~ "workload is always the same and " +#~ "easy to understand. We have to " +#~ "start a *server* and then use the" +#~ " code in :code:`mxnet_mnist.py` for the " +#~ "*clients* that are connected to the " +#~ "*server*. The *server* sends model " +#~ "parameters to the clients. The *clients*" +#~ " run the training and update the " +#~ "parameters. The updated parameters are " +#~ "sent back to the *server* which " +#~ "averages all received parameter updates. " +#~ "This describes one round of the " +#~ "federated learning process and we repeat" +#~ " this for multiple rounds." #~ msgstr "" -#~ "首先要注意的是 :code:`PytorchMNISTClient` 在其构造函数中实例化了一个 CNN " -#~ "模型" +#~ "将现有模型框架联邦化的概念始终是相同的,也很容易理解。我们必须启动一个*服务器*,然后对连接到*服务器*的*客户端*使用 " +#~ ":code:`mxnet_mnist.py`中的代码。*服务器*向客户端发送模型参数,然后*客户端*运行训练并更新参数。更新后的参数被发回*服务器*,然后会对所有收到的参数更新进行平均聚合。以上描述的是一轮联邦学习过程,我们将重复进行多轮学习。" #~ msgid "" -#~ "The code for the CNN is available under :code:`quickstart-pytorch.mnist` " -#~ "and it is reproduced below. It is the same network found in `Basic MNIST " -#~ "Example `_." +#~ "Finally, we will define our *client* " +#~ "logic in :code:`client.py` and build " +#~ "upon the previously defined MXNet " +#~ "training in :code:`mxnet_mnist.py`. 
Our " +#~ "*client* needs to import :code:`flwr`, " +#~ "but also :code:`mxnet` to update the " +#~ "parameters on our MXNet model:" #~ msgstr "" -#~ "CNN 的代码可在 :code:`quickstart-pytorch.mnist` 下找到,现复制如下。它与 " -#~ "`Basic MNIST Example `_中的网络相同。" +#~ "最后,我们将在 :code:`client.py` 中定义我们的 *client* " +#~ "逻辑,并以之前在 :code:`mxnet_mnist.py` 中定义的 MXNet " +#~ "训练为基础。我们的 *client* 不仅需要导入 :code:`flwr`,还需要导入 " +#~ ":code:`mxnet`,以更新 MXNet 模型的参数:" #~ msgid "" -#~ "The second thing to notice is that :code:`PytorchMNISTClient` class " -#~ "inherits from the :code:`fl.client.Client`, and hence it must implement " -#~ "the following methods:" +#~ "Implementing a Flower *client* basically " +#~ "means implementing a subclass of either" +#~ " :code:`flwr.client.Client` or " +#~ ":code:`flwr.client.NumPyClient`. Our implementation " +#~ "will be based on " +#~ ":code:`flwr.client.NumPyClient` and we'll call " +#~ "it :code:`MNISTClient`. :code:`NumPyClient` is " +#~ "slightly easier to implement than " +#~ ":code:`Client` if you use a framework" +#~ " with good NumPy interoperability (like " +#~ "PyTorch or MXNet) because it avoids " +#~ "some of the boilerplate that would " +#~ "otherwise be necessary. :code:`MNISTClient` " +#~ "needs to implement four methods, two " +#~ "methods for getting/setting model parameters," +#~ " one method for training the model," +#~ " and one method for testing the " +#~ "model:" #~ msgstr "" -#~ "第二件要注意的事是 :code:`PytorchMNISTClient` 类继承自 :code:`fl.client." -#~ "Client`,因此它必须实现以下方法:" +#~ "实现 Flower *client*基本上意味着要实现 " +#~ ":code:`flwr.client.Client` 或 " +#~ ":code:`flwr.client.NumPyClient` 的子类。我们的代码实现将基于 " +#~ ":code:`flwr.client.NumPyClient`,并将其命名为 " +#~ ":code:`MNISTClient`。如果使用具有良好 NumPy 互操作性的框架(如 PyTorch" +#~ " 或 MXNet),:code:`NumPyClient` 比 " +#~ ":code:`Client`更容易实现,因为它避免了一些不必要的操作。:code:`MNISTClient` " +#~ "需要实现四个方法,两个用于获取/设置模型参数,一个用于训练模型,一个用于测试模型:" + +#~ msgid "transform MXNet :code:`NDArray`'s to NumPy :code:`ndarray`'s" +#~ msgstr "将 MXNet :code:`NDArray` 转换为 NumPy :code:`ndarray`" #~ msgid "" -#~ "When comparing the abstract class to its derived class :code:" -#~ "`PytorchMNISTClient` you will notice that :code:`fit` calls a :code:" -#~ "`train` function and that :code:`evaluate` calls a :code:`test`: function." +#~ "The challenging part is to transform " +#~ "the MXNet parameters from :code:`NDArray` " +#~ "to :code:`NumPy Arrays` to make it " +#~ "readable for Flower." #~ msgstr "" -#~ "将抽象类与其派生类 :code:`PytorchMNISTClient` 进行比较时,您会发现 :code:" -#~ "`fit` 调用了一个 :code:`train` 函数,而 :code:`evaluate` 则调用了一个 :" -#~ "code:`test`: 函数。" +#~ "具有挑战性的部分是将 MXNet 参数从 :code:`NDArray` 转换为 " +#~ ":code:`NumPy Arrays` 以便 Flower 可以读取。" #~ msgid "" -#~ "These functions can both be found inside the same :code:`quickstart-" -#~ "pytorch.mnist` module:" +#~ "The two :code:`NumPyClient` methods " +#~ ":code:`fit` and :code:`evaluate` make use " +#~ "of the functions :code:`train()` and " +#~ ":code:`test()` previously defined in " +#~ ":code:`mxnet_mnist.py`. So what we really " +#~ "do here is we tell Flower through" +#~ " our :code:`NumPyClient` subclass which of" +#~ " our already defined functions to " +#~ "call for training and evaluation. We " +#~ "included type annotations to give you" +#~ " a better understanding of the data" +#~ " types that get passed around." 
#~ msgstr "" -#~ "这些函数都可以在同一个 :code:`quickstart-pytorch.mnist` 模块中找到:" +#~ "这两个 :code:`NumPyClient` 方法 :code:`fit` 和 " +#~ ":code:`evaluate` 使用了之前在 :code:`mxnet_mnist.py` " +#~ "中定义的函数 :code:`train()` 和 :code:`test()`。因此,我们要做的就是通过" +#~ " :code:`NumPyClient` 子类告知 Flower " +#~ "在训练和评估时要调用哪些已定义的函数。我们加入了类型注解,以便让您更好地理解传递的数据类型。" #~ msgid "" -#~ "Observe that these functions encapsulate regular training and test loops " -#~ "and provide :code:`fit` and :code:`evaluate` with final statistics for " -#~ "each round. You could substitute them with your custom train and test " -#~ "loops and change the network architecture, and the entire example would " -#~ "still work flawlessly. As a matter of fact, why not try and modify the " -#~ "code to an example of your liking?" +#~ "Having defined data loading, model " +#~ "architecture, training, and evaluation we " +#~ "can put everything together and train" +#~ " our :code:`Sequential` model on MNIST." #~ msgstr "" -#~ "请注意,这些函数封装了常规的训练和测试循环,并为 :code:`fit` 和 :code:" -#~ "`evaluate` 提供了每轮的最终统计数据。您可以用自定义的训练和测试循环来替代" -#~ "它们,并改变网络结构,整个示例仍然可以完美运行。事实上,为什么不按照自己的" -#~ "喜好修改代码呢?" +#~ "在定义了数据加载、模型架构、训练和评估之后,我们就可以将所有内容整合在一起,在 MNIST 上训练我们的 " +#~ ":code:`Sequential` 模型。" -#~ msgid "Give It a Try" -#~ msgstr "试试看" +#~ msgid "" +#~ "in each window (make sure that the" +#~ " server is still running before you" +#~ " do so) and see your MXNet " +#~ "project run federated learning across " +#~ "two clients. Congratulations!" +#~ msgstr "确保服务器仍在运行后,然后就能在每个窗口中看到 MXNet 项目在两个客户端上运行联邦学习了。祝贺!" #~ msgid "" -#~ "Looking through the quickstart code description above will have given a " -#~ "good understanding of how *clients* and *servers* work in Flower, how to " -#~ "run a simple experiment, and the internals of a client wrapper. Here are " -#~ "a few things you could try on your own and get more experience with " -#~ "Flower:" +#~ "The full source code for this " +#~ "example: `MXNet: From Centralized To " +#~ "Federated (Code) " +#~ "`_. Our " +#~ "example is of course somewhat over-" +#~ "simplified because both clients load the" +#~ " exact same dataset, which isn't " +#~ "realistic. You're now prepared to " +#~ "explore this topic further. How about" +#~ " using a CNN or using a " +#~ "different dataset? How about adding more" +#~ " clients?" #~ msgstr "" -#~ "通过上面的快速入门代码描述,你将对 Flower 中*客户端*和*服务器*的工作方式、" -#~ "如何运行一个简单的实验以及客户端封装器的内部结构有一个很好的了解。您可以自" -#~ "己尝试以下内容,以获得更多使用 Flower 的经验:" +#~ "此示例的完整源代码在:\"MXNet: From Centralized To " +#~ "Federated (Code) " +#~ "`_。当然,我们的示例有些过于简单,因为两个客户端都加载了完全相同的数据集,这并不真实。现在您已经准备好进一步探讨了。使用" +#~ " CNN 或使用不同的数据集会如何?添加更多客户端会如何?" -#~ msgid "" -#~ "Try and change :code:`PytorchMNISTClient` so it can accept different " -#~ "architectures." -#~ msgstr "尝试修改 :code:`PytorchMNISTClient`,使其可以接受不同的架构。" +#~ msgid "with the following command sequence:" +#~ msgstr "使用以下命令序列:" #~ msgid "" -#~ "Modify the :code:`train` function so that it accepts different optimizers" -#~ msgstr "修改 :code:`train` 函数,使其接受不同的优化器" +#~ "In case you are a researcher you" +#~ " might be just fine using the " +#~ "self-signed certificates generated using " +#~ "the scripts which are part of this" +#~ " guide." +#~ msgstr "如果你是一名研究人员,使用本指南中的脚本生成的自签名证书就可以了。" #~ msgid "" -#~ "Modify the :code:`test` function so that it proves not only the top-1 " -#~ "(regular accuracy) but also the top-5 accuracy?" -#~ msgstr "" -#~ "修改 :code:`test` 函数,使其不仅能输出前 1 名(常规精确度),还能证明前 5 " -#~ "名的精确度?" +#~ "We are now going to show how " +#~ "to write a sever which uses the" +#~ " previously generated scripts." 
+#~ msgstr "现在,我们将展示如何编写一个使用先前生成的脚本的服务器。" #~ msgid "" -#~ "Go larger! Try to adapt the code to larger images and datasets. Why not " -#~ "try training on ImageNet with a ResNet-50?" +#~ "When providing certificates, the server " +#~ "expects a tuple of three certificates." +#~ " :code:`Path` can be used to easily" +#~ " read the contents of those files " +#~ "into byte strings, which is the " +#~ "data type :code:`start_server` expects." #~ msgstr "" -#~ "让我们尝试让代码适应更大的图像和数据集。为什么不尝试使用 ResNet-50 在 " -#~ "ImageNet 上进行训练呢?" +#~ "在提供证书时,服务器希望得到由三个证书组成的元组。 :code:`Path` " +#~ "可用于轻松地将这些文件的内容读取为字节字符串,这就是 :code:`start_server` 期望的数据类型。" -#~ msgid "You are ready now. Enjoy learning in a federated way!" -#~ msgstr "您现在已经准备就绪。尽情享受联邦学习的乐趣吧!" +#~ msgid "Flower server" +#~ msgstr "Flower 服务器" -#~ msgid "Differential privacy" -#~ msgstr "差别隐私" +#~ msgid "flower-driver-api" +#~ msgstr "flower-driver-api" -#~ msgid "" -#~ "Flower provides differential privacy (DP) wrapper classes for the easy " -#~ "integration of the central DP guarantees provided by DP-FedAvg into " -#~ "training pipelines defined in any of the various ML frameworks that " -#~ "Flower is compatible with." -#~ msgstr "" -#~ "Flower 提供了差分隐私 (DP) 封装类,可将 DP-FedAvg 提供的核心 DP 轻松集成" -#~ "到 Flower 兼容的各种 ML 框架中定义的训练模式中。" +#~ msgid "flower-fleet-api" +#~ msgstr "flower-fleet-api" #~ msgid "" -#~ "Please note that these components are still experimental; the correct " -#~ "configuration of DP for a specific task is still an unsolved problem." +#~ ":py:obj:`start_driver `\\ " +#~ "\\(\\*\\[\\, server\\_address\\, server\\, ...\\]\\)" #~ msgstr "" -#~ "请注意,这些组件仍处于试验阶段,如何为特定任务正确配置 DP 仍是一个尚未解决" -#~ "的问题。" +#~ ":py:obj:`start_driver `\\ " +#~ "\\(\\*\\[\\, server\\_address\\, server\\, ...\\]\\)" + +#~ msgid "Start a Flower Driver API server." +#~ msgstr "启动基于 Ray 的Flower模拟服务器。" #~ msgid "" -#~ "The name DP-FedAvg is misleading since it can be applied on top of any FL " -#~ "algorithm that conforms to the general structure prescribed by the FedOpt " -#~ "family of algorithms." +#~ ":py:obj:`Driver `\\ " +#~ "\\(\\[driver\\_service\\_address\\, ...\\]\\)" #~ msgstr "" -#~ "DP-FedAvg 这个名称容易引起误解,因为它可以应用于任何符合 FedOpt 系列算法规" -#~ "定的一般结构的 FL 算法之上。" +#~ "Flower 1.0: ``start_server(..., " +#~ "config=flwr.server.ServerConfig(num_rounds=3, " +#~ "round_timeout=600.0), ...)``" -#~ msgid "DP-FedAvg" -#~ msgstr "DP-FedAvg" +#~ msgid "`Driver` class provides an interface to the Driver API." +#~ msgstr "`Driver` 类为驱动程序 API 提供了一个接口。" #~ msgid "" -#~ "DP-FedAvg, originally proposed by McMahan et al. [mcmahan]_ and extended " -#~ "by Andrew et al. [andrew]_, is essentially FedAvg with the following " -#~ "modifications." -#~ msgstr "DP-FedAvg 最初由McMahan等人提出,并由Andrew等人加以扩展。" +#~ "The IPv4 or IPv6 address of the" +#~ " Driver API server. Defaults to " +#~ "`\"[::]:9091\"`." +#~ msgstr "服务器的 IPv4 或 IPv6 地址。默认为 `\"[::]:8080\"。" + +#~ msgid ":py:obj:`close `\\ \\(\\)" +#~ msgstr "server.strategy.Strategy" + +#~ msgid "Disconnect from the SuperLink if connected." +#~ msgstr "如果已连接,请断开与超级链接的连接。" + +#~ msgid "start\\_driver" +#~ msgstr "启动客户端" #~ msgid "" -#~ "**Clipping** : The influence of each client's update is bounded by " -#~ "clipping it. This is achieved by enforcing a cap on the L2 norm of the " -#~ "update, scaling it down if needed." -#~ msgstr "" -#~ "**裁剪** : 裁剪会影响到每个客户端的模型参数。具体做法是对参数的 L2 准则设" -#~ "置上限,必要时将其缩减。" +#~ "The IPv4 or IPv6 address of the" +#~ " Driver API server. Defaults to " +#~ "`\"[::]:8080\"`." 
+#~ msgstr "服务器的 IPv4 或 IPv6 地址。默认为 `\"[::]:8080\"。" #~ msgid "" -#~ "**Noising** : Gaussian noise, calibrated to the clipping threshold, is " -#~ "added to the average computed at the server." -#~ msgstr "" -#~ "**噪声** : 在服务器计算出的平均值中加入高斯噪声,该噪声根据剪切阈值进行校" -#~ "准。" +#~ "A server implementation, either " +#~ "`flwr.server.Server` or a subclass thereof." +#~ " If no instance is provided, then " +#~ "`start_driver` will create one." +#~ msgstr "服务器实现,可以是 `flwr.server.Server` 或其子类。如果没有提供实例,`start_server` 将创建一个。" #~ msgid "" -#~ "The distribution of the update norm has been shown to vary from task-to-" -#~ "task and to evolve as training progresses. This variability is crucial in " -#~ "understanding its impact on differential privacy guarantees, emphasizing " -#~ "the need for an adaptive approach [andrew]_ that continuously adjusts the " -#~ "clipping threshold to track a prespecified quantile of the update norm " -#~ "distribution." +#~ "An implementation of the class " +#~ "`flwr.server.ClientManager`. If no implementation" +#~ " is provided, then `start_driver` will " +#~ "use `flwr.server.SimpleClientManager`." #~ msgstr "" -#~ "事实证明,参数更新准则的分布会随着任务的不同而变化,并随着训练的进展而演" -#~ "变。因此,我们采用了一种自适应方法,该方法会不断调整剪切阈值,以跟踪参数更" -#~ "新准则分布的预设量化值。" +#~ "抽象基类 `flwr.server.ClientManager` " +#~ "的实现。如果没有提供实现,`start_server` 将使用 " +#~ "`flwr.server.client_manager.SimpleClientManager`。" -#~ msgid "Simplifying Assumptions" -#~ msgstr "简化假设" +#~ msgid "The Driver object to use." +#~ msgstr "要使用的驱动程序对象。" -#~ msgid "" -#~ "We make (and attempt to enforce) a number of assumptions that must be " -#~ "satisfied to ensure that the training process actually realizes the :math:" -#~ "`(\\epsilon, \\delta)` guarantees the user has in mind when configuring " -#~ "the setup." -#~ msgstr "" -#~ "我们提出(并试图执行)了一系列必须满足的假设,以确保训练过程真正实现用户在" -#~ "配置设置时所定的 :math:`(\\epsilon,\\delta)` 。" +#~ msgid "Starting a driver that connects to an insecure server:" +#~ msgstr "启动不安全的服务器:" -#~ msgid "" -#~ "**Fixed-size subsampling** :Fixed-size subsamples of the clients must be " -#~ "taken at each round, as opposed to variable-sized Poisson subsamples." -#~ msgstr "" -#~ "** 固定大小的子样本** :与可变大小的泊松分布子样本相比,每轮必须抽取固定大" -#~ "小的客户端子样本。" +#~ msgid "Starting a driver that connects to an SSL-enabled server:" +#~ msgstr "启动支持 SSL 的服务器:" #~ msgid "" -#~ "**Unweighted averaging** : The contributions from all the clients must " -#~ "weighted equally in the aggregate to eliminate the requirement for the " -#~ "server to know in advance the sum of the weights of all clients available " -#~ "for selection." +#~ ":py:obj:`run_simulation_from_cli " +#~ "`\\ \\(\\)" #~ msgstr "" -#~ "**非加权平均**: 所有客户端的贡献必须加权相等,这样服务器就不需要事先知道" -#~ "所有客户的权重总和。" -#~ msgid "" -#~ "**No client failures** : The set of available clients must stay constant " -#~ "across all rounds of training. In other words, clients cannot drop out or " -#~ "fail." +#~ msgid "Run Simulation Engine from the CLI." #~ msgstr "" -#~ "**没有失败的客户端** : 在各轮训练中,可用客户端的数量必须保持不变。换句话" -#~ "说,客户端不能退出或失败。" -#~ msgid "" -#~ "The first two are useful for eliminating a multitude of complications " -#~ "associated with calibrating the noise to the clipping threshold, while " -#~ "the third one is required to comply with the assumptions of the privacy " -#~ "analysis." -#~ msgstr "" -#~ "前两种方法有助于消除将噪声校准为削波阈值所带来的诸多复杂问题,而第三种方法" -#~ "则需要符合隐私分析的假设。" +#~ msgid "run\\_simulation\\_from\\_cli" +#~ msgstr "运行模拟" #~ msgid "" -#~ "These restrictions are in line with constraints imposed by Andrew et al. " -#~ "[andrew]_." 
-#~ msgstr "这些限制与 Andrew 等人所施加的限制一致。" +#~ "Check out this Federated Learning " +#~ "quickstart tutorial for using Flower " +#~ "with MXNet to train a Sequential " +#~ "model on MNIST." +#~ msgstr "查看此联邦学习 快速入门教程,了解如何使用 Flower 和 MXNet 在 MNIST 上训练序列模型。" -#~ msgid "Customizable Responsibility for Noise injection" -#~ msgstr "可定制的噪声注入" +#~ msgid "Quickstart MXNet" +#~ msgstr "快速入门 MXNet" #~ msgid "" -#~ "In contrast to other implementations where the addition of noise is " -#~ "performed at the server, you can configure the site of noise injection to " -#~ "better match your threat model. We provide users with the flexibility to " -#~ "set up the training such that each client independently adds a small " -#~ "amount of noise to the clipped update, with the result that simply " -#~ "aggregating the noisy updates is equivalent to the explicit addition of " -#~ "noise to the non-noisy aggregate at the server." +#~ "MXNet is no longer maintained and " +#~ "has been moved into `Attic " +#~ "`_. As a " +#~ "result, we would encourage you to " +#~ "use other ML frameworks alongside " +#~ "Flower, for example, PyTorch. This " +#~ "tutorial might be removed in future " +#~ "versions of Flower." #~ msgstr "" -#~ "与其他在服务器上添加噪声的实现方法不同,您可以配置噪声注入的位置,以便更好" -#~ "地匹配您的威胁模型。我们为用户提供了设置训练的灵活性,使每个客户端都能独立" -#~ "地为剪切参数更新添加少量噪声,这样,只需聚合噪声更新,就相当于在服务器上为" -#~ "非噪声聚合添加噪声了。" #~ msgid "" -#~ "To be precise, if we let :math:`m` be the number of clients sampled each " -#~ "round and :math:`\\sigma_\\Delta` be the scale of the total Gaussian " -#~ "noise that needs to be added to the sum of the model updates, we can use " -#~ "simple maths to show that this is equivalent to each client adding noise " -#~ "with scale :math:`\\sigma_\\Delta/\\sqrt{m}`." -#~ msgstr "" -#~ "准确地说,我们假设每轮采样的客户端数量为:math:`m`,:math:" -#~ "`\\sigma_\\Delta` 为需要添加到模型更新总和中的总高斯噪声的规模,我们就可以" -#~ "用简单的数学方法证明了,这相当于每个客户端都添加了规模为 :math:" -#~ "`\\sigma_\\Delta/\\sqrt{m}` 的噪声。" +#~ "In this tutorial, we will learn " +#~ "how to train a :code:`Sequential` model" +#~ " on MNIST using Flower and MXNet." +#~ msgstr "在本教程中,我们将学习如何使用 Flower 和 MXNet 在 MNIST 上训练 :code:`Sequential` 模型。" -#~ msgid "Wrapper-based approach" -#~ msgstr "基于封装的方法" +#~ msgid "Since we want to use MXNet, let's go ahead and install it:" +#~ msgstr "既然我们要使用 MXNet,那就继续安装吧:" #~ msgid "" -#~ "Introducing DP to an existing workload can be thought of as adding an " -#~ "extra layer of security around it. This inspired us to provide the " -#~ "additional server and client-side logic needed to make the training " -#~ "process differentially private as wrappers for instances of the :code:" -#~ "`Strategy` and :code:`NumPyClient` abstract classes respectively. This " -#~ "wrapper-based approach has the advantage of being easily composable with " -#~ "other wrappers that someone might contribute to the Flower library in the " -#~ "future, e.g., for secure aggregation. Using Inheritance instead can be " -#~ "tedious because that would require the creation of new sub- classes every " -#~ "time a new class implementing :code:`Strategy` or :code:`NumPyClient` is " -#~ "defined." +#~ "Now that we have all our " +#~ "dependencies installed, let's run a " +#~ "simple distributed training with two " +#~ "clients and one server. Our training " +#~ "procedure and network architecture are " +#~ "based on MXNet´s `Hand-written Digit " +#~ "Recognition tutorial " +#~ "`_." 
#~ msgstr "" -#~ "在现有工作负载中引入 DP 可以被认为是在其周围增加了一层额外的安全性。受此启" -#~ "发,我们提供了额外的服务器端和客户端逻辑,分别作为 :code:`Strategy` 和 :" -#~ "code:`NumPyClient` 抽象类实例的封装器,使训练过程具有不同的隐私性。这种基" -#~ "于封装器的方法的优点是可以很容易地与将来有人贡献给 Flower 的其他封装器(例" -#~ "如用于安全聚合的封装器)进行组合。使用继承可能会比较繁琐,因为每次定义实" -#~ "现 :code:`Strategy` 或 :code:`NumPyClient` 的新类时,都需要创建新的子类。" +#~ "现在,我们已经安装了所有依赖项,让我们用两个客户端和一个服务器来运行一个简单的分布式训练。我们的训练程序和网络架构基于 " +#~ "MXNet 的 `手写数字识别教程 " +#~ "`_\"。" #~ msgid "" -#~ "The first version of our solution was to define a decorator whose " -#~ "constructor accepted, among other things, a boolean-valued variable " -#~ "indicating whether adaptive clipping was to be enabled or not. We quickly " -#~ "realized that this would clutter its :code:`__init__()` function with " -#~ "variables corresponding to hyperparameters of adaptive clipping that " -#~ "would remain unused when it was disabled. A cleaner implementation could " -#~ "be achieved by splitting the functionality into two decorators, :code:" -#~ "`DPFedAvgFixed` and :code:`DPFedAvgAdaptive`, with the latter sub- " -#~ "classing the former. The constructors for both classes accept a boolean " -#~ "parameter :code:`server_side_noising`, which, as the name suggests, " -#~ "determines where noising is to be performed." -#~ msgstr "" -#~ "我们的第一版解决方案是定义一个装饰器,其构造函数接受一个布尔值变量,表示是" -#~ "否启用自适应剪裁。我们很快意识到,这样会使其 :code:`__init__()` 函数中与自" -#~ "适应裁剪超参数相对应的变量变得杂乱无章,而这些变量在自适应裁剪被禁用时将保" -#~ "持未使用状态。要实现更简洁的功能,可以将该功能拆分为两个装饰器,即 :code:" -#~ "`DPFedAvgFixed` 和 :code:`DPFedAvgAdaptive`,后者是前者的子类。这两个类的" -#~ "构造函数都接受一个布尔参数 :code:`server_side_noising`,顾名思义,它决定了" -#~ "在哪里加噪声。" +#~ "In a file called :code:`client.py`, " +#~ "import Flower and MXNet related " +#~ "packages:" +#~ msgstr "在名为 :code:`client.py` 的文件中,导入 Flower 和 MXNet 相关软件包:" -#~ msgid "" -#~ "The server-side capabilities required for the original version of DP-" -#~ "FedAvg, i.e., the one which performed fixed clipping, can be completely " -#~ "captured with the help of wrapper logic for just the following two " -#~ "methods of the :code:`Strategy` abstract class." -#~ msgstr "" -#~ "只需对 :code:`Strategy` 抽象类的以下两个方法进行封装,就能完全捕获 DP-" -#~ "FedAvg 原始版本(即执行固定剪裁的版本)所需的服务器端功能。" +#~ msgid "In addition, define the device allocation in MXNet with:" +#~ msgstr "此外,还可以在 MXNet 中定义设备分配:" #~ msgid "" -#~ ":code:`configure_fit()` : The config dictionary being sent by the " -#~ "wrapped :code:`Strategy` to each client needs to be augmented with an " -#~ "additional value equal to the clipping threshold (keyed under :code:" -#~ "`dpfedavg_clip_norm`) and, if :code:`server_side_noising=true`, another " -#~ "one equal to the scale of the Gaussian noise that needs to be added at " -#~ "the client (keyed under :code:`dpfedavg_noise_stddev`). This entails " -#~ "*post*-processing of the results returned by the wrappee's implementation " -#~ "of :code:`configure_fit()`." +#~ "We use MXNet to load MNIST, a " +#~ "popular image classification dataset of " +#~ "handwritten digits for machine learning. " +#~ "The MXNet utility :code:`mx.test_utils.get_mnist()`" +#~ " downloads the training and test " +#~ "data." 
#~ msgstr "" -#~ ":code:`configure_fit()` :由封装的 :code:`Strategy` 发送到每个客户端的配置" -#~ "字典需要使用等于裁剪阈值的附加值(在 :code:`dpfedavg_clip_norm` 下键入)进" -#~ "行扩充。并且,如果 server_side_noising=true,则另一个值等于需要在客户端添" -#~ "加的高斯噪声的大小(在 dpfedavg_noise_stddev 下键入)。这需要对封装后的" -#~ "configure_fit() 所返回的结果进行后处理。" +#~ "我们使用 MXNet 加载 MNIST,这是一个用于机器学习的流行手写数字图像分类数据集。MXNet" +#~ " 工具 :code:`mx.test_utils.get_mnist()` 会下载训练和测试数据。" #~ msgid "" -#~ ":code:`aggregate_fit()`: We check whether any of the sampled clients " -#~ "dropped out or failed to upload an update before the round timed out. In " -#~ "that case, we need to abort the current round, discarding any successful " -#~ "updates that were received, and move on to the next one. On the other " -#~ "hand, if all clients responded successfully, we must force the averaging " -#~ "of the updates to happen in an unweighted manner by intercepting the :" -#~ "code:`parameters` field of :code:`FitRes` for each received update and " -#~ "setting it to 1. Furthermore, if :code:`server_side_noising=true`, each " -#~ "update is perturbed with an amount of noise equal to what it would have " -#~ "been subjected to had client-side noising being enabled. This entails " -#~ "*pre*-processing of the arguments to this method before passing them on " -#~ "to the wrappee's implementation of :code:`aggregate_fit()`." -#~ msgstr "" -#~ ":code:`aggregate_fit()`: 我们会检查是否有任何客户端在本轮超时前退出或未能" -#~ "上传参数更新。在这种情况下,我们需要中止当前一轮,丢弃已收到的所有参数更" -#~ "新,然后继续下一轮。另一方面,如果所有客户端都成功响应,我们就必须通过拦" -#~ "截 :code:`FitRes` 的 :code:`parameters` 字段并将其设置为 1,强制以不加权的" -#~ "方式平均更新。此外,如果 :code:`server_side_noising=true`,每次更新都会受" -#~ "到一定量的噪声扰动,其扰动量相当于启用客户端噪声时的扰动量。 这就需要在将" -#~ "本方法的参数传递给封装的 :code:`aggregate_fit()` 之前,对参数进行*预*处" -#~ "理。" +#~ "Define the training and loss with " +#~ "MXNet. We train the model by " +#~ "looping over the dataset, measure the" +#~ " corresponding loss, and optimize it." +#~ msgstr "用 MXNet 定义训练和损失值。我们在数据集上循环训练模型,测量相应的损失值,并对其进行优化。" #~ msgid "" -#~ "We can't directly change the aggregation function of the wrapped strategy " -#~ "to force it to add noise to the aggregate, hence we simulate client-side " -#~ "noising to implement server-side noising." -#~ msgstr "" -#~ "我们无法直接改变封装策略的聚合函数,迫使它在聚合中添加噪声,因此我们模拟客" -#~ "户端噪声来实现服务器端噪声。" +#~ "Next, we define the validation of " +#~ "our machine learning model. We loop " +#~ "over the test set and measure both" +#~ " loss and accuracy on the test " +#~ "set." +#~ msgstr "接下来,我们定义机器学习模型的验证。我们在测试集上循环,测量测试集上的损失值和准确率。" #~ msgid "" -#~ "These changes have been put together into a class called :code:" -#~ "`DPFedAvgFixed`, whose constructor accepts the strategy being decorated, " -#~ "the clipping threshold and the number of clients sampled every round as " -#~ "compulsory arguments. The user is expected to specify the clipping " -#~ "threshold since the order of magnitude of the update norms is highly " -#~ "dependent on the model being trained and providing a default value would " -#~ "be misleading. The number of clients sampled at every round is required " -#~ "to calculate the amount of noise that must be added to each individual " -#~ "update, either by the server or the clients." -#~ msgstr "" -#~ "这些变化被整合到一个名为 :code:`DPFedAvgFixed` 的类中,其构造函数接受被装" -#~ "饰的策略、剪切阈值和每轮采样的客户数作为必选参数。用户需要指定剪切阈值,因" -#~ "为参数更新规范的数量级在很大程度上取决于正在训练的模型,提供默认值会产生误" -#~ "导。每轮采样的客户端数量是计算服务器或客户在每次参数更新时添加的噪音量所必" -#~ "需的。" +#~ "After defining the training and testing" +#~ " of a MXNet machine learning model," +#~ " we use these functions to implement" +#~ " a Flower client." 
+#~ msgstr "在定义了 MXNet 机器学习模型的训练和测试后,我们使用这些函数实现了 Flower 客户端。" -#~ msgid "" -#~ "The additional functionality required to facilitate adaptive clipping has " -#~ "been provided in :code:`DPFedAvgAdaptive`, a subclass of :code:" -#~ "`DPFedAvgFixed`. It overrides the above-mentioned methods to do the " -#~ "following." -#~ msgstr "" -#~ "自适应剪裁所需的附加功能在 :code:`DPFedAvgAdaptive` 中提供,其是 :code:" -#~ "`DPFedAvgFixed` 的子类。它重写了上述方法,以实现以下功能。" +#~ msgid "Our Flower clients will use a simple :code:`Sequential` model:" +#~ msgstr "我们的 Flower 客户端将使用简单的 :code:`Sequential` 模型:" #~ msgid "" -#~ ":code:`configure_fit()` : It intercepts the config dict returned by :code:" -#~ "`super.configure_fit()` to add the key-value pair :code:" -#~ "`dpfedavg_adaptive_clip_enabled:True` to it, which the client interprets " -#~ "as an instruction to include an indicator bit (1 if update norm <= " -#~ "clipping threshold, 0 otherwise) in the results returned by it." +#~ "After loading the dataset with " +#~ ":code:`load_data()` we perform one forward " +#~ "propagation to initialize the model and" +#~ " model parameters with :code:`model(init)`. " +#~ "Next, we implement a Flower client." #~ msgstr "" -#~ ":code:`configure_fit()`:它截取由 :code:`super.configure_fit()` 返回的 " -#~ "config 字典,并在其中添加键-值对 :code:`dpfedavg_adaptive_clip_enabled:" -#~ "True\",客户端将其解释为在返回结果中包含一个指示位(如果参数更新范式 <= 剪" -#~ "裁阈值,则为 1,否则为 0)的指令。" +#~ "使用 :code:`load_data()` 加载数据集后,我们会执行一次前向传播,使用 " +#~ ":code:`model(init)` 初始化模型和模型参数。接下来,我们实现一个 Flower " +#~ "客户端。" #~ msgid "" -#~ ":code:`aggregate_fit()` : It follows a call to :code:`super." -#~ "aggregate_fit()` with one to :code:`__update_clip_norm__()`, a procedure " -#~ "which adjusts the clipping threshold on the basis of the indicator bits " -#~ "received from the sampled clients." +#~ "Flower provides a convenience class " +#~ "called :code:`NumPyClient` which makes it " +#~ "easier to implement the :code:`Client` " +#~ "interface when your workload uses MXNet." +#~ " Implementing :code:`NumPyClient` usually means" +#~ " defining the following methods " +#~ "(:code:`set_parameters` is optional though):" #~ msgstr "" -#~ ":code:`aggregate_fit()`:在调用:code:`super.aggregate_fit()`后,再调用:" -#~ "code:`__update_clip_norm__()`,该过程根据从采样客户端接收到的指示位调整裁" -#~ "剪阈值。" +#~ "Flower 提供了一个名为 :code:`NumPyClient` 的便捷类,当您的工作负载使用" +#~ " MXNet 时,它可以让您更轻松地实现 :code:`Client` 接口。实现 " +#~ ":code:`NumPyClient` 通常意味着定义以下方法(:code:`set_parameters` " +#~ "是可选的):" -#~ msgid "" -#~ "The client-side capabilities required can be completely captured through " -#~ "wrapper logic for just the :code:`fit()` method of the :code:" -#~ "`NumPyClient` abstract class. To be precise, we need to *post-process* " -#~ "the update computed by the wrapped client to clip it, if necessary, to " -#~ "the threshold value supplied by the server as part of the config " -#~ "dictionary. In addition to this, it may need to perform some extra work " -#~ "if either (or both) of the following keys are also present in the dict." -#~ msgstr "" -#~ "客户端所需的功能完全可以通过 :code:`NumPyClient` 抽象类的 :code:`fit()` 方" -#~ "法的封装逻辑来实现。准确地说,我们需要对封装客户端计算的参数更新进行处理," -#~ "以便在必要时将其剪切到服务器作为配置字典的一部分提供的阈值。除此之外,如果" -#~ "配置字典中还存在以下任一(或两个)键,客户端可能还需要执行一些额外的工作。" +#~ msgid "They can be implemented in the following way:" +#~ msgstr "它们可以通过以下方式实现:" #~ msgid "" -#~ ":code:`dpfedavg_noise_stddev` : Generate and add the specified amount of " -#~ "noise to the clipped update." 
-#~ msgstr "" -#~ "code:`dpfedavg_noise_stddev`:生成并在剪切参数更新中添加指定数量的噪声。" +#~ "We can now create an instance of" +#~ " our class :code:`MNISTClient` and add " +#~ "one line to actually run this " +#~ "client:" +#~ msgstr "现在我们可以创建一个 :code:`MNISTClient` 类的实例,并添加一行来实际运行该客户端:" #~ msgid "" -#~ ":code:`dpfedavg_adaptive_clip_enabled` : Augment the metrics dict in the :" -#~ "code:`FitRes` object being returned to the server with an indicator bit, " -#~ "calculated as described earlier." +#~ "That's it for the client. We only" +#~ " have to implement :code:`Client` or " +#~ ":code:`NumPyClient` and call " +#~ ":code:`fl.client.start_client()` or " +#~ ":code:`fl.client.start_numpy_client()`. The string " +#~ ":code:`\"0.0.0.0:8080\"` tells the client " +#~ "which server to connect to. In our" +#~ " case we can run the server and" +#~ " the client on the same machine, " +#~ "therefore we use :code:`\"0.0.0.0:8080\"`. If" +#~ " we run a truly federated workload" +#~ " with the server and clients running" +#~ " on different machines, all that " +#~ "needs to change is the " +#~ ":code:`server_address` we pass to the " +#~ "client." #~ msgstr "" -#~ ":code:`dpfedavg_adaptive_clip_enabled`:在返回给服务器的 :code:`FitRes` 对" -#~ "象中的度量值字典中增加一个指标位,计算方法如前所述。" +#~ "这就是客户端。我们只需实现 :code:`Client` 或 :code:`NumPyClient`" +#~ " 并调用 :code:`fl.client.start_client()` 或 " +#~ ":code:`fl.client.start_numpy_client()`。字符串 " +#~ ":code:`\"0.0.0.0:8080\"`会告诉客户端要连接的服务器。在本例中,我们可以在同一台机器上运行服务器和客户端,因此我们使用" +#~ " " +#~ ":code:`\"0.0.0.0:8080\"`。如果我们运行的是真正的联邦工作负载,服务器和客户端运行在不同的机器上,那么需要改变的只是传递给客户端的" +#~ " :code:`server_address`。" -#~ msgid "Performing the :math:`(\\epsilon, \\delta)` analysis" -#~ msgstr "进行 :math:`(epsilon, \\delta)` 分析" +#~ msgid "" +#~ "With both client and server ready, " +#~ "we can now run everything and see" +#~ " federated learning in action. Federated" +#~ " learning systems usually have a " +#~ "server and multiple clients. We " +#~ "therefore have to start the server " +#~ "first:" +#~ msgstr "客户端和服务器都准备就绪后,我们现在就可以运行一切,看看联邦学习的运行情况。联邦学习系统通常有一个服务器和多个客户端。因此,我们必须先启动服务器:" #~ msgid "" -#~ "Assume you have trained for :math:`n` rounds with sampling fraction :math:" -#~ "`q` and noise multiplier :math:`z`. In order to calculate the :math:" -#~ "`\\epsilon` value this would result in for a particular :math:`\\delta`, " -#~ "the following script may be used." +#~ "Congratulations! You've successfully built and" +#~ " run your first federated learning " +#~ "system. The full `source code " +#~ "`_ for this example can " +#~ "be found in :code:`examples/quickstart-mxnet`." #~ msgstr "" -#~ "假设您已经训练了 :math:`n` 轮,采样比例为 :math:`q`,噪声乘数为 :math:" -#~ "`z`。为了计算特定 :math:`\\delta` 的 :math:`epsilon` 值,可以使用下面的脚" -#~ "本。" - -#~ msgid "Flower driver SDK." -#~ msgstr "Flower 服务器。" +#~ "恭喜您!您已经成功构建并运行了第一个联邦学习系统。本示例的`完整源代码 " +#~ "`_ 可在 :code:`examples/quickstart-" +#~ "mxnet` 中找到。" -#~ msgid "driver" -#~ msgstr "服务器" - -#~ msgid "Get task results." -#~ msgstr "汇总训练结果。" +#~ msgid ":code:`load_mnist()`" +#~ msgstr ":code:`load_mnist()`" -#~ msgid "Request for run ID." -#~ msgstr "Flower 基线申请" +#~ msgid "Loads the MNIST dataset using OpenML" +#~ msgstr "使用 OpenML 加载 MNIST 数据集" -#~ msgid "Get client IDs." -#~ msgstr "返回客户端(本身)。" +#~ msgid ":code:`shuffle()`" +#~ msgstr ":code:`shuffle()`" -#~ msgid "" -#~ "Flower usage examples used to be bundled with Flower in a package called " -#~ "``flwr_example``. We are migrating those examples to standalone projects " -#~ "to make them easier to use. 
All new examples are based in the directory " -#~ "`examples `_." -#~ msgstr "" -#~ "Flower 的使用示例曾与 Flower 捆绑在一个名为 ``flwr_example`` 的软件包中。" -#~ "我们正在将这些示例迁移到独立项目中,以使它们更易于使用。所有新示例都位于目" -#~ "录 `examples `_。" +#~ msgid "Shuffles data and its label" +#~ msgstr "对数据及其标签进行洗牌" -#~ msgid "Quickstart TensorFlow/Keras" -#~ msgstr "快速入门 TensorFlow/Keras" +#~ msgid ":code:`partition()`" +#~ msgstr ":code:`partition()`" -#~ msgid "Legacy Examples (`flwr_example`)" -#~ msgstr "传统示例 (`flwr_example`)" +#~ msgid "Splits datasets into a number of partitions" +#~ msgstr "将数据集分割成多个分区" #~ msgid "" -#~ "The useage examples in `flwr_example` are deprecated and will be removed " -#~ "in the future. New examples are provided as standalone projects in " -#~ "`examples `_." +#~ "We load the MNIST dataset from " +#~ "`OpenML " +#~ "`_, a" +#~ " popular image classification dataset of" +#~ " handwritten digits for machine learning." +#~ " The utility :code:`utils.load_mnist()` downloads" +#~ " the training and test data. The " +#~ "training set is split afterwards into" +#~ " 10 partitions with :code:`utils.partition()`." #~ msgstr "" -#~ "在 `flwr_example` 中的使用示例已被弃用,今后将被移除。新示例将作为独立项目" -#~ "在 `examples `_ 中提" -#~ "供。" +#~ "我们从 `OpenML `_ 中加载 " +#~ "MNIST 数据集,这是一个用于机器学习的流行手写数字图像分类数据集。实用程序 " +#~ ":code:`utils.load_mnist()` 下载训练和测试数据。然后使用 " +#~ ":code:`utils.partition()`将训练集分割成 10 个分区。" -#~ msgid "Extra Dependencies" -#~ msgstr "额外依赖" +#~ msgid "Let's get stated!" +#~ msgstr "让我们开始吧!" -#~ msgid "" -#~ "The core Flower framework keeps a minimal set of dependencies. The " -#~ "examples demonstrate Flower in the context of different machine learning " -#~ "frameworks, so additional dependencies need to be installed before an " -#~ "example can be run." +#~ msgid "|2b5c62c529f6416f840c594cce062fbb|" #~ msgstr "" -#~ "Flower 核心框架只保留了最低限度的依赖项。这些示例在不同机器学习框架的背景" -#~ "下演示了 Flower,因此在运行示例之前需要安装额外的依赖项。" -#~ msgid "For PyTorch examples::" -#~ msgstr "PyTorch 示例::" - -#~ msgid "For TensorFlow examples::" -#~ msgstr "TensorFlow 示例::" - -#~ msgid "For both PyTorch and TensorFlow examples::" -#~ msgstr "PyTorch 和 TensorFlow 示例::" - -#~ msgid "" -#~ "Please consult :code:`pyproject.toml` for a full list of possible extras " -#~ "(section :code:`[tool.poetry.extras]`)." +#~ msgid "|90b334680cb7467d9a04d39b8e8dca9f|" #~ msgstr "" -#~ "请参阅 :code:`pyproject.toml`,了解可能的 extras 的完整列表(章节 :code:" -#~ "`[tool.poems.extras]`)。" - -#~ msgid "PyTorch Examples" -#~ msgstr "PyTorch 示例" -#~ msgid "" -#~ "Our PyTorch examples are based on PyTorch 1.7. They should work with " -#~ "other releases as well. So far, we provide the following examples." +#~ msgid "|65764ceee89f4335bfd93fd0b115e831|" #~ msgstr "" -#~ "我们的 PyTorch 示例基于 PyTorch 1.7。它们应该也能在其他版本中使用。到目前" -#~ "为止,我们提供了以下示例。" - -#~ msgid "CIFAR-10 Image Classification" -#~ msgstr "CIFAR-10 图像分类" -#~ msgid "" -#~ "`CIFAR-10 and CIFAR-100 `_ " -#~ "are popular RGB image datasets. The Flower CIFAR-10 example uses PyTorch " -#~ "to train a simple CNN classifier in a federated learning setup with two " -#~ "clients." 
+#~ msgid "|d97319ec28bb407ea0ab9705e38f3bcf|" #~ msgstr "" -#~ "CIFAR-10 和 CIFAR-100 ``_ 是" -#~ "流行的 RGB 图像数据集。Flower CIFAR-10 示例使用 PyTorch 在有两个客户端的联" -#~ "邦学习设置中训练一个简单的 CNN 分类器。" - -#~ msgid "First, start a Flower server:" -#~ msgstr "首先,启动 Flower 服务器:" - -#~ msgid "$ ./src/py/flwr_example/pytorch_cifar/run-server.sh" -#~ msgstr "$ ./src/py/flwr_example/pytorch_cifar/run-server.sh" - -#~ msgid "Then, start the two clients in a new terminal window:" -#~ msgstr "然后,在新的终端窗口中启动两个客户端:" - -#~ msgid "$ ./src/py/flwr_example/pytorch_cifar/run-clients.sh" -#~ msgstr "$ ./src/py/flwr_example/pytorch_cifar/run-clients.sh" -#~ msgid "For more details, see :code:`src/py/flwr_example/pytorch_cifar`." -#~ msgstr "更多详情,请参阅 :code:`src/py/flwr_example/pytorch_cifar`。" - -#~ msgid "ImageNet-2012 Image Classification" -#~ msgstr "ImageNet-2012 图像分类" - -#~ msgid "" -#~ "`ImageNet-2012 `_ is one of the major computer " -#~ "vision datasets. The Flower ImageNet example uses PyTorch to train a " -#~ "ResNet-18 classifier in a federated learning setup with ten clients." +#~ msgid "|11e95ac83a8548d8b3505b4663187d07|" #~ msgstr "" -#~ "ImageNet-2012 `_ 是主要的计算机视觉数据集之一。" -#~ "Flower ImageNet 示例使用 PyTorch 在有十个客户端的联邦学习设置中训练 " -#~ "ResNet-18 分类器。" - -#~ msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-server.sh" -#~ msgstr "$ ./src/py/flwr_example/pytorch_imagenet/run-server.sh" -#~ msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-clients.sh" -#~ msgstr "$ ./src/py/flwr_example/pytorch_imagenet/run-clients.sh" +#~ msgid "|1dab2f3a23674abc8a6731f20fa10730|" +#~ msgstr "" -#~ msgid "For more details, see :code:`src/py/flwr_example/pytorch_imagenet`." -#~ msgstr "更多详情,请参阅 :code:`src/py/flwr_example/pytorch_imagenet`。" +#~ msgid "|7f0ee162da38450788493a21627306f7|" +#~ msgstr "" -#~ msgid "TensorFlow Examples" -#~ msgstr "TensorFlow 示例" +#~ msgid "|296a1fb72c514b23b3d8905ff0ff98c6|" +#~ msgstr "" -#~ msgid "" -#~ "Our TensorFlow examples are based on TensorFlow 2.0 or newer. So far, we " -#~ "provide the following examples." +#~ msgid "|5b1408eec0d746cdb91162a9107b6089|" #~ msgstr "" -#~ "我们的 TensorFlow 示例基于 TensorFlow 2.0 或更新版本。到目前为止,我们提供" -#~ "了以下示例。" -#~ msgid "Fashion-MNIST Image Classification" -#~ msgstr "Fashion-MNIST 图像分类" +#~ msgid "|aef19f4b122c4e8d9f4c57f99bcd5dd2|" +#~ msgstr "" -#~ msgid "" -#~ "`Fashion-MNIST `_ is " -#~ "often used as the \"Hello, world!\" of machine learning. We follow this " -#~ "tradition and provide an example which samples random local datasets from " -#~ "Fashion-MNIST and trains a simple image classification model over those " -#~ "partitions." +#~ msgid "|2881a86d8fc54ba29d96b29fc2819f4a|" #~ msgstr "" -#~ "`Fashion-MNIST `_ 经常被" -#~ "用作机器学习的 \"你好,世界!\"。我们遵循这一传统,提供了一个从Fashion-" -#~ "MNIST 中随机抽样本地数据集的示例,并在这些分区上训练一个简单的图像分类模" -#~ "型。" -#~ msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-server.sh" -#~ msgstr "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-server.sh" +#~ msgid "|ec1fe880237247e0975f52766775ab84|" +#~ msgstr "" -#~ msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-clients.sh" -#~ msgstr "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-clients.sh" +#~ msgid "|9fdf048ed58d4467b2718cdf4aaf1ec3|" +#~ msgstr "" -#~ msgid "" -#~ "For more details, see :code:`src/py/flwr_example/" -#~ "tensorflow_fashion_mnist`." 
+#~ msgid "|ff726bc5505e432388ee2fdd6ef420b9|" #~ msgstr "" -#~ "更多详情,请参阅 :code:`src/py/flwr_example/tensorflow_fashion_mnist`。" + diff --git a/doc/locales/zh_Hans/LC_MESSAGES/sphinx.po b/doc/locales/zh_Hans/LC_MESSAGES/sphinx.po index 98f300b872e..6c786f9ed8a 100644 --- a/doc/locales/zh_Hans/LC_MESSAGES/sphinx.po +++ b/doc/locales/zh_Hans/LC_MESSAGES/sphinx.po @@ -8,7 +8,7 @@ msgid "" msgstr "" "Project-Id-Version: Flower main\n" "Report-Msgid-Bugs-To: \n" -"POT-Creation-Date: 2023-11-23 18:31+0100\n" +"POT-Creation-Date: 2024-05-28 11:47+0200\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language: zh_Hans\n" @@ -17,7 +17,7 @@ msgstr "" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" -"Generated-By: Babel 2.13.1\n" +"Generated-By: Babel 2.15.0\n" #: ../../source/_templates/base.html:18 msgid "About these documents" @@ -28,6 +28,7 @@ msgid "Index" msgstr "" #: ../../source/_templates/base.html:24 +#: ../../source/_templates/sidebar/search.html:2 msgid "Search" msgstr "" @@ -35,3 +36,7 @@ msgstr "" msgid "Copyright" msgstr "" +#: ../../source/_templates/sidebar/search.html:2 +msgid "Type / to search" +msgstr "" +