They are open source. This means that if the library has bugs, users can report them (and fix them) through GitHub issues; they can also extend the library with functionality of their own;
Because of the global interpreter lock (GIL), pure Python runs slowly, so these frameworks use a C/C++ back end to handle all the computation and parallel processing.
TensorFlow is a very friendly framework: the availability of the high-level Keras API makes defining model layers, loss functions, and complete models very easy;
TensorFlow 2.0 ships with Eager Execution (a dynamic-graph mechanism), which makes the library much more user-friendly and is a major upgrade over previous versions;
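A minimal sketch of what Eager Execution means in practice: in TensorFlow 2.x, operations run immediately and return concrete values, with no need to build a static graph and open a session first.

```python
import tensorflow as tf

# Eager execution is enabled by default in TensorFlow 2.x:
# each operation runs as soon as it is called.
x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
y = tf.matmul(x, x)   # executes immediately, no session needed
print(y.numpy())      # the result is available right away as a NumPy array
```

Because values are concrete at every step, ordinary Python control flow and print-style debugging work directly on tensors.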
The high-level Keras interface has its drawbacks: TensorFlow abstracts away many low-level mechanisms (for the convenience of end users), which leaves researchers with less freedom to manipulate their models;
TensorFlow provides TensorBoard, which is its visualization toolkit. It lets researchers visualize loss curves, the model graph, profiling data, and more.
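A hedged sketch of how metrics reach TensorBoard: scalars are written to an event-log directory with `tf.summary`, and TensorBoard then reads that directory. The log directory here is just a temporary example path.

```python
import os
import tempfile
import tensorflow as tf

# Write a few scalar summaries that TensorBoard can display.
logdir = tempfile.mkdtemp()               # example log directory
writer = tf.summary.create_file_writer(logdir)
with writer.as_default():
    for step in range(3):
        # Log a decreasing "loss" value at each step.
        tf.summary.scalar("loss", 1.0 / (step + 1), step=step)
writer.flush()
# View it with: tensorboard --logdir <logdir>
```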
Unlike TensorFlow, PyTorch uses dynamic computation graphs, which means the execution graph is created at run time. This lets us inspect and modify the internal structure of the graph at any point;
Besides its user-friendly high-level API, PyTorch also includes a carefully built low-level API that allows ever finer control over a model. We can inspect the forward and backward passes of the model and modify outputs during training, which proves very effective for gradient clipping and neural style transfer;
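As a sketch of that low-level control: because PyTorch exposes the point between `backward()` and the optimizer step, gradients can be inspected and modified in flight. Here they are clipped to a maximum norm (the toy linear model is just an illustration).

```python
import torch

# A toy model and optimizer, purely for illustration.
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 4)
loss = model(x).pow(2).mean()
loss.backward()                     # gradients are now populated and inspectable

# Modify the gradients before the update: clip their total norm to 1.0.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
opt.step()
```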
PyTorch lets users extend its code: new loss functions and user-defined layers are easy to add. PyTorch's Autograd module implements the backpropagation of derivatives used in deep learning: for every operation on a Tensor, Autograd automatically provides the differential, sparing us the tedious process of computing derivatives by hand;
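A minimal sketch of Autograd at work: operations on a tensor with `requires_grad=True` are recorded in a graph built during the forward pass, and `backward()` walks that graph to fill in gradients automatically.

```python
import torch

# Track operations on x so gradients can be computed.
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x        # the graph is created here, at run time
y.backward()              # backpropagate: dy/dx = 2x + 2
print(x.grad)             # tensor(8.) at x = 3
```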
PyTorch has extensive support for data parallelism and GPU usage;
PyTorch is more Pythonic than TensorFlow. It fits naturally into the Python ecosystem, allowing Python-style debugging tools to be used on PyTorch code.
As described on the official website, JAX can perform composable transformations of Python+NumPy programs: differentiate, vectorize, JIT-compile to GPU/TPU, and more;
Compared with PyTorch, the most important difference in JAX is how gradients are computed. In Torch, the graph is created during the forward pass and gradients are computed during the backward pass. In JAX, on the other hand, the computation is expressed as a function; applying grad() to that function returns a new gradient function, which directly computes the gradient of the function for a given input;
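The contrast above can be sketched in a few lines: instead of calling `backward()` on a result, JAX transforms the function itself, and the transformations compose.

```python
import jax

# The computation is just a pure Python function.
def f(x):
    return x ** 2 + 2.0 * x

# grad() returns a NEW function that computes df/dx = 2x + 2.
df = jax.grad(f)
print(df(3.0))            # 8.0

# Transformations compose: the gradient function can itself be JIT-compiled.
fast_df = jax.jit(df)
```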
JAX is an autograd tool, and using it on its own is rarely recommended. There are various machine-learning libraries built on JAX, most notably ObJax, Flax, and Elegy. Since they all share the same core and their interfaces are just wrappers around the JAX library, they can be grouped under the same umbrella;
Flax was originally developed within the PyTorch ecosystem and focuses more on flexibility of use. Elegy, on the other hand, is inspired by Keras. ObJAX is designed mainly for research-oriented purposes and emphasizes simplicity and comprehensibility.
This article was shared by Chang Zheng. Please include a link to the original when reposting. Thanks!