Keras loss stays constant

Most of the time the test and validation accuracy converge to chance level, roughly 1/n where n is the number of classes.


I'm using tensorflow-gpu==2.0 and Python 3.6. I kept the learning rate at around 1e-5 while training. During training, both the loss and the accuracy stay constant, with the accuracy sitting at about 1/n where n is the number of classes. I have read that a tf.constant cannot be updated, and that rebinding it with a simple "=" does not actually change it. I also create and close the tf sessions explicitly, as I have read that leaving them open may cause problems. Is there a concept I am missing that is causing my loss to be constant? Here is the dataset and the NN I was using; it is based on an example in Keras. A related question: why would validation loss steadily decrease while validation accuracy holds constant? This happens every time.
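On the tf.constant point: a minimal sketch of the difference between a constant and a variable in TF2 (assuming TensorFlow 2.x; in eager mode there is no need to create or close sessions at all). Rebinding a constant with "=" only repoints the Python name at a new tensor, while trainable state must live in a tf.Variable, which supports in-place updates:

```python
import tensorflow as tf

# A tf.constant is immutable. Rebinding the Python name with "=" creates
# a brand-new tensor; anything that already holds a reference to the
# original constant (e.g. a model) still sees the old value.
c = tf.constant(1.0)
c = c + 1.0  # new tensor; the original constant object is unchanged

# Trainable state must be a tf.Variable, which can be updated in place.
# Keras layers store their weights as tf.Variable objects, so the
# optimizer can actually modify them during training.
v = tf.Variable(1.0)
v.assign_add(1.0)  # in-place update
print(float(v))    # 2.0
```

If weights were ever stored as constants (or gradients never reach them), the optimizer has nothing to update and the loss stays flat, which matches the symptom described above.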