Hi again, I'm back with a new question after only a few days :)
I want to define the last logit layer as a class-center loss of the form exp(-||x - mu_i||^2) (for now the exp is not important, so I didn't include it). I have 4,000 classes and wrote this code:
net_auto = Layer.fromDagNN(net_dagx);
W = Param('value', randn(1,1,4096,4000,'single'), 'learningRate', 2);
net_1 = sum((W - net_auto{1}).^2, 3);
When I looked inside vl_wsum I saw these sizes and got an error message; the first size is for W and the second is the mini-batch data.
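For reference, here is a minimal plain-MATLAB sketch of the size clash (not the actual autonn graph; it assumes net_auto{1} is a 1x1x4096xN feature tensor and uses random data):
W = randn(1,1,4096,4000,'single');   % one 4096-d center per class
x = randn(1,1,4096,40,'single');     % hypothetical mini-batch features, N = 40
% W - x fails: the 4th dimension is 4000 for W but 40 for x, and neither is a
% singleton, so the elementwise subtraction (which, judging by the error,
% ends up in vl_wsum) cannot match the two tensors.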
I thought that autonn processed the mini-batch data one sample at a time, so I had always written my code with that idea in mind. Then I tried a for-loop:
for i = 1:40
  net_1a{i} = sum((W - net_auto{1}(1,1,:,i)).^2, 3);
end
net_1 = cat(4, net_1a{:});
This gets past vl_wsum, but now vl_nnloss gives an error.
Update: I changed the code to this:
W = Param('value', randn(1,1,4096,4000,'single'), 'learningRate', 5);
for i = 1:20
  net_1a{i} = sum((W - net_auto{1}(1,1,:,i)).^2, 3);
  net_1a{i} = reshape(net_1a{i}, 1, 1, []);
end
net_1 = 1 - cat(4, net_1a{:});
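A plain-MATLAB shape trace of the loop above (just a sketch with random data, assuming the feature tensor is 1x1x4096x20; it is not the actual autonn graph):
W = randn(1,1,4096,4000,'single');
x = randn(1,1,4096,20,'single');
d = sum((W - x(1,1,:,1)).^2, 3);   % 1x1x1x4000: squared distance of sample 1 to every center
d = reshape(d, 1, 1, []);          % 1x1x4000
% After cat(4, ...) over the 20 samples the result is 1x1x4000x20, i.e. the
% 1 x 1 x numClasses x batchSize layout that vl_nnloss expects for class scores.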
I have to reshape the result of sum((W - net_auto{1}(1,1,:,i)).^2, 3) because it comes out as [1,1,1,4000] and it must be [1,1,4000]. Question (1): Am I right about defining the class centers as W? Also, after training I got a new error message:
I really don't understand it :/
Question (2): If I use vl_nnconv instead, the same W tensor can be reused and the results come from the inner product <n,w>. In the case above the results are produced by |n-w|^2, which shouldn't add much computational complexity, yet the vl_nnconv version is about 5x faster than the definition above. Do the reshape and sum operations take that much time in autonn, or is it the for-loop? Also, with vl_nnconv I can use a mini-batch size of 40, but now I get out-of-memory on my 1080 Ti, so I have to use a mini-batch of 20 for this case. What causes this?
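For what it's worth, the squared distance can also be expanded as ||n - w||^2 = ||n||^2 - 2<n,w> + ||w||^2, so the heavy cross term is exactly a 1x1 convolution (one batched matrix multiply on the GPU), while the per-sample loop materialises a 1x1x4096x4000 intermediate (~65 MB in single) for every image that autonn also keeps for backprop, which probably explains both the slowdown and the out-of-memory. A hedged plain-MATLAB sketch with random data (inside autonn the elementwise ops would need the same singleton-dimension care as in the loop above):
x     = randn(1,1,4096,20,'single');             % hypothetical mini-batch features
W     = randn(1,1,4096,4000,'single');           % class centers
inner = vl_nnconv(x, W, []);                     % 1x1x4000x20, the <n,w> terms
n2    = sum(x.^2, 3);                            % 1x1x1x20,   ||n||^2 per image
w2    = reshape(sum(W.^2, 3), 1, 1, []);         % 1x1x4000,   ||w||^2 per center
d2    = n2 - 2*inner + w2;                       % 1x1x4000x20 squared distances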