
Hyperparameters Study #236

Open
rezaBarzgar opened this issue Apr 17, 2024 · 1 comment
Labels: curriculum, experiment (Running a study or baseline for results)

Comments

@rezaBarzgar
Member

rezaBarzgar commented Apr 17, 2024

This issue reports the results of my SuperLoss hyperparameter study on IMDB and DBLP.

You can find the detailed results here, in the sheets DBLP hyperparameter SUPERLOSS and IMDB hyperparameter SUPERLOSS.

SuperLoss has two hyperparameters, τ and λ:

  • τ: A threshold that separates easy experts from difficult ones based on their respective losses.
  • λ: The regularization hyperparameter.
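For context, SuperLoss weighs each sample's loss by an optimal confidence σ* that has a closed form via the Lambert W function. Below is a minimal sketch of that closed form (not our actual implementation), assuming `scipy` is available and λ > 0; the λ = 0.0 settings in this study fall outside this formula:

```python
import numpy as np
from scipy.special import lambertw

def superloss(loss, tau, lam):
    """SuperLoss with threshold tau and regularization lam (lam > 0).

    SL(l) = (l - tau) * sigma* + lam * (log sigma*)**2, where the optimal
    confidence sigma* = exp(-W(0.5 * max(-2/e, (l - tau) / lam))).
    Samples with loss above tau get sigma* < 1 (down-weighted);
    samples with loss below tau get sigma* > 1 (up-weighted).
    """
    beta = (loss - tau) / lam
    # Clamp the argument so W stays on its real principal branch.
    sigma = np.exp(-lambertw(0.5 * np.maximum(-2.0 / np.e, beta)).real)
    return (loss - tau) * sigma + lam * np.log(sigma) ** 2
```

For example, a sample whose loss equals τ gets σ* = 1 and a SuperLoss of 0, while a harder sample (loss > τ) contributes less than its plain shifted loss.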

In this hyperparameter study, 7 different combinations of τ and λ were tested on 2 datasets (IMDB and DBLP):

  • τ 0.7 - λ 0.0 (finished)
  • τ 0.7 - λ 0.9 (finished)
  • τ 0.0 - λ 0.0 (finished)
  • τ 0.5 - λ 0.5 (finished)
  • τ 0.0 - λ 0.9 (finished) (the original values used for the results reported in our papers)
  • τ 0.9 - λ 0.9 (running)
  • τ 0.9 - λ 0.0 (finished)

Analysis of the results:

  • This hyperparameter study supports our claim that loss-based curriculum learning techniques are ineffective for Bayesian neural networks.
  • Based on the results, λ has a positive effect on model performance on the team formation task, while τ has a negative one: a higher λ and a lower τ yield better results. The best results were obtained with τ 0.0 - λ 0.9.
@rezaBarzgar rezaBarzgar added the experiment Running a study or baseline for results label Apr 17, 2024
@rezaBarzgar rezaBarzgar self-assigned this Apr 17, 2024
@rezaBarzgar
Member Author

@hosseinfani

So far, I've conducted the following experiments, which we can include among the contributions of our next paper:

  • Created a static curriculum based on popularity labels. The results align with our claim that dynamic CL is a better fit for the Team Recommendation task.
  • Conducted a hyperparameter study on SuperLoss to examine the effects of its hyperparameters on the Team Recommendation task. It's worth mentioning that parametric CL (data parameters) has no additional hyperparameters to study.

I would appreciate it if you could review the results sheet and share your feedback.

Note: The results for the static curriculum will be updated on the sheet.
