Sequence Model Recommendation with YOLOv8 #16936

Closed · Answered by glenn-jocher
aemiio asked this question in Q&A

For handling sequential dependencies in Braille with YOLOv8, a Transformer model is often a strong choice because its self-attention mechanism captures long-range dependencies effectively. However, if computational resources are a concern, an LSTM or a Bidirectional LSTM is a lighter alternative that still works well for context-sensitive tasks. Consider experimenting with these models to see which best suits your project's needs.
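Whichever sequence model you pick, the YOLOv8 detections first have to be turned into an ordered sequence. A minimal sketch of that preprocessing step, assuming each detection has been reduced to `(x_center, y_center, class_id)` (the helper name and the row-grouping tolerance are illustrative, not part of the Ultralytics API):

```python
def detections_to_sequence(boxes, row_tolerance=10.0):
    """Order box detections into a reading sequence for a sequence model.

    boxes: iterable of (x_center, y_center, class_id) tuples, e.g. extracted
    from YOLOv8 results. Detections are grouped into rows by y-coordinate
    (within row_tolerance pixels), then each row is read left to right.
    Returns the flat list of class_ids in reading order, which can then be
    fed to an LSTM/BiLSTM/Transformer for context-sensitive decoding.
    """
    rows = []
    # Scan top-to-bottom, assigning each box to an existing row or a new one.
    for box in sorted(boxes, key=lambda b: b[1]):
        for row in rows:
            if abs(row[0][1] - box[1]) <= row_tolerance:
                row.append(box)
                break
        else:
            rows.append([box])
    sequence = []
    # Within each row, read left to right.
    for row in rows:
        row.sort(key=lambda b: b[0])
        sequence.extend(b[2] for b in row)
    return sequence


detections = [(50.0, 12.0, 2), (10.0, 10.0, 0), (30.0, 11.0, 1), (10.0, 40.0, 3)]
print(detections_to_sequence(detections))  # -> [0, 1, 2, 3]
```

The resulting class-index sequence is what the sequence model consumes; the same ordering logic applies regardless of which architecture you choose.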

Replies: 1 comment 2 replies

@aemiio
@glenn-jocher
Answer selected by aemiio
Category: Q&A
Labels: question (Further information is requested), detect (Object Detection issues, PRs)
2 participants