
Added MapIdParallel to the View #28

Merged 2 commits on Oct 14, 2024
Conversation

@opengs (Contributor) commented on Oct 18, 2023

Added MapIdParallel to the views template generator. It splits entities into chunks of up to a specified size and then maps those chunks in parallel.

I didn't really test it :) Maybe @unitoftime can try it on one of the projects where this ECS library is fully integrated, to see if it works.

@opengs opengs changed the title Added MapIdParallel to the Views Added MapIdParallel to the View Oct 18, 2023
@unitoftime (Owner)

Oh, very cool! I had no idea anyone was working on this, lol. Let me take a look the next chance I get!

Thanks,
Unit

view_gen.go (Outdated), comment on lines 222 to 263:
	for _, archId := range v.filter.archIds {
		sliceA, _ = v.storageA.slice[archId]

		lookup := v.world.engine.lookup[archId]
		if lookup == nil {
			panic("LookupList is missing!")
		}
		// lookup, ok := v.world.engine.lookup[archId]
		// if !ok { panic("LookupList is missing!") }
		ids := lookup.id

		compA = nil
		if sliceA != nil {
			compA = sliceA.comp
		}

		startWorkRangeIndex := -1
		for idx := range ids {
			// TODO: Chunks may be very small because of holes. Some clever heuristic
			// is required. Most probably this is a problem of storage segmentation,
			// not of this map algorithm.
			if ids[idx] == InvalidEntity { // Skip if it's a hole
				if startWorkRangeIndex != -1 {
					newWorkChanel <- workPackage{start: startWorkRangeIndex, end: idx, ids: ids, a: compA}
					startWorkRangeIndex = -1
				}
				continue
			}

			if startWorkRangeIndex == -1 {
				startWorkRangeIndex = idx
			}

			if idx-startWorkRangeIndex >= chunkSize {
				newWorkChanel <- workPackage{start: startWorkRangeIndex, end: idx + 1, ids: ids, a: compA}
				startWorkRangeIndex = -1
			}
		}

		if startWorkRangeIndex != -1 {
			// Fix: include the ids slice here, matching the other sends above.
			newWorkChanel <- workPackage{start: startWorkRangeIndex, end: len(ids), ids: ids, a: compA}
		}
	}
@unitoftime (Owner)

If I understand this correctly, it looks like the main goroutine loops over every Id to generate the list of work to be accomplished by the worker-pool goroutines. Is this exclusively because of holes? It is likely more efficient to have the work-generating goroutine (i.e. the main goroutine) simply split up the ranges for the worker goroutines to execute on, and then have the worker goroutines skip columns that they detect as a hole. On the hole-fragmentation side: during a write to the ECS world, the code first tries to fill in a hole rather than appending to the main component slices.

All that said, I don't mind the current code as is. The first implementation of this function need not be hugely optimized, I can optimize it later if you don't want to or don't have the time.

I'm also trying to think of how the user would decide what chunkSize to pass in, or whether there's some way for us to automagically determine it. Maybe we could split it based on the number of threads that could possibly run: if you've got 8 threads, we'd split the total work into eighths (with some check to make sure there are even enough entities to make splitting worthwhile).

@unitoftime unitoftime merged commit 8afcb57 into unitoftime:master Oct 14, 2024
0 of 3 checks passed
@unitoftime (Owner)

Merged, though I might rewrite some of it. Thanks for the good work!
