Closed Bug 1167809 · Opened 10 years ago · Closed 10 years ago

large media file does not play in non-e10s service worker site

Categories: Core :: DOM: Service Workers, defect
Status: RESOLVED FIXED
Target Milestone: mozilla43
People: (Reporter: bkelly, Assigned: nsm)
Attachments: 1 file, 1 obsolete file (19.83 KB, patch, nsm: review+)
Description

STR:
1) Go to https://blog.wanderview.com/who-visualized-the-bomp/
2) Reload to get the page controlled.
3) Press 'f'.

After a few seconds a song should start playing. In e10s mode, or if you force-refresh from the network (with bug 1157619 applied), this works. In non-e10s, loaded from the Service Worker, it does not.

Note that the other audio controls on the page do work. The main difference is the size of the song: it's ~5 MB, whereas the others are much smaller.

It's possible we are somehow killing the ServiceWorker thread before the full song can be loaded, or something along those lines.
Reporter | Comment 1 • 10 years ago
You can also go to the original, non-SW-enabled site here to see the normal behavior:
http://pancaketheorem.com/stuff/bomp/
Updated • 10 years ago

tracking-e10s: --- → -
Reporter | Updated • 10 years ago

Assignee: bkelly → nobody
Status: ASSIGNED → NEW
Comment 2 • 10 years ago

Our channel is getting canceled with NS_ERROR_FILE_TOO_BIG, which suggests that size is indeed the issue.

CacheObserver::EntryIsTooBig returns true for anything over 5 MB, and the song is 5.3 MB. Thus the Necko cache output stream fails a write with NS_ERROR_FILE_TOO_BIG.
The stack is:

xul.dll!mozilla::net::CacheFileOutputStream::Write(const char * aBuf, unsigned int aCount, unsigned int * _retval) Line 99 C++
xul.dll!nsStreamCopierIB::ConsumeInputBuffer(nsIInputStream * aInStr, void * aClosure, const char * aBuffer, unsigned int aOffset, unsigned int aCount, unsigned int * aCountWritten) Line 520 C++
xul.dll!mozilla::SnappyUncompressInputStream::ReadSegments(nsresult (nsIInputStream *, void *, const char *, unsigned int, unsigned int, unsigned int *) * aWriter, void * aClosure, unsigned int aCount, unsigned int * aBytesReadOut) Line 131 C++
xul.dll!mozilla::dom::cache::ReadStream::Inner::ReadSegments(nsresult (nsIInputStream *, void *, const char *, unsigned int, unsigned int, unsigned int *) * aWriter, void * aClosure, unsigned int aCount, unsigned int * aNumReadOut) Line 315 C++
xul.dll!mozilla::dom::cache::ReadStream::ReadSegments(nsresult (nsIInputStream *, void *, const char *, unsigned int, unsigned int, unsigned int *) * aWriter, void * aClosure, unsigned int aCount, unsigned int * aNumReadOut) Line 532 C++
xul.dll!nsStreamCopierIB::DoCopy(nsresult * aSourceCondition, nsresult * aSinkCondition) Line 538 C++
xul.dll!nsAStreamCopier::Process() Line 300 C++
xul.dll!nsAStreamCopier::Run() Line 426 C++
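[For reference, the failing check looks roughly like the sketch below. This is a simplified editorial illustration, not the actual Gecko code; the limit constants and member names (kMaxMemoryEntrySizeKB, mPos) are assumptions.]

// Simplified sketch of the size-limit check; names are assumptions.
static bool EntryIsTooBig(int64_t aSize, bool aUsingDisk)
{
  // The max-entry-size limits are expressed in kB; 5120 kB == 5 MB,
  // which is why a 5.3 MB song trips the check.
  const int64_t limitKB = aUsingDisk ? kMaxDiskEntrySizeKB    // hypothetical
                                     : kMaxMemoryEntrySizeKB; // hypothetical
  return aSize > limitKB * 1024;
}

nsresult CacheFileOutputStream::Write(const char* aBuf, uint32_t aCount,
                                      uint32_t* _retval)
{
  if (CacheObserver::EntryIsTooBig(mPos + aCount, /* aUsingDisk */ false)) {
    // This error propagates up and cancels the intercepted channel.
    return NS_ERROR_FILE_TOO_BIG;
  }
  // ... copy the data into the cache file chunks ...
}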
Comment 3 • 10 years ago

Why are we writing to the Necko cache at all?
Flags: needinfo?(bkelly)
Reporter | Comment 4 • 10 years ago
I believe this is due to the way we use fake Necko cache entries to do interception in non-e10s.

Honza, any ideas for working around the size limit when we're doing interception instead of a real cache entry?
Flags: needinfo?(bkelly) → needinfo?(honzab.moz)
Comment 5 • 10 years ago

It seems, uh, inadvisable to actually be writing to disk here...
Comment 6 • 10 years ago

The cache entries created for synthesized fetch responses only use the memory cache.
Comment 7 • 10 years ago

OK, that doesn't speak to the real problem, though. Even the Necko memory cache has a finite entry size. What should happen when someone tries to write more than that?
Flags: needinfo?(josh)
Comment 8 • 10 years ago
This is how the cache behaves. We could introduce a flag to skip the limit checks. Ideally, there should be a cache storage specific to synthesis needs, where we could selectively work around issues like these.

The other option is to abandon the idea of using the HTTP cache for synthesis entirely and just use a storage stream instead. We do something similar in the child process.
Flags: needinfo?(honzab.moz)
Reporter | Comment 9 • 10 years ago
It sounds like removing the dependency on cache entries would be the best long-term solution. It seems like we've had lots of unexpected issues from integrating with them like this.

But why a storage stream instead of a pipe? It seems we don't need to read the data more than once. Why not free the memory as it's read, like the pipe does?
Comment 10 • 10 years ago
(In reply to Ben Kelly [:bkelly] from comment #9)
> It sounds like removing the dependency on cache entries would be the best
> long-term solution. It seems like we've had lots of unexpected issues from
> integrating with them like this.

I would personally vote for that. Going via the cache has turned out to be a bad idea.

> But why a storage stream instead of a pipe? It seems we don't need to read
> the data more than once. Why not free the memory as it's read, like the
> pipe does?

A pipe doesn't support concurrent readers, if that were ever needed. But that may be quite OK. Up to you to decide.
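[A sketch of the two options being weighed here, using the stock XPCOM stream helpers. The signatures are quoted from memory and the parameter choices are illustrative; check nsIPipe and nsIStorageStream before relying on this.]

#include "nsCOMPtr.h"
#include "nsIPipe.h"          // NS_NewPipe
#include "nsIStorageStream.h" // NS_NewStorageStream

// Option A: a pipe. Single reader; segments are freed as they are
// consumed, which matches bkelly's point about not holding the whole
// song in memory.
nsCOMPtr<nsIInputStream> pipeIn;
nsCOMPtr<nsIOutputStream> pipeOut;
nsresult rv = NS_NewPipe(getter_AddRefs(pipeIn), getter_AddRefs(pipeOut),
                         4096,         // segment size (illustrative)
                         UINT32_MAX,   // effectively no total size cap
                         true, true);  // non-blocking ends

// Option B: a storage stream. Buffers everything written, but can hand
// out multiple independent input streams -- the "concurrency" referred
// to above.
nsCOMPtr<nsIStorageStream> storage;
rv = NS_NewStorageStream(4096, UINT32_MAX, getter_AddRefs(storage));

nsCOMPtr<nsIOutputStream> writer;
rv = storage->GetOutputStream(0, getter_AddRefs(writer));

nsCOMPtr<nsIInputStream> readerA, readerB; // two concurrent readers
rv = storage->NewInputStream(0, getter_AddRefs(readerA));
rv = storage->NewInputStream(0, getter_AddRefs(readerB));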
Comment 11 • 10 years ago
Do we think we can get this done before 43 moves to Aurora (~Sept 22)? If not, would a band-aid fix like increasing the cache size limit from 5 MB to 15 (or whatever) be worth doing?

Do we have someone who's familiar with this code available to do the fix here? (Honza is pretty slammed.) I could throw a couple different necko peeps at this, but they'd need to learn the ropes. (We've got a few weeks, so that might be fine.)
Flags: needinfo?(honzab.moz)
Reporter | Comment 12 • 10 years ago
I don't think any size limit is reasonable for service worker interception. Can we just set a flag that this is a synthetic response and bypass the check completely?
Comment 13 • 10 years ago
(In reply to Jason Duell [:jduell] (needinfo? me) from comment #11)
> Do we think we can get this done before 43 moves to Aurora (~Sept 22)? If
> not, would a band-aid fix like increasing the cache size limit from 5 MB to
> 15 (or whatever) be worth doing?
>
> Do we have someone who's familiar with this code available to do the fix
> here? (Honza is pretty slammed.) I could throw a couple different necko
> peeps at this, but they'd need to learn the ropes. (We've got a few weeks,
> so that might be fine.)
There are two places where we do the entry size limit check/kill, both via a call to the static CacheObserver::EntryIsTooBig. It should be possible to open an entry with some new flag that ignores the limits (i.e., doesn't call that method).

I would personally prefer to add a new "storage" method to nsICacheStorageService - something like synthesizingCacheStorage() - that would return a memory-only storage carrying a flag to ignore the limits. That flag would internally be carried on any entries opened with that storage, and the storage would also automatically add the id enhancement that jdm is currently doing manually in the http channel, [1].

Anyway, if it'd be easier to move away from the cache for this whole mechanism, then I'd rather prefer that.
(In reply to Ben Kelly [:bkelly] from comment #12)
> I don't think any size limit is reasonable for service worker interception.
> Can we just set a flag that this is a synthetic response and bypass the
> check completely?
No. There is no such flag at the moment; it needs to be implemented (as above).
[1] http://hg.mozilla.org/mozilla-central/annotate/74fbd245369c/netwerk/protocol/http/nsHttpChannel.cpp#l2960
Flags: needinfo?(honzab.moz)
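[Concretely, the flag described in comment 13 could be threaded through like the sketch below. The constructor signature matches the patch reviewed further down, but IsFileTooBig and the member names are assumptions for illustration.]

// In nsICacheStorageService.idl (name per comment 13; the reviewed
// patch calls it synthesizedCacheStorage):
//   nsICacheStorage synthesizingCacheStorage(in nsILoadContextInfo aLoadContextInfo);

// The storage passes the flag on to every entry it opens:
CacheEntry::CacheEntry(const nsACString& aStorageID, nsIURI* aURI,
                       const nsACString& aEnhanceID,
                       bool aUseDisk, bool aSkipSizeCheck)
  : mUseDisk(aUseDisk)
  , mSkipSizeCheck(aSkipSizeCheck)
  // ... other members ...
{
}

// Both size-limit call sites then consult the flag first
// (IsFileTooBig is a hypothetical helper name):
bool CacheEntry::IsFileTooBig(int64_t aSize) const
{
  return !mSkipSizeCheck && CacheObserver::EntryIsTooBig(aSize, mUseDisk);
}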
Comment 14 • 10 years ago
Everybody is in favour of moving away from using the cache at this point, but unfortunately it doesn't make sense to do it now when we're hoping to enable Service Workers by default very soon.
Flags: needinfo?(josh)
Comment 15 • 10 years ago

OK, so can we drop this from the ServiceWorkers-v1 bug then?
Flags: needinfo?(josh)
Reporter | Comment 16 • 10 years ago
No. We need to add the flag as suggested in comment 13. I asked Nikhil to look at it.
Flags: needinfo?(josh)
Assignee | Updated • 10 years ago
Assignee: nobody → nsm.nikhil
Status: NEW → ASSIGNED
Assignee | Comment 17 • 10 years ago
Honza, is this what you were suggesting? And if so, who should I ask to review?

I have tested that it plays music in the demo linked above.
Flags: needinfo?(honzab.moz)
Assignee | Comment 18 • 10 years ago

Attachment 8656878 posted: "Add skip size check flag to cache for use with ServiceWorkers." (patch)
Comment 19 • 10 years ago
Comment on attachment 8656878 [details] [diff] [review]
Add skip size check flag to cache for use with ServiceWorkers.
Review of attachment 8656878 [details] [diff] [review]:
-----------------------------------------------------------------
::: netwerk/cache2/CacheEntry.h
@@ +54,5 @@
> NS_DECL_NSICACHEENTRY
> NS_DECL_NSIRUNNABLE
>
> CacheEntry(const nsACString& aStorageID, nsIURI* aURI, const nsACString& aEnhanceID,
> + bool aUseDisk, bool aSkipSizeCheck = false);
don't have a default value please
::: netwerk/cache2/CacheStorage.h
@@ +52,5 @@
> public:
> CacheStorage(nsILoadContextInfo* aInfo,
> bool aAllowDisk,
> + bool aLookupAppCache,
> + bool aSkipSizeCheck = false);
don't have a default value
::: netwerk/cache2/nsICacheStorageService.idl
@@ +59,5 @@
> + * Get storage for synthesized cache entries that we currently use for ServiceWorker interception in non-e10s mode.
> + *
> + * This cache storage has no limits on file size to allow the ServiceWorker to intercept large files.
> + */
> + nsICacheStorage synthesizedCacheStorage(in nsILoadContextInfo aLoadContextInfo);
Change the IID of the interface.

Ideally, this method should take the intercept-id string and make the storage automatically add it to the id extension during entry open. But that is a nice-to-have and involves more code (a new SynthesizedCacheStorage class with overridden AsyncOpen and OpenTruncate methods), so it's up to you; I'm OK with this approach too, as we are about to move away from the cache soon anyway.
Attachment #8656878 - Flags: review+
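[The "nice to have" from the review could look roughly like this. nsICacheStorage::AsyncOpenURI is the real open method, and the CacheStorage constructor arguments match the patch hunks quoted above, but the subclass and the mInterceptId member are hypothetical.]

// Hypothetical sketch of the SynthesizedCacheStorage subclass Honza
// describes: it appends the intercept id to the id extension itself,
// so nsHttpChannel no longer has to do it manually.
class SynthesizedCacheStorage final : public CacheStorage
{
public:
  SynthesizedCacheStorage(nsILoadContextInfo* aInfo,
                          const nsACString& aInterceptId)
    : CacheStorage(aInfo, /* aAllowDisk */ false,
                   /* aLookupAppCache */ false, /* aSkipSizeCheck */ true)
    , mInterceptId(aInterceptId)
  {
  }

  NS_IMETHOD AsyncOpenURI(nsIURI* aURI, const nsACString& aIdExtension,
                          uint32_t aFlags,
                          nsICacheEntryOpenCallback* aCallback) override
  {
    nsAutoCString extension(aIdExtension);
    extension.Append(mInterceptId); // the id jdm adds manually today
    return CacheStorage::AsyncOpenURI(aURI, extension, aFlags, aCallback);
  }

private:
  nsCString mInterceptId;
};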
Updated • 10 years ago

Flags: needinfo?(honzab.moz)
Assignee | Comment 20 • 10 years ago
Removes the default values.

Attachment #8656878 - Attachment is obsolete: true
Attachment #8657273 - Flags: review+
Assignee | Updated • 10 years ago

Keywords: checkin-needed
Comment 21 • 10 years ago

Keywords: checkin-needed
Comment 22 • 10 years ago

Status: ASSIGNED → RESOLVED
Closed: 10 years ago
status-firefox43: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla43