Also add an apply_to_mentions attribute on Glitch::KeywordMute, which is
used to calculate scope. Next up: additions to the test suite to
demonstrate how scoping works.
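For illustration, such a flag might feed a scope bitmask along these lines; the apply_to_mentions attribute comes from the message above, but the constants and bit layout are hypothetical:

    module Glitch
      class KeywordMute < ApplicationRecord
        # Hypothetical bit flags for where a keyword mute applies
        HOME_TIMELINE = 1 << 0
        MENTIONS      = 1 << 1

        def scope
          s = HOME_TIMELINE
          s |= MENTIONS if apply_to_mentions?
          s
        end
      end
    end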
Do not touch statuses_count on the accounts table when mass-destroying
statuses, to reduce load when removing accounts; likewise for
reblogs_count and favourites_count
Do not count statuses with direct visibility in statuses_count
Fix #828
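A hedged sketch combining both ideas, assuming Rails counter caches and an enum-backed visibility column; the skip_count_update flag is hypothetical:

    class Status < ApplicationRecord
      belongs_to :account

      enum visibility: [:public, :unlisted, :private, :direct], _suffix: :visibility

      # Hypothetical flag set by the mass-destroy path so that per-row
      # callbacks skip touching accounts.statuses_count
      attr_accessor :skip_count_update

      after_destroy :decrement_count_caches, unless: :skip_count_update

      private

      def decrement_count_caches
        return if direct_visibility? # direct statuses are not counted

        Account.decrement_counter(:statuses_count, account_id)
      end
    end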
* optimize direct timeline
* fix typo in class name
* change filter condition for direct timeline
* fix codestyle issue
* revert index_accounts_not_silenced because the direct timeline does not use it
* fix rspec test condition.
* revert adding the column and partial index
* (direct timeline) move merging logic to model
* fix pagination parameter
* add a method argument that switches the return value between an array of statuses and cache_ids
* fix order by
* return an ActiveRecord::Relation by default (see the sketch after this list)
* fix codestyle issue
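A rough sketch of the merged query the list describes; the method name and signature are approximations of the upstream API, not the exact implementation:

    class Status < ApplicationRecord
      # Merge statuses the account sent with those where it is mentioned,
      # returning a Relation by default or bare ids for the cache when asked
      def self.as_direct_timeline(account, limit = 20, max_id = nil, cache_ids = false)
        query = where(visibility: :direct)
                .where('statuses.account_id = :id OR EXISTS (
                          SELECT 1 FROM mentions
                          WHERE mentions.status_id = statuses.id
                            AND mentions.account_id = :id)', id: account.id)
                .order(id: :desc)
                .limit(limit)
        query = query.where('statuses.id < ?', max_id) if max_id.present?
        cache_ids ? query.select(:id, :updated_at) : query
      end
    end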
- POST /api/v1/push/subscription
- PUT /api/v1/push/subscription
- DELETE /api/v1/push/subscription
- New OAuth scope: "push" (required for the above methods)
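For illustration, registering a subscription could look like this (the payload shape follows the documented Web Push API; host, token, and key values are placeholders):

    require 'http'

    token = ENV['MASTODON_ACCESS_TOKEN'] # must carry the "push" scope

    HTTP.auth("Bearer #{token}")
        .post('https://mastodon.example/api/v1/push/subscription', json: {
          subscription: {
            endpoint: 'https://push.example.com/endpoint',
            keys: { p256dh: '<base64 public key>', auth: '<base64 auth secret>' }
          },
          data: { alerts: { follow: true, mention: true } }
        })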
Same URI passed between follow request and follow, since they are
the same thing in ActivityPub. Local URIs are generated during
creation using UUIDs and are passed to serializers.
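A minimal sketch of minting such a UUID-based local URI at creation time; the callback and configuration names are approximations:

    require 'securerandom'

    class FollowRequest < ApplicationRecord
      before_validation :set_uri, if: -> { uri.nil? }

      private

      def set_uri
        # Mint a UUID-based local URI; the same value is later carried
        # over to the Follow that the request turns into
        self.uri = "https://#{Rails.configuration.x.local_domain}/#{SecureRandom.uuid}"
      end
    end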
* Added a timeline for Direct statuses (see the fetch example after this list)
* Lists all Direct statuses you've sent and received
* Displayed in Getting Started
* Streaming server support for direct TL
* Changes to match other timelines in 2.0
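For example, the new timeline could be fetched over the REST API like this (host and token are placeholders):

    require 'http'

    HTTP.auth("Bearer #{ENV['MASTODON_ACCESS_TOKEN']}")
        .get('https://mastodon.example/api/v1/timelines/direct', params: { limit: 20 })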
Sidekiq sometimes throws errors for users that have more pinned items
than allowed by the local instance. It should only validate the
number of pins for local accounts.
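A sketch of scoping that validation to local accounts; the limit constant and validator method are illustrative:

    class StatusPin < ApplicationRecord
      belongs_to :account
      belongs_to :status

      PIN_LIMIT = 5 # hypothetical per-instance limit

      # Only local accounts are subject to the limit; remote accounts may
      # legitimately exceed it on their home instance
      validate :pin_count_within_limit, if: -> { account.local? }

      private

      def pin_count_within_limit
        errors.add(:base, 'Too many pinned statuses') if account.status_pins.count >= PIN_LIMIT
      end
    end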
This also limits the statuses returned by the API, but pagination is not
implemented in the Web API yet. I still expect this to be a better user
experience than making the user wait while all ancestor statuses are
fetched and then flooding the column with them.
The to_s method of HTTP::Response keeps blocking while it receives the
whole content, no matter how big it is. This means it may waste time
receiving unacceptably large files. It may also consume memory and disk
in the process. This solves the inefficiency by checking the response
length while receiving.
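A minimal sketch of the fix, assuming the http.rb gem: read the body incrementally and abort as soon as a size limit is exceeded (the limit constant is illustrative):

    require 'http'

    MAX_BODY_SIZE = 1024 * 1024 # 1MB, illustrative

    response = HTTP.get('https://example.com/file')
    body = String.new

    # readpartial returns chunks as they arrive and nil at EOF, so we can
    # stop early instead of buffering the entire response like #to_s does
    while (chunk = response.body.readpartial)
      body << chunk
      raise 'response body too large' if body.bytesize > MAX_BODY_SIZE
    end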
* Fix#201: Account archive download
* Export actor and private key in the archive
* Optimize BackupService
- Add conversation to cached associations of status, because
  somehow it was forgotten and is a source of N+1 queries
- Explicitly call GC between batches of records being fetched
(Model class allocations are the worst offender)
- Stream media files into the tar in 1MB chunks
  (Do not allocate a media file of up to 8MB as a string in
  memory; see the sketch after this list)
- Use #bytesize instead of #size to calculate file size for JSON
(Fix FileOverflow error)
- Segment media into subfolders by status ID because apparently
GIF-to-MP4 media are all named "media.mp4" for some reason
* Keep uniquely generated filename in Paperclip::GifTranscoder
* Ensure dumped files do not overwrite each other by maintaining directory partitions
* Give tar archives a good name
* Add scheduler to remove week-old backups
* Fix code style issue
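As referenced in the list above, streaming a media file into the tar in 1MB chunks can be done with Ruby's built-in Gem::Package::TarWriter; paths here are placeholders:

    require 'rubygems/package'

    CHUNK_SIZE = 1024 * 1024 # 1MB
    path = 'media.mp4'       # placeholder source file

    File.open('archive.tar', 'wb') do |io|
      Gem::Package::TarWriter.new(io) do |tar|
        # add_file_simple needs the size up front; File.size avoids reading
        # the whole file. The status-ID subfolder keeps identically named
        # GIF-to-MP4 files from colliding.
        tar.add_file_simple('media_attachments/123/media.mp4', 0o444, File.size(path)) do |entry|
          File.open(path, 'rb') do |file|
            while (chunk = file.read(CHUNK_SIZE))
              entry.write(chunk)
            end
          end
        end
      end
    end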
The cache store is explicitly used by some specs, but they were not
isolated and therefore not reliable. This fixes the issue by clearing
the cache after each spec.
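A sketch of the corresponding spec-helper hook (the exact configuration location may differ):

    # rails_helper.rb
    RSpec.configure do |config|
      # Keep specs isolated: any values a spec writes to the cache store
      # are wiped before the next example runs
      config.after :each do
        Rails.cache.clear
      end
    end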