src/databaseinterface.h
@@ enum ColumnsRoles {
     HasEmbeddedCover,
     FileModificationTime,
     FirstPlayDate,
     LastPlayDate,
     PlayCounter,
     PlayFrequency,
     ElementTypeRole,
     LyricsRole,
+    FileNameRole,
mgallien: Management of this new role is increasing the size of data exported by the database to the model…

astippich: Unfortunately, it seems that this does not have a noticeable effect. Any other ideas?

mgallien: Did you try to see if it is the database requests that are slow, by activating the database logging (especially the logging of slow requests)?

astippich: Stupid question: how do I activate the database logging?

mgallien: Sorry for the lack of info. I meant activating the categorized logging of the database and possibly…
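For context on the logging discussion above: Qt applications enable categorized logging at runtime through the `QT_LOGGING_RULES` environment variable. A minimal sketch follows; the category name `org.kde.elisa.database` is an assumption for illustration, since the actual names are defined by the application's `Q_LOGGING_CATEGORY` declarations.

```shell
# Enable debug output for a database logging category before launching
# the application. The category name below is a hypothetical example;
# check the application's Q_LOGGING_CATEGORY declarations for the real ones.
export QT_LOGGING_RULES="org.kde.elisa.database.debug=true"
```

Rules can also be broadened with wildcards, e.g. `"org.kde.elisa.*.debug=true"`, to capture all of an application's categories at once.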
     };

     Q_ENUM(ColumnsRoles)

 private:

     using DataType = QMap<ColumnsRoles, QVariant>;
@@
     {
         return operator[](key_type::DurationRole).toTime();
     }

     QUrl resourceURI() const
     {
         return operator[](key_type::ResourceRole).toUrl();
     }

+    QString fileName() const
+    {
+        return operator[](key_type::FileNameRole).toString();
+    }
+
     QUrl albumCover() const
     {
         return operator[](key_type::ImageUrlRole).toUrl();
     }

     bool isSingleDiscAlbum() const
     {
         return operator[](key_type::IsSingleDiscAlbumRole).toBool();
@@ private:
     void upgradeDatabaseV11();

     void upgradeDatabaseV12();

     void upgradeDatabaseV13();

     void upgradeDatabaseV14();

+    void upgradeDatabaseV15();
+
     void checkDatabaseSchema();

     void checkAlbumsTableSchema();

     void checkArtistsTableSchema();

     void checkComposerTableSchema();
mgallien: Management of this new role increases the size of the data exported by the database to the model, and possibly read by the view. This can be a real cause of slowdown, in particular if the filename text is quite long.

You could handle the filename lookup in the model, and only when it is strictly needed to display a track without a title (if I understand correctly). That may be enough to keep the same performance level, but I cannot be sure.

It would probably be wiser to reduce the size of the data instead of increasing it.