REBOL3 - SQLite (C library embeddable DB [web-public])

Most recent messages (300 max) are listed first.

# | User | Message | Date
1089Gabrielesqlite /path/to/file.db .dump >/path/to/dumpfile.sql 13-Jan-10 10:35
1088GabrieleJames, if you have the sqlite command available, you can do:13-Jan-10 10:34
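For readers scripting a backup rather than shelling out to the CLI, the same dump can be produced programmatically. A minimal sketch in Python (standing in for the REBOL driver, which exposes no dump command) using the standard library's `sqlite3` module, whose `iterdump` yields the same SQL statements the CLI's `.dump` emits:

```python
import sqlite3

# Build a tiny throwaway database in memory to demonstrate the dump.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO t (name) VALUES ('alpha')")
conn.commit()

# iterdump yields the same SQL statements the CLI's .dump command emits.
dump_sql = "\n".join(conn.iterdump())
print(dump_sql)
```

Writing `dump_sql` to a file gives the same kind of restorable SQL backup as the shell redirect above.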
1087james_nakRobert, Well, that's what I do do now but I was wondering if there was also a command.12-Jan-10 21:57
1086Robertwhy not just copy the file?12-Jan-10 18:28
1085james_nakIs there a method to "dump" the contents of a sqlite db for back-up purposes using the driver?12-Jan-10 16:27
1084joannakThis unicode sorting seems to be a mess.. Well, technically they do have some standards, but trying to even think of any decent-size implementation of that.. http://www.unicode.org/reports/tr10/ 10-Jan-10 4:52
1083joannakICU seems to have such library with nice license... Unfortunately the Binary-only package for one platform is like 10Megabytes.. http://icu-project.org/download/4.2.html#ICU4C 10-Jan-10 4:34
1082BrianHA solution would be to move the sorting out into the host code, where it can be made to use any system-specific sorting code, which should work for platforms with good Unicode support like Windows, OS X and some Linux distros. The problem with that is that the data would probably need to be migrated to the host format before the host routines could be used; string! is not UTF-8 or UTF-16 internally, it is UCS-8 (in theory), UCS-16 or UCS-32 (in theory), all fixed-length encodings. That would add a lot of overhead to sorting.6-Jan-10 18:12
1081BrianHThe sorting problem with collations applies to Unicode, not just UTF-whatever, so it is a problem. Mostly a problem of size: The actual collation sorting code is small, but the collation data is large. Add more than one or two and REBOL gets huge.6-Jan-10 18:06
1080PekrDoes the following mean we could have trouble properly sorting (using collations) because of UTF-8?

http://stackoverflow.com/questions/181037/case-insensitive-utf-8-string-collation-for-sqlite-c-c

6-Jan-10 13:58
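For comparison, SQLite bindings that expose sqlite3_create_collation let the host register its own comparison function, which is the approach the linked Stack Overflow thread discusses. A sketch using Python's `sqlite3` module; the collation name is made up, and `casefold()` stands in for a real Unicode collator such as ICU:

```python
import sqlite3

# Hypothetical case-insensitive collation; casefold() stands in for a
# real Unicode-aware collator such as ICU.
def nocase_unicode(a, b):
    a, b = a.casefold(), b.casefold()
    return (a > b) - (a < b)

conn = sqlite3.connect(":memory:")
conn.create_collation("nocase_uni", nocase_unicode)
conn.execute("CREATE TABLE names (n TEXT)")
conn.executemany("INSERT INTO names VALUES (?)", [("b",), ("A",), ("a",)])
rows = [r[0] for r in
        conn.execute("SELECT n FROM names ORDER BY n COLLATE nocase_uni")]
print(rows)  # 'b' sorts last regardless of case
```

The collation data problem BrianH raises still applies: a correct Unicode comparison needs the large collation tables, wherever they live.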
1079ChrisI seem to recall a note about locking issues over a network path, so ymmv.25-Nov-09 2:28
1078Jankostupid me, I only now realized sqlite driver supports prepared statements too and I was concatenating strings all along.24-Nov-09 11:30
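As an illustration of why prepared statements beat string concatenation, a sketch using Python's `sqlite3` module (the table and values are made up): the driver binds the values as parameters, so quoting and type conversion are handled for you.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

# The statement is prepared once; values are bound, not spliced into SQL.
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "ann"), (2, "o'brien")])

# A value containing a quote would break naively concatenated SQL.
row = conn.execute("SELECT id FROM users WHERE name = ?",
                   ("o'brien",)).fetchone()
print(row)
```

Besides safety, reusing a prepared statement skips re-parsing the SQL on every call, which matters in tight insert loops.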
1077RobertAs long as you can access it via a normal file-path, yes. Can be slow but should work.24-Nov-09 11:21
1076jrichardsIs it possible to use the sqlite driver for a file that resides on the internet?24-Nov-09 0:39
1075Pekryes, it does ...7-Oct-09 22:17
1074GiuseppeCDoes REBOL SQLite support SQLLite3 ?7-Oct-09 19:18
1073GiuseppeCIt is back online again.7-Oct-09 19:17
1072GrahamProbably hasn't paid renewal fees!15-Sep-09 20:25
1071jack-ortIs there a problem with:

http://www.dobeash.com/

seems like domain is up for sale ??

15-Sep-09 15:08
1070jack-ortRobert - thanks...that's what I eventually did.

Manu - thank you for making clear what the /direct refinement does...I had read about that in the SQLite Driver Guide, didn't understand it since REBOL is still new to me, and forgot about it. Works perfectly! So much to learn....

14-Sep-09 20:12
1069ManuMJack, finally I saw the problem

The return of sql isn't two blocks of strings, it's two blocks of words. So 1124prgr isn't a valid word and isn't a valid integer

>> a: sql "select jobid, role from roles where userid = 'cda6'"
>> type? first first a
== word

You need to add the /direct refinement to sql and it returns strings:
>> sql/direct "select jobid || role from roles where userid = 'cda6'"
== [["1124prgr"] ["1125 test"]]

11-Sep-09 22:37
1068RobertWhy not make the JOIN on the Rebol side?11-Sep-09 18:18
1067jack-ortThanks for the suggestion Manu. Sounded good to me, but I get the same error even with the empty string at the front:

>> sql "select '' || jobid || role from roles where userid = 'cda6'" ** Syntax Error: Invalid integer -- 1124prgr ** Near: (line 1) 1124prgr >>

even though it appears to be an integer, the jobid column is text, according to SQLite:
>> sql "select typeof(jobid) from roles"
== [[text] [text] [text] [text]]
Again, thanks! I welcome any other suggestions.

11-Sep-09 13:42
1066ManuMTry this:
>> sql "select '' || jobid || role from roles where userid = 'cda6'"
I think the problem is that the first column is an integer, so the SQLite driver thinks the result is an integer (invalid integer); with '' the first column is a string11-Sep-09 9:42
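The type behaviour ManuM describes can be checked at the SQLite level; a sketch using Python's `sqlite3` module with made-up data (this shows only SQLite's side: the "Invalid integer" error in the thread came from the REBOL driver parsing the result, which this does not reproduce):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE roles (jobid INTEGER, role TEXT)")
conn.execute("INSERT INTO roles VALUES (1124, 'prgr')")

# || always yields SQL text; prefixing '' makes the leading operand
# explicitly a string, as suggested in the thread.
plain = conn.execute("SELECT typeof(jobid || role) FROM roles").fetchone()[0]
prefixed = conn.execute("SELECT '' || jobid || role FROM roles").fetchone()[0]
print(plain, prefixed)
```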
1065jack-ortReversing the column order in my concatenate makes it work, but not what I want:

>> sql "select role || jobid from roles where userid = 'cda6'" == [[prgr1124] [test1125]] >>

10-Sep-09 20:43
1064jack-ortI apologize if this is a repeat of an earlier message - I thought I submitted, but never saw it appear.

Question: With REBOL/View 2.7.6 under Windows XP, using SQLite driver v1.0.6, should I be able to use the concatenation operator (||) in a SELECT statement? The query works under the sqlite3 command line, and a similar SELECT without the concatenation returns values as expected:

>> sql "select jobid, role from roles where userid = 'cda6'" == [[1124 prgr] [1125 test]]

>> sql "select jobid || role from roles where userid = 'cda6'" ** Syntax Error: Invalid integer -- 1124prgr ** Near: (line 1) 1124prgr >>

10-Sep-09 20:21
1063Jankoyoure welcome26-May-09 16:42
1062amacleodI changed all the paths to absolute paths...it seems to have done the trick. Thanks for the hint, Janko!26-May-09 16:18
1061JankoI got similar error but I don't know if it was for the relative path or not the right version of dll / so26-May-09 16:08
1060amacleodIt works fine until I try to call another app. Paths do not change but I'm not using absolute paths. I'll need to test that...26-May-09 16:05
1059JankoCheck if you have the right path to file.. and make it an absolute path.. and if you have the right version .. sqlite != sqlite326-May-09 16:02
1058amacleodI seem to get an error with sqlite after using "call" to start an external program.

** User Error: SQLite out of memory ** Near: make error! reform ["SQLite" error]

Anyone experience this?

26-May-09 15:56
1057RobertS.22-May-09 16:00
1056GreggIt shouldn't be hard to devise a small benchmark though.21-May-09 14:49
1055GreggThe OS caches things too, don't forget.21-May-09 14:48
1054JankoIf you had persistent connection and if sqlite does some caching it could be faster, but if you need to open a connection on each request I think it would be much slower because that is more expensive (I assume).. it probably also matters how many scripts do you need to run per request21-May-09 5:12
1053ChrisPerhaps particularly a CGI environment where each instance is a separate process?21-May-09 4:59
1052ChrisSay, where there are a moderate number of moderate sized scripts - could it be quicker just to fire up an SQLite connection and access the scripts/modules from records as opposed to flat file?21-May-09 4:58
1051ChrisHas anyone done any benchmarks for running scripts from a DB instead of from the filesystem?21-May-09 4:55
1050Jankofound solution. I need to join with subqueries:

{SELECT * FROM } table { as i
  LEFT OUTER JOIN (
    SELECT id_} table { as ibs_paren, SUM(price * qty) as amount
    FROM } table {_b ib GROUP BY id_} table {
  ) ibs ON i.id = ibs_paren
  LEFT OUTER JOIN (
    SELECT id_} table { as ips_paren, SUM(ip.amount) as payed_amount
    FROM } table {_p ip GROUP BY ip.id_} table {
  ) ips ON i.id = ips_paren
  order by title;

9-May-09 10:24
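Janko's fix can be illustrated end-to-end; a sketch using Python's `sqlite3` module with simplified, made-up table names. The naive double join repeats the single payment row once per line-item row, inflating its SUM, while aggregating each child table in its own subquery first gives the right totals:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE invoice   (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE invoice_b (id_invoice INTEGER, price REAL, qty INTEGER);
CREATE TABLE invoice_p (id_invoice INTEGER, amount REAL);
INSERT INTO invoice VALUES (1, 'inv1');
INSERT INTO invoice_b VALUES (1, 10, 2);   -- line items: 20 + 5 = 25
INSERT INTO invoice_b VALUES (1, 5, 1);
INSERT INTO invoice_p VALUES (1, 7);       -- payments: 7
""")

# Naive double join: the single payment row is repeated once per line
# item, so SUM(p.amount) comes out inflated (7 * 2 = 14).
naive = conn.execute("""
    SELECT SUM(b.price * b.qty), SUM(p.amount)
    FROM invoice i
    LEFT OUTER JOIN invoice_b b ON i.id = b.id_invoice
    LEFT OUTER JOIN invoice_p p ON i.id = p.id_invoice
    GROUP BY i.id""").fetchone()

# Aggregating each child table in its own subquery first avoids it.
fixed = conn.execute("""
    SELECT bs.amount, ps.payed
    FROM invoice i
    LEFT OUTER JOIN (SELECT id_invoice, SUM(price * qty) AS amount
                     FROM invoice_b GROUP BY id_invoice) bs
        ON i.id = bs.id_invoice
    LEFT OUTER JOIN (SELECT id_invoice, SUM(amount) AS payed
                     FROM invoice_p GROUP BY id_invoice) ps
        ON i.id = ps.id_invoice""").fetchone()
print(naive, fixed)
```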
1049JankoThis guy has exactly the same problem but no answer: http://forums.devshed.com/db2-development-114/left-outer-join-3-tables-sum-field-of-2nd-3rdt-588801.html 9-May-09 9:32
1048Jankoah, my method with count won't work because count also multiplies if I have more than 1 row in the second subtable9-May-09 9:15
1047Jankohm.. I have a question for SQL / sqlite experts : I have a query with two JOINS . There is parent table which has 2 subtables ... in each subtable I need to aggregate (SUM) some value ...

select i.*, SUM(ib.price * ib.qty) as amount, SUM(ip.amount) as payed_amount from invoice_sent as i left outer join invoice_sent_b as ib on i.id = ib.id_invoice_sent left outer join invoice_sent_p as ip on i.id = ip.id_invoice_sent group by i.id order by i.title;

The problem is that because of the join, the amount is correct (it is the sum of values in invoice_sent_b), but payed_amount is multiplied by the number of rows invoice_sent_b has.

I understand why this happens, but I don't know how to prevent it, if I want to get all data in one query.

( I know how to solve the problem, but it's outside the DB so it's more of a hack -- I also get the COUNT of _b table rows and divide the second SUM by it on the client )

9-May-09 9:00
1046Jankoyes30-Apr-09 15:51
1045Jankoyou can put cheyenne there or any other custom server you want30-Apr-09 15:50
1044Pekrah, virtual machine, ok ...30-Apr-09 15:50
1043Jankoit's like you have your own computer where you can reinstall stuff or the OS .. separated from others, but it's running on virtualisation software, so there are many such separate computers per one real computer, so it's *cheaper* than paying for a full server30-Apr-09 15:49
1042Jankovirtual private server ... like you can buy here .. www.linode.com or sliceshare.com30-Apr-09 15:48
1041Pekrbtw - what is VPS?30-Apr-09 15:47
1040JankoI will try this on my local computers and then on that VPS.. and report you back :)30-Apr-09 15:47
1039Jankohm... mucho interesante :)30-Apr-09 15:46
1038PekrJanko - I did a small test for you. With no indices the speed was 0:00:00.516, and I used LIKE expressions, which need to do searches in terms of field values ....

REBOL []

print "Starting test for Janko ..."

do %sqlite.r

attempt [delete %janko.db]

connect/direct/create %janko.db

sql { CREATE TABLE [domains] (
    [id] INTEGER NOT NULL PRIMARY KEY,
    [domain] VARCHAR NOT NULL,
    [user] VARCHAR NOT NULL,
    [processed] DATETIME NULL,
    [ok_count] INT NOT NULL DEFAULT 0,
    [fail_count] INT NOT NULL DEFAULT 0,
    [error] TEXT NULL )

}

sql "BEGIN"

for i 1 1000 1 [ sql reduce ["insert into domains values(?,?,?,?,?,?,?)" i i i i i i i] ]

sql "COMMIT"

start: now/time/precise

sql {update domains set user = 'u3' where domain like '%1%' and user like '%1%'}

print now/time/precise - start

disconnect %janko.db

halt

30-Apr-09 15:42
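For readers without the REBOL driver at hand, roughly the same benchmark can be sketched in Python's `sqlite3` module (an in-memory database here; absolute timings will differ from Pekr's):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE domains (
    id INTEGER NOT NULL PRIMARY KEY,
    domain VARCHAR NOT NULL,
    user VARCHAR NOT NULL,
    processed DATETIME NULL,
    ok_count INT NOT NULL DEFAULT 0,
    fail_count INT NOT NULL DEFAULT 0,
    error TEXT NULL)""")

# One transaction around all 1000 inserts, like the BEGIN/COMMIT above.
with conn:
    conn.executemany(
        "INSERT INTO domains VALUES (?,?,?,?,?,?,?)",
        [(i, str(i), str(i), None, i, i, str(i)) for i in range(1, 1001)])

start = time.perf_counter()
cur = conn.execute(
    "UPDATE domains SET user = 'u3' "
    "WHERE domain LIKE '%1%' AND user LIKE '%1%'")
conn.commit()
print(cur.rowcount, "rows updated in", time.perf_counter() - start, "s")
```

As in Pekr's test, the double LIKE forces a full scan of all 1000 rows, so this measures raw scan-and-update speed rather than index lookups.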
1037Jankoany query I try :) .. I talked about this to you already :) .. look a little up at 14-Apr30-Apr-09 15:12
1036JankoI talked about this in detail a while ago.. now I have optimised the problem so it's not a biggie any more, but I am still mad that all the rebol files can make changes in moments, while when I need to update, the sqlite page halts for a noticeable time30-Apr-09 15:11
1035Pekrshow me the table structure, show me the query :-)30-Apr-09 15:10
1034Jankoyes, and nothing big.. 5 simple rows30-Apr-09 15:09
1033Pekrah, single query?30-Apr-09 15:09
1032Jankoyes, I know about that .. it would come into play if I were doing 10 inserts for example .. with begin/commit it would take just 1/10 of the time, but I am doing a single sql statement here, and I tried begin/commit with this too.30-Apr-09 15:08
1031PekrHave you tried putting BEGIN transaction and END transaction around the commands? Because if not, then it commits each time. The difference is going to be ... well, drastic ...30-Apr-09 15:00
1030Jankoand this is no problem with rebol binding as the delays are the same if I go to sqlite3 shell30-Apr-09 14:46
1029Janko(I tried rebDB on same VPS - localy where comp. is fast I notice no delay at sqlite either)30-Apr-09 14:43
1028JankoI use it for a very simple task, just so it takes care of locking of messages to bots that can come from multiple processes.. but at my small/cheap VPS that I use for bots, update/insert/delete takes multiple seconds, which is very bad.. all rebol writes/deletes, which it does with normal files, return immediately on the same VPS, and yesterday I tried rebDB and it was also much much faster for insert/update/delete (almost no noticeable delay) for the same amount of data (300 lines) as sqlite.. funny thing is that sqlite delays the same at these operations whether there are 300 or 5000 rows in the table30-Apr-09 14:42
1027PekrSQLIte is fast for simple to middle local stuff. I have few obstacles with it

1) it stores everything in one file. You can't use the simplicity of the file-system for simple back-up purposes. Attaching DBs (max 10, a precompiled value) is not an option, as then transactions are not atomic
2) it is not secure - can't be secured easily, because encryption is not part of the package
3) serverless (both an advantage = no install, but also a disadvantage). It provides locking. They claim multiple instances of an app can access one file, but I did not find more info on that. Dunno how granular the locking can be. You have to create a server front-end yourself ...

30-Apr-09 14:27
1026Jankoonly that fast (at sqlite) is still a little problematic to me30-Apr-09 14:24
1025Pekrrebol.com motto - Web 3.0 starts here. Smarter, faster, better.30-Apr-09 14:23
1024MaximI love the sqlite motto :-)

"Small. Fast. Reliable. Choose any three."

30-Apr-09 14:13
1023PekrReading thru SQLite changelog found this - http://sqlite.org/rtree.html ... not sure of its practical usability, but some of you might find it interesting :-)30-Apr-09 14:09
1022sqlabnormally there is, but not always. Most socket connections are static, they stay connected for hours or longer.21-Apr-09 13:52
1021RobertHm... normaly there should be event/data21-Apr-09 11:27
1020sqlabyes21-Apr-09 11:26
1019RobertEvent = TCP/IP event?21-Apr-09 11:10
1018sqlabAs I know, that there should be data available, I read again after timeout21-Apr-09 10:50
1017sqlabIf I have some processes serving incoming events and sending their data to a central storage process, the central process sometimes seems to get an event, but not the data with that event. When the data really arrives, I do not get the event. Maybe he got the first event meant for an other connection21-Apr-09 10:45
1016RobertDon't understand the problem. Is it realted to my suggestion or your current try?21-Apr-09 10:40
1015sqlabRobert, that could be a solution. Unfortunately I observed a crosstalk of events with many ip connections, meaning one process is awakened without data available if I additionally send data on the same server via ip21-Apr-09 10:38
1014RobertI'm pretty sure a proxy process can handle 200req/s at the frontend and backend. So if 200req/s is the maximum you can get from one process, adding more processes just devides this down. But it won't scale up.21-Apr-09 10:36
1013sqlabI get up to 200 inserts with just one process, with two concurrent process this slows down to 30 to 60 per second21-Apr-09 10:32
1012sqlabMaybe my test settings are different from your requirements. During testing I let two concurrent processes insert a record with three fields, one ip address, a timestamp and one text field with variable length from 1k to 8k., and this i do with full speed21-Apr-09 10:29
1011RobertYou can add things like priority, read-only support etc.21-Apr-09 10:29
1010RobertIMO you are much more flexible anyway.21-Apr-09 10:28
1009Robertsqlab, use a proxy process that does the priorization. Send requests from client to proxy-process and this one handles 1 connection to the database.21-Apr-09 10:28
1008Jankohow are you testing it.. how many writes/sec .. processes?21-Apr-09 10:28
1007Jankohm.. too bad21-Apr-09 10:24
1006sqlabno difference, sooner or later I get a lock, even with 100 retries, after less than 1000 statements21-Apr-09 10:24
1005sqlabmy mistake, I forgot to set the number of retries during connect. I will test again21-Apr-09 10:16
1004JankoHe tried only 100 times (by looking at the example he gave) .. and I don't know what he means by 500 >>> in line with mysql and others- at 500+ simultaneous users you start to get about a 10% drop because of lock overhead. <<<

Most probably not 500 processes writing all the time ... without delays

21-Apr-09 10:10
1003sqlabyes, it reduces, but they happen still21-Apr-09 10:05
1002sqlabdo-step seems to retry until not busy21-Apr-09 10:04
1001Jankoaha .. does it at least significantly reduce the occurrence of the lock or is it the same as no retry?21-Apr-09 10:04
1000sqlabsorry, that was the number of statements until a lock occurs21-Apr-09 10:02
999Jankowhat disconnection and connecting? :)21-Apr-09 10:01
998sqlaba few 10k times21-Apr-09 10:01
997Jankohow many times do you retry?21-Apr-09 10:01
996Jankoaha ... hm.. I havent tried it myself yet21-Apr-09 10:00
995sqlabtrying again is already implemented in sqlite.r in the function do-step. I expanded the wait time with a random duration of up to one second. Still not good enough. And now additionally I connected and disconnected according to the link. Again I encountered locks21-Apr-09 9:57
994JankoI understood the text in the link as: if you get a lock, you wait for a while and try again.. and that by this method it scales to quite a big number of concurrent processes compared to mysql for example21-Apr-09 9:40
993sqlabJanko: I just tested again two concurrent processes writing into one table. This time I used the approach connecting and closing for each statement, as the link you gave suggested.. I still got locking.21-Apr-09 9:17
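The retry-on-lock approach discussed here can be sketched as follows in Python's `sqlite3` module (the helper name and back-off constants are made up; `sqlite3` also has a built-in busy wait via the `timeout` argument to `connect`):

```python
import random
import sqlite3
import time

# Hypothetical helper: run one short write transaction, backing off and
# retrying when SQLITE_BUSY surfaces as an OperationalError.
def execute_with_retry(conn, sql, params=(), retries=10):
    for attempt in range(retries):
        try:
            with conn:  # commits on success, rolls back on error
                conn.execute(sql, params)
            return True
        except sqlite3.OperationalError as exc:
            if "locked" not in str(exc):
                raise
            # Randomized, growing back-off, as sqlab tried with do-step.
            time.sleep(random.uniform(0.01, 0.1) * (attempt + 1))
    return False

conn = sqlite3.connect(":memory:", timeout=5)  # built-in busy wait (seconds)
conn.execute("CREATE TABLE log (msg TEXT)")
ok = execute_with_retry(conn, "INSERT INTO log VALUES (?)", ("hello",))
print(ok)
```

As the thread shows, retries only reduce how often locks bite; with sustained concurrent writers, funnelling writes through one process (Robert's proxy suggestion) scales better.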
992amacleodworks...thanks a lot. I had this fear of having to change large parts of my code if I added or removed columns...This solves that problem.17-Apr-09 1:53
991sqlabshould be like insert into table (col3, col2, col1) values (?, ?, ?)16-Apr-09 20:42
990amacleodFor example: SQL reduce [{UPDATE books SET bk=?, chap=? WHERE ref_number=?} blk/2 blk/3 blk/1]16-Apr-09 19:40
989amacleodCan you do the same for inserting values? I know I can do it for updating row data but I can not figure out the syntax for inserting..16-Apr-09 19:39
988amacleodThat sounds helpful..thanks sqlab.16-Apr-09 18:13
987sqlabIn sql there is no need for reordering the column order, as you can get any order you desire by using the column names in your select statement.16-Apr-09 17:48
986Jankoalter table supports just renaming tables and adding columns; for everything else you need to create a new table -> insert .. select ... the data, drop the old one and rename the new16-Apr-09 15:57
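The rebuild sequence Janko describes, sketched as plain SQL run through Python's `sqlite3` module (table and column names are made up; recent SQLite versions also support RENAME COLUMN and DROP COLUMN directly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (a TEXT, b TEXT, c TEXT);
INSERT INTO t VALUES ('1', '2', '3');

-- Rebuild to get a different column order: create, copy, drop, rename.
CREATE TABLE t_new (c TEXT, b TEXT, a TEXT);
INSERT INTO t_new SELECT c, b, a FROM t;
DROP TABLE t;
ALTER TABLE t_new RENAME TO t;
""")
row = conn.execute("SELECT * FROM t").fetchone()
print(row)  # columns now come back in c, b, a order
```

As sqlab notes above, naming the columns in the SELECT usually makes the physical order irrelevant, so this rebuild is rarely necessary.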
985amacleodIs there a way to reorder columns in SQLITE?16-Apr-09 15:55
984Pekryou can use varchar even for indexes, it should speed up things significantly ...14-Apr-09 19:28
983Jankobut the changes you proposed will help me get it faster anyway and I will try them14-Apr-09 19:13
982OldesYes.. it's possible, that the VPS does many other disk IO oparations so you have to wait.14-Apr-09 19:09
981JankoI used sqlite here and there for more real DB work and I never saw any critical slowness (except if you do the typical thing like inserting 100 rows each in its own transaction (without begin commit)); in fact it always seemed very fast to me ... that's why I suspect all this wouldn't show up if I had a better VPS. Also, because it fluctuates so much, I suspect the disk on the computer the vps is on is maybe busy doing other stuff, so at one moment it is idle and works faster, at another it waits for >3 seconds14-Apr-09 19:07
980Jankoaha, I will try that .. and I can use Integer for user ... because now each user gets a folder like u<Int> ... and I could maybe put all 3 mailboxes into the same table so it would be only 1 insert / update instead of 3 on changes ... I didn't think performance would be the problem here (it still seems a little funny), because it's just a simple table without any relations or anything and no big amount of data14-Apr-09 19:03
979Oldesalso... if you want to index domain and user, you should not use varchar without length.14-Apr-09 18:59
978Jankoaha, thanks for idea.. I will use explain .. although it's so simple query that it shouldn't do any strange things I think14-Apr-09 18:55
977Jankoyes, I think so too.. it is interesting that the time looks the same on 4000 rows ... maybe reason for this is also because I am on some very cheap VPS (12EUR, 180MB ram, ? CPU) ...14-Apr-09 18:54
976OldesI'm not an sqlite user but I would probably try to use EXPLAIN to see what's going on when you do the query - http://www.sqlite.org/lang_explain.html 14-Apr-09 18:52
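EXPLAIN QUERY PLAN is the quickest way to see whether an index is actually used; a sketch in Python's `sqlite3` module against a table shaped like Janko's (the exact detail strings vary between SQLite versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE domains (id INTEGER PRIMARY KEY, domain TEXT, user TEXT)")

query = "SELECT * FROM domains WHERE domain = 'x' AND user = 'y'"

# Without an index the planner reports a full table scan ...
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

conn.execute("CREATE INDEX dom_user ON domains (domain, user)")

# ... and with the composite index it switches to an index search.
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(before[0][-1])
print(after[0][-1])
```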
975Oldes1s to 5s on 180 rows is bad. whatever you do.14-Apr-09 18:51
974JankoI had it without indexes at first, and later added indexes while I was trying various things; at 180 records there wasn't any noticeable change. Well, the result doesn't seem so bad to me right now.. if it has the same delay with 4000 records it's okay-ish. On my local computer, which is much better than some small VPS, I noticed no delays. I just realized that the delay at the web-app was 3x bigger than this because I have 3 bots and each has its own "mailbox" ... The solution for this situation will be offloading the inserts from the request process; in the future when things need to scale up I will try doing this differently anyway, that's why I was playing with actor-like systems14-Apr-09 17:42
973sqlabIf you update an index field, the index too has to be updated. Do you open and close your db, as it is recommended in the link you posted? Then you have to add the time for that too.14-Apr-09 15:56
972Jankohave to go..storm14-Apr-09 15:26
971Jankohm.. very interesting results... and positive basically :) with 4000 records insert time, and update time for the same query is just the same as with 183 , and select is fast as before... then it's survivable .. I was afraid that if with so few rows it's so bad with couple more it will be unworkable14-Apr-09 15:25
970Janko(maybe I just need a better VPS)14-Apr-09 15:08
969Jankobut I am sure even if I write that data inf ile as rebol blocks and load and foreach them to find the one it would take far less than 1s , it's just 180 rows!!!! :)14-Apr-09 15:07
968Pekr... well, I am not good at sql internals, so .... try various combinations, and you'll see ..14-Apr-09 15:06
967Pekrsimply put - how db chooses, which index to use? You have them separate as well as mixed. I would use mixed index (domain,user) only if those two fields would be defined as a primary key together ...14-Apr-09 15:05
966PekrI would left out first index and add index for your primary key .... (not sure though :-)14-Apr-09 15:04
965JankoI did before when trying stuff:
index|dom_user|domains|11|CREATE INDEX dom_user on domains ( domain, user )
index|dom1|domains|16|CREATE INDEX dom1 on domains ( domain )
index|user1|domains|21|CREATE INDEX user1 on domains ( user )
14-Apr-09 15:03
964PekrOK, index you ID field, then also user and domain field14-Apr-09 15:01
963JankoI will insert random rows so I will have couple of 1000 , and then I will see what I get14-Apr-09 15:01
962JankoI added the indexes to both, and each separate .. but it's roughly the same ... before indexes, select returned immediately on the same where14-Apr-09 15:00
961Janko"u3" is just here for testing purposes, so I can repeat the query :)14-Apr-09 14:59
960JankoBecause times vary from 1s to 5s I suspect that the vps's disk or something might be a little overburdened, but as I said, 300kb of rebol (this is 10kb) worked in the ms range according to cheyenne14-Apr-09 14:58
959Pekrbtw - why do you set user="u3" for records, where user is already of "u3" value?14-Apr-09 14:58
958Pekrtry to index domain, and maybe even user fields, or it will go sequentially thru all of record lines ...14-Apr-09 14:57
957JankoI tried now with a transaction .. it has no point here because it's just one query at a time ... but results were as expected.. the update did finish immediately, but the commit took the 1-3s time also14-Apr-09 14:56
956JankoCREATE TABLE [domains] ( [id] INTEGER NOT NULL PRIMARY KEY, [domain] VARCHAR NOT NULL, [user] VARCHAR NOT NULL, [processed] DATETIME NULL, [ok_count] INT NOT NULL DEFAULT 0, [fail_count] INT NOT NULL DEFAULT 0, [error] TEXT NULL )

test query is as straightforward as it can be:

update domains set user = "u3" where domain = "www.todelete.com" and user = "u3";

14-Apr-09 14:54
955JankoThis is a very small VPS, but I have 300kb raw rebol data structures in ordinary files, and I edit and seek them without any indexes and it works immediatelly.. I only moved this part of data to sqlite because it handeles the file locking ( these are sort of mailboxes so that the app and bots can communicate over them )14-Apr-09 14:51
954JankoPekr.. I will14-Apr-09 14:48
953JankoI also added the indexes now and it is maybe a little faster on a single where, but on both that I need it looks more or less the same (select by the same condition is immediate)14-Apr-09 14:47
952Pekrcan you post whole query?14-Apr-09 14:44
951JankoI don't get this ... I started getting very long loading times with my webapp when I changed or inserted data, and it was very fast before ... now I saw that it's sqlite making these delays.. this is not a problem of sqlite.r but of sqlite itself, because I get the same behaviour with the sqlite3 shell.

But I can't believe this, I am certain I am doing something wrong.. I remember sqlite can handle GBs of data and is very fast, but in my case... I have 183 rows in a simple 5 column table (db file is 10kb) .. if I do a single update table X set y = ".." where Z = ".."; it takes like 3 seconds. This updates just 1 row out of 183. Does anyone have any idea? I tried the "vacuum" command but it's the same after it.

14-Apr-09 14:39
950Janko.14-Apr-09 14:29
949Jankothis might be useful to users of sqlite.. I yesterday encountered the "db is locked" error and it got me a little worried, but this (simple) approach seems to solve it: http://itsystementwicklung.de/pipermail/list-pysqlite/2009-April/000380.html 11-Apr-09 15:32
948amacleodI did not realize sqlite.r was set up to use libsqlite3.so (linux libs) Got my app running on linux without a hitch... Auto detects OS...great!1-Apr-09 4:10
947Jankothanks Doc!17-Mar-09 20:10
946DockimbelYou don't need to use apt-get for sqlite, just download the latest library and put it in your app folder near sqlite.r : http://www.sqlite.org/sqlite-3.6.11.so.gz (I guess you'll need to rename it to libsqlite3.so).17-Mar-09 19:39
945Jankoaha.. stupid me.. apt-get usually doesn't provide the latest versions but more "stable" ones ... yes it's 3.3.817-Mar-09 18:35
944DockimbelHave your application execute the following SQL statement and then look at the result : select sqlite_version(); AFAICT, sqlite3_prepare_v2 is available starting from SQLite 3.5.2.17-Mar-09 18:33
943JankoI hope this doesn't mean something else won't work as it should17-Mar-09 18:29
942Jankoha.. I changed sqlite_prepare_v2 to sqlite_prepare in sqlite.r and now rebpro and cheyenne work!!17-Mar-09 18:24
941Jankoah... I looked at the history of this chat and saw that there is a debian package libsqlite3-dev ... I installed this and now I have libsqlite3.so .. it seems rebpro finds it without copying it to the local directory now, but it still gives the same "Cannot open sqlite3_prepare_v2" error17-Mar-09 18:06
940Jankowhen I installed sqlite with apt-get install sqlite3 I got 2 files in the /usr/lib/ directory: ./usr/lib/libsqlite3.so.0.8.6 and ./usr/lib/libsqlite3.so.0 ... if I run rebol it doesn't find sqlite, so I copied one of them (I tried with both, they are the same size btw) to the directory where rebol / rebpro and sqlite.r are, and renamed it to libsqlite3.so, and then it seems to find it ... now this is just common sense, as I am not that experienced with this on linux17-Mar-09 17:58
939Janko-s probably means without asking for permissions... I tried now rebpro -s -q but I got the same .so error .. maybe I didn't prepare the .so files well .. I am just looking into the docs

>> do %sqlite.r
** Access Error: Cannot open sqlite3_prepare_v2
** Near: *prepare: make routine! [db [integer!] dbq [string!] len [integer!] stmt [struct! [[integer!]]] dummy [struct! [[integer...

17-Mar-09 17:51
938Oldeshttp://www.rebol.com/docs/words/wusage.html 17-Mar-09 17:49
937Oldesyou have to start it with the -s switch and good is to use the -q switch as well.17-Mar-09 17:48
936Jankoaha.. I am getting somewhere .. it allows it now, but I get some error with the .so .. maybe this is the reason cheyenne can't open it either

>> do %sqlite.r
Script: "SQLite driver" (26-Nov-2008)
REBOL - Security Check: Script requests permission to open a port for read/write on: libsqlite3.so
Yes, allow all, no, or quit? (Y/A/N/Q) Y
** Access Error: Cannot open sqlite3_prepare_v2
** Near: *prepare: make routine! [db [integer!] dbq [string!] len [integer!] stmt [struct! [[integer!]]] dummy [struct! [[integer..

17-Mar-09 17:41
935Jankowow.. first rebol download > 1MB :)17-Mar-09 17:31
934Jankohu, I never saw that download page :)17-Mar-09 17:22
933OldesAt least what I was trying, there was the license message on boot, but I could use the /library17-Mar-09 17:19
932Oldeshttp://www.rebol.net/builds/#section-117-Mar-09 17:18
931Oldesjust download it and try it17-Mar-09 17:18
930OldesI think you don't need the license17-Mar-09 17:17
929JankoI am an new/old user so I am still on free versions of rebol.. I gues now is time to buy the SDK17-Mar-09 17:13
928JankoOldes: yes, it looks that way by the error given17-Mar-09 17:10
927OldesOr you can use Rebpro from Rebol's SDK17-Mar-09 17:01
926OldesI think the library calls are not enabled in linux version of Core.. ask Carl.17-Mar-09 16:58
925JankoDid anyone try using sqlite.r from Dobeash in free rebol code 2.7.6 on linux. (debian). I get "feature not available in this REBOL" but it works on windows.17-Mar-09 16:08
924Janko.17-Mar-09 15:59
923Jankoa view into db file as hex quickly resolved which version is which, it is written at the start, and so I saw what is going on, thanks Petr11-Mar-09 8:12
922JankoIt worked for me but now I started to get a "Not a database or encrypted" error and I couldn't figure out why.. I discovered that the sqliteadmin app was making v2 files now and that was the problem.11-Mar-09 8:05
921PekrIIRC it supports version 3 of the driver. Dunno the name of .dll. But - just rename sqlite3.dll to sqlite.dll and run few tests, no? :-)11-Mar-09 7:59
920JankoI can't seem to find info on whether dobeash's sqlite.r should work with sqlite3.dll or sqlite.dll, does anyone have any idea?11-Mar-09 7:52
919BrianHI agree. None are planned at the moment, but it is likely that there will be more in the future.1-Mar-09 15:01
918GrahamI think it will be a long time before R3 is usable in the real world ... so my understanding is that there will still be 2.7 releases.1-Mar-09 6:37
917BrianHThere may be some changes to behavior that was so broken that it couldn't possibly have been relied on by existing code, like the change in return value of ALTER in 2.7.6. I would be surprised if the INSERT/dup/part bug Carl found this week isn't fixed in the next R2 release, for instance, whenever that release happens.1-Mar-09 5:03
916BrianHWe find new bugs in R2 natives all of the time - Carl just found one in the last week. There will likely be bugfix releases, but no incompatible changes will be made to R2 anymore, afaict. All of the new functions in R3 and incompatible improvements to existing functions will likely be made in a separate script that you can load or encap. That script will be maintained outside of the R2 update cycle. I think that will be the plan, at least.1-Mar-09 5:00
915?like = think1-Mar-09 2:38
914?I like 2.7.6 is the last of the 2.x series. Brian may release a library containing updated mezzanines and patches.1-Mar-09 2:38
913BrianHRight now 2.7.7 is not being worked on at all. That sounds like a good thing to fix when we do start working on 2.7.7.1-Mar-09 1:26
912GrahamIs this GC problem being worked on for 2.7.7 ??1-Mar-09 1:15
911amacleodRobert, 'recycle' worked like a charm! Thanks a lot. I had the same problem when reading from the DB so I used 'recycle again in the query loop.... This had the added benefit of sealing a memory leak with the images too. Memory would increase every time a new image was displayed but now it tops out at an acceptable level..1-Mar-09 0:40
910RobertOverall I don't care too much which way we go. It must be simple and straightforward to use. And we must have a way on the Rebol side to provide the C side of life. Like pointers and structs etc. without tricks.27-Feb-09 17:04
909RobertI don't know anything about the plug-in approach at the moment, so I can't say.27-Feb-09 17:04
908PekrRobert - what is your preferable method for "DLL interface"? Is it a mixture of rebol dialect (as in R2) and plug-in approach?27-Feb-09 16:09
907amacleodThanks Robert.27-Feb-09 14:30
906RobertThat's one cause why I want to get my hands on the external library interface in R3 ASAP.27-Feb-09 14:30
905RobertYou have to try. The SQLite driver uses a heuristic every 100 rounds or so. Just play around to see if it makes a difference.27-Feb-09 14:30
904amacleodSo at what point would I use recycle? After each insert? in my foreach loop?27-Feb-09 14:24
903RobertI think it's on the way from Rebol to DLL.27-Feb-09 14:23
902amacleodI'm able to get a large set of results from mysql and use it (view the images in a layout) but when I try to insert this data into sqlite it seems to get corrupted... It sounds like a sqlite problem...27-Feb-09 14:04
901amacleodNo, how would that work when working with large database results?27-Feb-09 13:54
900RobertIIRC there is a bug in Rebol's GC that can show up when using big datasets and corrupts data. Have you tried to play around with RECYCLE?27-Feb-09 8:13
899amacleodActually size of the image does not seem to be the prob as this works: SQL reduce [{insert into images values (?,?,?,?,?,?,?,?)} "img/1" "img/2" "img/3" "img/4" "img/5" pic "img/7" "img/8"] where pic is a large 4000x3000 full color photo. I get no error. But if I loop 50 and insert the above data 50 times I get an error???27-Feb-09 6:14
898amacleodI see what might be the prob... Some images are large... Too large? And may be getting truncated.

I was using binary... should I use blob?

27-Feb-09 5:51
897amacleodI'm getting errors when I try to insert a binary file into sqlite

I have no problem when I read/binary an image from disk and insert it, but when I download it from a mysql db it's saving as those crazy text characters.

I'm converting it back to binary with "to-binary" and when I probe it it looks right but it keeps converting back to the original mysql output...

Any ideas what might be going on?

I can view the output image from mysql when I use to-binary so I know that it's not corrupted.

27-Feb-09 5:32
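The corruption described above typically means the bytes were treated as text somewhere en route. For comparison, a minimal sketch in Python's stdlib sqlite3 (not the REBOL driver): binding the raw bytes as a BLOB parameter keeps them intact end to end.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (name TEXT, data BLOB)")

payload = bytes(range(256)) * 100  # stand-in for image data, every byte value
conn.execute("INSERT INTO images VALUES (?, ?)", ("img/1", payload))
conn.commit()

(restored,) = conn.execute(
    "SELECT data FROM images WHERE name = ?", ("img/1",)).fetchone()
assert restored == payload  # BLOB parameters round-trip byte-for-byte
```

The key point carries over to any binding: the driver must hand the data to SQLite as a blob, not splice it into the statement text.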
896PekrOldes - thanks - that is an interim solution for R2 :-)21-Jan-09 13:56
895Oldesthis is not sqlite related, but: do http://box.lebeda.ws/~hmm/rebol/projects/ucs2/latest/make-ducet.r21-Jan-09 11:32
894RobertOk, now I got it. ;-)21-Jan-09 10:13
893PekrI want to sort in SQLite. But when you receive your recordset in REBOL, you put it into a grid, for example, which has a facility for sorting columns. Then you click a particular column, and the grid sorts your result recordset using REBOL's built-in 'sort function ... and the result is wrong ... (well, this is a minor issue; the important thing is to get the correct resultset from the query. I just tried to say that R3 has to address some localisation principles itself too ....)20-Jan-09 21:51
892RobertBut you can sort today in R2. Why do you need SQLite collations if you don't want to sort in SQLite?20-Jan-09 14:26
891Pekr... one more point for the future. Imagine obtaining the correct order from SQL, then using a REBOL-level grid and its column sort facility. I think that we also will need to get such things addressed in R3 directly, or it will distort sort order ...20-Jan-09 14:01
890PekrREBOL SQLite DLL? I don't want other DLL ... We need better interfacing to do it in REBOL as a binary, with back-pointer from C level :-)20-Jan-09 13:54
889RobertAnd then have those collations hard-coded into a Rebol SQLite DLL.20-Jan-09 13:47
888RobertSET-SORTING-MODE "CZ"20-Jan-09 13:47
887PekrThere is some collation function which we need to wrap. I posted it here some time ago, but we were not successful in wrapping and utilising it. I also tried to look into IIRC Python sources, and it was not clear to me how to specify it at the REBOL level. IIRC it is a callback-type function ...20-Jan-09 13:00
886RobertDo you have any references to a country-specific sorting implementation? Then I can take a look at how to add it.20-Jan-09 12:33
885RobertPutting this into SQLite would require that we add those country-specific sorting rules at the C level and provide a Rebol call to select the correct sorting.20-Jan-09 12:33
884RobertAh, ok. How about sorting in Rebol? Does this help?20-Jan-09 12:32
883Pekruse case? proper national sorting? If you do some SELECT on a field like last name, and you want some in-between results, e.g. A - D, then the Czech alphabet has C with a hook upon it, and it is supposed to sort right after C, but without collation support it will sort after Z ....20-Jan-09 10:48
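A sketch of the collation-callback mechanism being discussed, using Python's stdlib sqlite3 wrapper around the same sqlite3_create_collation C interface. The czech_key function is a toy that only handles the single case Pekr mentions ("č" sorting right after "c"); real Czech collation has many more rules:

```python
import sqlite3

def czech_key(s):
    # Toy rule: "č" sorts immediately after "c"; everything else as-is.
    return [("c", 1) if ch == "č" else (ch, 0) for ch in s.lower()]

def czech_collate(a, b):
    ka, kb = czech_key(a), czech_key(b)
    return (ka > kb) - (ka < kb)  # -1, 0, or 1, as SQLite expects

conn = sqlite3.connect(":memory:")
conn.create_collation("czech", czech_collate)
conn.execute("CREATE TABLE t (name TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("z",), ("č",), ("c",), ("d",)])

rows = [r[0] for r in
        conn.execute("SELECT name FROM t ORDER BY name COLLATE czech")]
assert rows == ["c", "č", "d", "z"]  # "č" lands after "c", not after "z"
```

Wrapping this in REBOL is exactly the callback problem discussed here: the host language must expose a comparison function that SQLite can call back into during ORDER BY.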
882RobertDo you have a use-case for this? Never had this requirement.20-Jan-09 9:07
881RobertIs this used to define user defined comparing functions?20-Jan-09 9:07
880PekrIt would be nice to support collations too, but I was not successful in wrapping that functionality ....20-Jan-09 8:58
879RobertThe newest version of SQLite adds support for nested transactions. I'm going to look at the SQLite driver to see how we can support this.20-Jan-09 7:53
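SQLite exposes nested transactions as SAVEPOINTs (added in SQLite 3.6.8, which matches the release Robert refers to). A minimal sketch with Python's stdlib sqlite3, not the REBOL driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
conn.execute("CREATE TABLE log (msg TEXT)")

conn.execute("BEGIN")
conn.execute("INSERT INTO log VALUES ('outer')")
conn.execute("SAVEPOINT inner_sp")            # nested transaction begins
conn.execute("INSERT INTO log VALUES ('inner')")
conn.execute("ROLLBACK TO inner_sp")          # undo only the nested part
conn.execute("COMMIT")                        # outer work is kept

rows = [r[0] for r in conn.execute("SELECT msg FROM log")]
assert rows == ["outer"]
```

A driver only needs to issue SAVEPOINT / RELEASE / ROLLBACK TO statements, so supporting this from REBOL is mostly an API-design question.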
878RobertOk, I thought there was a different trick. Well, I'm not using JOINs nor VIEWs a lot in my app. I prefer to get back Rebol blocks and traverse these and collect what I need. Much simpler than hacking long SQL statements.4-Jan-09 20:06
877sqlabThe new field OldID holds the former primary key. You have to join via the oldID instead of the primary key. If you can not alter your select statement, maybe you can generate an adequate view.4-Jan-09 17:54
876Robertsqlab, don't understand what you mean. How will an oldID help me to get all records back together? This would require that the application knows about oldID.4-Jan-09 13:36
875sqlabI just tried using an extra object with only the reference pointers. But I never checked if it is working with more than one opened database, because of the limitations regarding simultaneous access by more than one process.4-Jan-09 13:27
874sqlabWhy not transfer the old ID to a new indexed field oldID?4-Jan-09 13:24
873Pekrof course you are right. You just have to be careful or you could end up with some "dead" child records.4-Jan-09 12:19
872RobertHow else will you do it if you transfer one set of related records from database A to a database B?4-Jan-09 11:05
871Pekrrewriting IDs? A risky business :-) I have never done anything like that.4-Jan-09 11:01
870RobertA bit OT: Has anybody an idea how a "schema driven" database export does/could work?

I have an application that uses some tables, and records are linked by primary index IDs. Now I want to export a record and all its dependent records either into a new database or over the network to some other process.

Because ID ranges are different in the export target database or on the remote server, I need to rewrite the old IDs with the new ones.

At the moment I have a hand-written, very app-specific (and error prone) function for this. But I would like to do this in a much more generic fashion. Maybe just specifying the relationship with some simple dialect and then have a generic function collecting everything.

4-Jan-09 9:13
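One generic shape for this kind of export, sketched in Python's stdlib sqlite3 with a hypothetical books/chapters schema (none of these names come from Robert's app): copy parent rows first, record the old-to-new primary key mapping as you go, then rewrite the foreign keys of dependent rows from that map.

```python
import sqlite3

SCHEMA = """
CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE chapters (id INTEGER PRIMARY KEY, book_id INTEGER, name TEXT);
"""
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.executescript(SCHEMA)
dst.executescript(SCHEMA)

dst.execute("INSERT INTO books (title) VALUES ('existing')")  # IDs will differ
src.execute("INSERT INTO books (id, title) VALUES (7, 'manual')")
src.execute("INSERT INTO chapters (book_id, name) VALUES (7, 'intro')")

# Pass 1: copy parent rows, remembering old -> new primary keys.
id_map = {}
for old_id, title in src.execute("SELECT id, title FROM books"):
    cur = dst.execute("INSERT INTO books (title) VALUES (?)", (title,))
    id_map[old_id] = cur.lastrowid

# Pass 2: copy dependent rows, rewriting their foreign keys via the map.
for book_id, name in src.execute("SELECT book_id, name FROM chapters"):
    dst.execute("INSERT INTO chapters (book_id, name) VALUES (?, ?)",
                (id_map[book_id], name))

row = dst.execute("SELECT book_id, name FROM chapters").fetchone()
assert row == (id_map[7], "intro") and id_map[7] != 7
```

The schema-driven version would derive the two passes from a declared parent/child relationship instead of hard-coding the table names, which is essentially the dialect Robert is asking for.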
869RobertFYI: I'm currently adding some stuff to Ashley's SQLite driver to:

1st: Handle in memory databases (":memory:") 2nd: To handle connection to more than one database file at once.

So, if someone did this already please let me know :-)

4-Jan-09 9:10
868ManuM. .23-Dec-08 15:16
867BrianHI am not as familiar with the bugs in your SQLite access infrastructure as I am with SQLite itself, so I helped where I could :(22-Dec-08 20:38
866amacleodThanks for all the help.22-Dec-08 20:37
865BrianHThat is a good approach anyways, as it will help prevent SQL injection attacks.22-Dec-08 20:36
864amacleodBrianH,

It's working... I just do not know how to make it dynamic.

22-Dec-08 20:36
863amacleodTo make it more sophisticated I can parse the search input for "aluminum ladders" and separate words as having "and" between each... like most search engines. Then build it and insert it.22-Dec-08 20:35
862BrianHIf you can be certain that any ' in your strings is being escaped properly, that may be the way to go.22-Dec-08 20:34
861amacleodI guess I can build the {select * from fdbooks where ftext like '%ladder%'} string dynamically with rejoin and insert it:

srch: {select * from fdbooks where ftext like '%ladder%'} reslts: sql reduce [srch] == [[2 "FFP-LADDERS" "1-PORTABLE LADDERS" "2." " SIZES AND TYPES OF PORTABLE LADDERS IN USE^/" "" 4-Dec-2008/15:29:1 9] [4 "FFP-LADD...

22-Dec-08 20:32
860BrianHHave you considered whether it is a casing issue?22-Dec-08 20:31
859BrianHIt sounds like you are on the right track, but running into bugs in the REBOL SQLite access infrastructure.22-Dec-08 20:31
858amacleod>> srch: "aluminum" == "aluminum" >> reslts: sql reduce [{select * from fdbooks where ftext like '%'||?||'%'} srch] == []

>> srch: 'aluminum == aluminum >> reslts: sql reduce [{select * from fdbooks where ftext like '%'||?||'%'} srch] == [[9 "FFP-LADDERS" "1-PORTABLE LADDERS" "3." " CONSTRUCTION OF PORTABLE ALUMINUM LADDERS^/" "" 4-Dec-2008/15:29:19 ] [10 "FFP-LADD...

22-Dec-08 20:28
857amacleodinsert and update use "?" so you can use variables.

reslts: sql reduce [{select * from fdbooks where ftext like '%'||?||'%'} srch] == [] works when srch is a word! but not when it contains a string!

22-Dec-08 20:25
856amacleodThanks BrianH, But how do I use it dynamically?22-Dec-08 20:23
855BrianHIf you want to search for a multiword string, use LIKE '%aluminum ladder%', then use OR to add other clauses. The choices above, with examples: 1: like '%aluminum%ladder%' 2: like '%aluminum ladder%' 3: like '%aluminum%' and like '%ladder%' 4: like '%aluminum%' or like '%ladder%' The || operator means string concatenation, not or.22-Dec-08 20:22
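BrianH's four choices, and || as concatenation, can be verified directly. A minimal sketch using Python's stdlib sqlite3 (not the REBOL driver; the fdbooks table and its rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fdbooks (ftext TEXT)")
conn.executemany("INSERT INTO fdbooks VALUES (?)", [
    ("aluminum ladder",), ("ladder, aluminum",), ("wooden ladder",)])

def hits(sql, *params):
    return [r[0] for r in conn.execute(sql, params)]

# 1: both words, in that order, anything in between
assert hits("SELECT ftext FROM fdbooks "
            "WHERE ftext LIKE '%aluminum%ladder%'") == ["aluminum ladder"]
# 2: the exact phrase
assert hits("SELECT ftext FROM fdbooks "
            "WHERE ftext LIKE '%aluminum ladder%'") == ["aluminum ladder"]
# 3: both words, any order
assert hits("SELECT ftext FROM fdbooks WHERE ftext LIKE '%aluminum%' "
            "AND ftext LIKE '%ladder%'") == ["aluminum ladder",
                                             "ladder, aluminum"]
# 4: either word
assert len(hits("SELECT ftext FROM fdbooks WHERE ftext LIKE '%aluminum%' "
                "OR ftext LIKE '%ladder%'")) == 3
# || concatenates strings, so a variable can be bound into the pattern
assert hits("SELECT ftext FROM fdbooks "
            "WHERE ftext LIKE '%' || ? || '%'", "wooden") == ["wooden ladder"]
```

The last query is the same '%' || ? || '%' idiom sqlab suggests later in this thread: the placeholder carries the bare search term and SQLite glues the wildcards on around it.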
854amacleod2 and 3 I suppose22-Dec-08 20:16
853amacleodI need to search for a multi-word string "aluminum ladder" (in that order)... but also would like to be able to search for the occurrence of multiple words in any order: "aluminum" and "ladder" and "water" etc.22-Dec-08 20:14
852BrianHPick one of the 4.22-Dec-08 20:13
851BrianHAnswer my question first then I can answer yours :)22-Dec-08 20:13
850amacleodIt gives different results...

>> reslts: sql [{select * from fdbooks where ftext like '%aluminum%' || '%ladder%'}] == [[9 "FFP-LADDERS" "1-PORTABLE LADDERS" "3." " CONSTRUCTION OF PORTABLE ALUMINUM LADDERS^/" "" 4-Dec-2008/15:29:19 ] [10 "FFP-LADD...

>> reslts: sql [{select * from fdbooks where ftext like '%ladder%aluminum%' }] == [[11 "FFP-LADDERS" "1-PORTABLE LADDERS" "3.1.1" { Solid Beam Aluminum Construction- This type of ladder has a solid side rail co...

22-Dec-08 20:12
849BrianHAre you checking for whether - Both of the words are in the string in that order - Both of the words are in the string directly adjacent to each other in that order - Both of the words are in the string in any order - Either of the words are in the string in any order ?22-Dec-08 20:11
848amacleodHow do I do that with variables?22-Dec-08 20:09
847BrianHEquivalent to the above: {select * from fdbooks where ftext like '%aluminum%ladder%'}22-Dec-08 20:08
846amacleodsqlab, >> reslts: sql reduce [{select * from fdbooks where ftext like '%aluminum%' || '%ladder%'}] == [[9 "FFP-LADDERS" "1-PORTABLE LADDERS" "3." " CONSTRUCTION OF PORTABLE ALUMINUM LADDERS^/" "" 4-Dec-2008/15:29:19 ] [10 "FFP-LADD... works. Thanks again...but22-Dec-08 20:06
845sqlabDon't use the reduce22-Dec-08 20:06
844sqlaba || b is the sqlite syntax for concatenation of two strings. http://www.sqlite.org/lang_expr.html22-Dec-08 20:00
843Grahamoh ... sqlite ? ... ooops22-Dec-08 20:00
842BrianHSQLite has a CONTAINS extension?22-Dec-08 19:59
841GrahamInstead of LIKE, you can also use CONTAINS22-Dec-08 19:59
840GrahamAnd once you learn sql, then you can look at the dialect .... which is a simple substitution thing.22-Dec-08 19:57
839GrahamAnyway, you don't need to use the sql dialect ... just create your sql statement and then execute it.22-Dec-08 19:55
838GrahamI've never tested whether the sql dialect can replace ? inside single quotes.22-Dec-08 19:54
837BrianHOr REJOIN.22-Dec-08 19:54
836BrianHThose || are string concatenation, afaict. Use JOIN in REBOL for that.22-Dec-08 19:53
835GrahamI don't know sqlite's syntax ... but this looks wrong to me

reslts: sql reduce [{select * from fdbooks where ftext like '%aluminum ladders%'}]

it should simply be

reslts: sql {select * from fdbooks where ftext like '%aluminum ladders%'}

22-Dec-08 19:53
834Grahamthe sql statements are standard sql. You don't need to use the ? stuff .... ie. you can compose your statement before hand.22-Dec-08 19:52
833GrahamThen you substitute your variables ... that's it.22-Dec-08 19:51
832GrahamYou construct your sql statement and test it.22-Dec-08 19:51
831amacleodI do not see anything like that in any docs I can find.22-Dec-08 18:41
830amacleodI can find the sql commands but I do not know how to rebolize them.

Where can I find that I need to do this: '%'||?||'%' ?

22-Dec-08 18:41
829GrahamYou need a primer in sql ...22-Dec-08 18:22
828amacleodAnd how could I have figured this out myself? Is this syntax part of the sqlite.r interface or sqlite itself?22-Dec-08 16:13
827amacleodand while we are here... what about multiple words in a string... "ladders" and "aluminum" as opposed to "aluminum ladders"22-Dec-08 16:12
826amacleodWhat if I want multi-word string? as in; reslts: sql reduce [{select * from fdbooks where ftext like '%aluminum ladders%'}] == [[9 "FFP-LADDERS" "1-PORTABLE LADDERS" "3." " CONSTRUCTION OF PORTABLE ALUMINUM LADDERS^/" "" 4-Dec-2008/15:29:19 ] [10 "FFP-LADD...22-Dec-08 16:10
825amacleodNot string?22-Dec-08 16:07
824amacleodas word?22-Dec-08 16:06
823amacleodI got it! it works when srch: 'ladders22-Dec-08 16:06
822amacleodsqlab, reslts: sql reduce [{select * from fdbooks where ftext like '%'||?||'%'} srch] == [] where srch: "ladders" - no error, but not finding what should be there: reslts: sql reduce [{select * from fdbooks where ftext like '%ladders%'}] == [[2 "FFP-LADDERS" "1-PORTABLE LADDERS" "2." " SIZES AND TYPES OF PORTABLE LADDERS IN USE^/" "" 4-Dec-2008/15:29:1 9] [4 "FFP-LADD...22-Dec-08 16:06
821PeterWoodI got this unexpected behaviour: >> do %~/Code/Library/Rebol/sqlite.r Script: "SQLite driver" (26-Nov-2008) >> connect %~/Code/SQLite3/mydata.db ** User Error: SQLite out of memory ** Near: make error! reform ["SQLite" error] >> change-dir %~/Code/SQLite3 == %/Users/peter/Code/SQLite3 / >> connect %mydata.db == 340076822-Dec-08 14:59
820sqlabA concatenation like '%' || ? || '%' should work22-Dec-08 6:13
819Grahamyou have to create your dialected string first22-Dec-08 5:50
818amacleodIgnore that last message...Sorry22-Dec-08 1:23
817amacleodSQL reduce [{INSERT INTO table VALUES (?, ?, ?)} fname lname phone]22-Dec-08 1:22
816amacleodThis works: rslts: sql reduce [{SELECT * FROM books WHERE ftext LIKE '%string%' } ]22-Dec-08 0:59
815amacleodI got the 'like' command working with a text string but I can not figure out how to use variables... something like: rslts: sql reduce [{SELECT * FROM books WHERE ftext LIKE '%?%' } var]22-Dec-08 0:38
814GiuseppeCThis also brings me a question: what about SQLite and UNICODE ?19-Dec-08 19:02
813GiuseppeCAmacleod: if you search for ANSI 92 you should be able to find the complete specification.19-Dec-08 19:02
812amacleodThanks, I'll try it out in the morning. I googled but could not find it. I've had trouble finding a good source of docs for sqlite before. w3schools looks good. Thanks again all!19-Dec-08 4:24
811?Amacleod - http://www.w3schools.com/sql/sql_like.asp18-Dec-08 14:57
810sqlabselect * from table where f_text like '%table%'

but the wildcards may vary according to the database

18-Dec-08 14:54
809ChrisBegins with is LIKE '...%' Anywhere is LIKE '%...%' End is LIKE '%...' Rebol filenames: LIKE '%.r'18-Dec-08 12:24
808RobertSee SQLite homepage. It's something like this: LIKE '%...%'18-Dec-08 12:20
807amacleodsyntax?18-Dec-08 12:01
806sqlab"like" is your friend18-Dec-08 7:06
805amacleodLet me first make sure I need fts... I want to find a sub-string within a larger string: for example, find the rows that contain the word "table" in a column called f_text, one row of which might contain "When inserting into an fts table, if no docid is provided,". This would be a hit since "table" is contained in the string.18-Dec-08 1:39
804amacleoddoes sqlite.r support these fts tables?18-Dec-08 1:33
803amacleodhow do you search for a string within a column. Is that what they refer to as full text search? (fts)18-Dec-08 1:24
802AshleyYep, by default the driver molds and loads REBOL values. Use /direct if you don't want this behaviour.14-Dec-08 21:48
801GiuseppeCIt seems to work even with block of lines14-Dec-08 15:15
800GiuseppeCForget it, just found the solution. My fear was all about string handling but everything seems to be going well.14-Dec-08 15:11
799GiuseppeCHello, I am experimenting with SQLite and I have a question: How do I store a web page retrieved with READ onto a SQLite field ?14-Dec-08 15:03
798amacleodThat's what happens when you work in the middle of the night.4-Dec-08 16:57
797amacleodThanks Pekr.4-Dec-08 16:55
796amacleodFound it! I had my 'begin' of my transaction inside the loop.

dope!

4-Dec-08 16:55
795Pekrwhich version of the sqlite dll are you using? There was a bug with update, which I reported a year ago, which even corrupted data. They fixed it though in the next version. So hopefully this is not your case ...4-Dec-08 16:54
794amacleodI found part of the problem... If I'm updating multiple records it fails to update, but if I update one record it updates. I'm using a transaction here. Do I need to place each var in ()?4-Dec-08 16:46
793amacleodSorry I mean: nor this: SQL reduce [{UPDATE fdbooks SET bk=?, chap=?, sec_num=?, ftext=?, key_words=?, up_date=? WHERE ref_number=?} blk/2 blk/3 blk/4 blk/5 blk/6 (to-date blk/7) blk/1]4-Dec-08 16:29
792amacleodBanging my head against a wall!

This works: SQL reduce [{UPDATE notes SET note=?, up_date=? WHERE book=? AND chapter=? AND section=?} note_text now bk_tit/1 bk_tit/2 bk_tit/3] But this does not: SQL reduce ["Update fdbooks SET up_date=? WHERE ref_number=?" now ref] nor this: SQL reduce [{UPDATE notes SET note=?, up_date=? WHERE book=? AND chapter=? AND section=?} note_text now bk_tit/1 bk_tit/2 bk_tit/3] Any ideas?

4-Dec-08 16:26
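For comparison, the parameterized UPDATE shape itself is sound in SQLite. When such an UPDATE silently changes nothing, one common cause is a WHERE value whose type does not match what is stored. A sketch with Python's stdlib sqlite3 (columns are declared without types here so no affinity conversion applies; the table and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fdbooks (ref_number, up_date)")  # typeless columns
conn.execute("INSERT INTO fdbooks VALUES ('1', 'old')")

# Matches: the bound WHERE value has the same type as the stored one.
cur = conn.execute("UPDATE fdbooks SET up_date=? WHERE ref_number=?",
                   ("new", "1"))
assert cur.rowcount == 1

# Silently updates nothing: integer 1 never equals the stored text '1'
# when the column has no type affinity to trigger a conversion.
cur = conn.execute("UPDATE fdbooks SET up_date=? WHERE ref_number=?",
                   ("newer", 1))
assert cur.rowcount == 0
```

Since the REBOL driver molds values to text by default, a bound ref_number can easily end up compared as a different representation than the one stored, which fits the symptoms described.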
791RobertI see two options: 1. We enhance the driver to be able to handle more database handles at the same time. This needs a way to select a database handle as the current one.

2. We make the driver a prototype object which carries everything for one database handle. Then we need a way to state which instance to use.

4-Dec-08 12:59
790RobertAnd then the returned DB handle has to be used for all actions against this database file. It's much like a file handle.4-Dec-08 11:25

Return to Index Page