Learn and Be Curious

주종면 (Ju Jong-myeon): an expert who moved from Oracle to MongoDB



Murmur3Partitioner calculator

http://www.geroba.com/cassandra/cassandra-token-calculator/

6 nodes:

-9223372036854775808

-6148914691236517206

-3074457345618258604

-2

3074457345618258600

6148914691236517202
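The evenly spaced tokens above can be recomputed with a short sketch, assuming the standard Murmur3 ring of [-2^63, 2^63):

```python
def murmur3_tokens(n):
    """Evenly spaced initial tokens for n nodes on the Murmur3 ring [-2**63, 2**63)."""
    step = 2**64 // n  # ring size divided by node count
    return [-2**63 + i * step for i in range(n)]

print(murmur3_tokens(6))
```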


Originally, a column family is sorted in two directions:

① rows sorted ↓ by rowkey        ② columns sorted →

↓  rowkey1  c1 ...

                  v1 ...

    rowkey2

    rowkey3


In Cassandra,

rowkeys cannot be used for sorting (they are hashed),

so sorting is done on the columns instead.
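A toy illustration of why hashed rowkeys cannot be range-scanned: the position on the ring is the hash of the key, not the key itself, so ring neighbors are unrelated to lexical neighbors (md5 stands in for Murmur3 here).

```python
import hashlib

rowkeys = ['rowkey1', 'rowkey2', 'rowkey3']

# position on the ring is determined by the hash of the key,
# so the ring order has nothing to do with lexical order
ring_order = sorted(rowkeys, key=lambda k: hashlib.md5(k.encode()).digest())
print(ring_order)
```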


Cassandra hands-on practice

./start-all.sh

./bin/nodetool -h s1 status


1. Cassandra-CLI Shell : support dropped in 3.x

2. CQL Shell


[CLI]

./bin/cassandra-cli



[CQL]

[cas@s2 cassandra]$ ./bin/cqlsh

Connected to Test Cluster at localhost:9160.

[cqlsh 4.1.1 | Cassandra 2.0.13 | CQL spec 3.1.1 | Thrift protocol 19.39.0]

Use HELP for help.

cqlsh> use test1;


cqlsh:test1> drop keyspace testdb;

cqlsh:test1> CREATE KEYSPACE testdb with REPLICATION={'class':'SimpleStrategy', 'replication_factor': 3};

cqlsh:test1> use testdb;

cqlsh:testdb> 

cqlsh:testdb> 

cqlsh:testdb> CREATE TABLE users (

          ...      userid varchar,

          ...      username varchar,

          ...      password varchar,

          ...      email varchar,

          ...      PRIMARY KEY (userid)

          ...    );


cqlsh:testdb> insert into users (userid, username, password, email)

          ... values('gdhong', '홍길동', '1111', 'gdhong@test.com');

cqlsh:testdb> select * from users;


 userid | email           | password | username

--------+-----------------+----------+----------

 gdhong | gdhong@test.com |     1111 |   홍길동


(1 rows)








cqlsh:testdb> 

cqlsh:testdb> CREATE TABLE employees1 (

          ... empid int,  deptid int, empname text,

          ... PRIMARY KEY (deptid, empid)   

          ... );

cqlsh:testdb> CREATE TABLE employees2 (

          ... empid int,  deptid int, empname text,

          ... PRIMARY KEY (empid, deptid)   

          ... );

cqlsh:testdb> 

cqlsh:testdb> 

cqlsh:testdb> INSERT INTO employees1 (empid, deptid, empname) VALUES (10001, 1, '홍길동');

cqlsh:testdb> INSERT INTO employees1 (empid, deptid, empname) VALUES (10002, 1, '박문수');

cqlsh:testdb> INSERT INTO employees1 (empid, deptid, empname) VALUES (10003, 2, '이몽룡');

cqlsh:testdb> INSERT INTO employees1 (empid, deptid, empname) VALUES (10004, 1, '변학도');

cqlsh:testdb> INSERT INTO employees1 (empid, deptid, empname) VALUES (10005, 3, '성춘향');

cqlsh:testdb> INSERT INTO employees1 (empid, deptid, empname) VALUES (10006, 3, '갑돌이');

cqlsh:testdb> 

cqlsh:testdb> INSERT INTO employees2 (empid, deptid, empname) VALUES (10001, 1, '홍길동');

cqlsh:testdb> INSERT INTO employees2 (empid, deptid, empname) VALUES (10002, 1, '박문수');

cqlsh:testdb> INSERT INTO employees2 (empid, deptid, empname) VALUES (10003, 2, '이몽룡');

cqlsh:testdb> INSERT INTO employees2 (empid, deptid, empname) VALUES (10004, 1, '변학도');

cqlsh:testdb> INSERT INTO employees2 (empid, deptid, empname) VALUES (10005, 3, '성춘향');

cqlsh:testdb> INSERT INTO employees2 (empid, deptid, empname) VALUES (10006, 3, '갑돌이');

cqlsh:testdb> 

cqlsh:testdb> 

cqlsh:testdb> 

cqlsh:testdb> select * from employees1;


 deptid | empid | empname

--------+-------+---------

      1 | 10001 |  홍길동

      1 | 10002 |  박문수

      1 | 10004 |  변학도

      2 | 10003 |  이몽룡

      3 | 10005 |  성춘향

      3 | 10006 |  갑돌이


(6 rows)


cqlsh:testdb> select * from employees2;


 empid | deptid | empname

-------+--------+---------

 10001 |      1 |  홍길동

 10002 |      1 |  박문수

 10003 |      2 |  이몽룡

 10006 |      3 |  갑돌이

 10004 |      1 |  변학도

 10005 |      3 |  성춘향


(6 rows)


cqlsh:testdb> 

============================================================

[cas@s1 cassandra]$ ./bin/cassandra-cli

Connected to: "Test Cluster" on 127.0.0.1/9160

Welcome to Cassandra CLI version 2.0.13


The CLI is deprecated and will be removed in Cassandra 3.0.  Consider migrating to cqlsh.

CQL is fully backwards compatible with Thrift data; see http://www.datastax.com/dev/blog/thrift-to-cql3


Type 'help;' or '?' for help.

Type 'quit;' or 'exit;' to quit.


[default@unknown] use testdb;

Authenticated to keyspace: testdb

[default@testdb] list employees1;

Using default limit of 100

Using default cell limit of 100

-------------------

RowKey: 1

=> (name=10001:, value=, timestamp=1485157888772000)

=> (name=10001:empname, value=ed998deab8b8eb8f99, timestamp=1485157888772000)

=> (name=10002:, value=, timestamp=1485157888782000)

=> (name=10002:empname, value=ebb095ebacb8ec8898, timestamp=1485157888782000)

=> (name=10004:, value=, timestamp=1485157888812000)

=> (name=10004:empname, value=ebb380ed9599eb8f84, timestamp=1485157888812000)

-------------------

RowKey: 2

=> (name=10003:, value=, timestamp=1485157888794000)

=> (name=10003:empname, value=ec9db4ebaabdeba3a1, timestamp=1485157888794000)

-------------------

RowKey: 3

=> (name=10005:, value=, timestamp=1485157888820000)

=> (name=10005:empname, value=ec84b1ecb698ed96a5, timestamp=1485157888820000)

=> (name=10006:, value=, timestamp=1485157888825000)

=> (name=10006:empname, value=eab091eb8f8cec9db4, timestamp=1485157888825000)


3 Rows Returned.

Elapsed time: 86 msec(s).


employees1:

Row key: deptid

Columns: empid

→ cells are sorted by empid within each row
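The layout the CLI shows can be modeled in a few lines (a sketch; the names are placeholders): the partition key (deptid) picks the internal row, and the clustering column (empid) sorts the cells inside it.

```python
from collections import defaultdict

# (empid, deptid, empname) rows as inserted above; names are placeholders
rows = [(10001, 1, 'A'), (10002, 1, 'B'), (10003, 2, 'C'),
        (10004, 1, 'D'), (10005, 3, 'E'), (10006, 3, 'F')]

partitions = defaultdict(dict)
for empid, deptid, name in rows:
    partitions[deptid][empid] = name  # one cell per clustering value

# within each internal row, cells are kept sorted by the clustering key
layout = {d: sorted(cells) for d, cells in partitions.items()}
print(layout)
```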





-------------------

cqlsh:testdb> CREATE TABLE employees3 (

          ... empid int, deptid int, empname text,

          ... email text, tel text,

          ... PRIMARY KEY ((deptid, empid), empname)   

          ... );

cqlsh:testdb> INSERT INTO employees3 (empid, deptid, empname, email, tel) VALUES (10001, 1, '홍길동', 'gdhong@opensg.net', '010-222-3333');

cqlsh:testdb> INSERT INTO employees3 (empid, deptid, empname, email, tel) VALUES (10002, 1, '박문수', 'mspark@opensg.net','010-777-7778');

cqlsh:testdb> 



[default@testdb] list employees3;

Using default limit of 100

Using default cell limit of 100

-------------------

RowKey: 1:10002

=> (name=박문수:, value=, timestamp=1485159139959000)

=> (name=박문수:email, value=6d737061726b406f70656e73672e6e6574, timestamp=1485159139959000)

=> (name=박문수:tel, value=3031302d3737372d37373738, timestamp=1485159139959000)

-------------------

RowKey: 1:10001

=> (name=홍길동:, value=, timestamp=1485159139953000)

=> (name=홍길동:email, value=6764686f6e67406f70656e73672e6e6574, timestamp=1485159139953000)

=> (name=홍길동:tel, value=3031302d3232322d33333333, timestamp=1485159139953000)


2 Rows Returned.

Elapsed time: 77 msec(s).




In PRIMARY KEY ((deptid, empid), empname),

deptid and empid together form the composite (partition) key.
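As the CLI output above shows (RowKey: 1:10002), the two partition-key columns are combined into one internal row key; a minimal sketch of that combination:

```python
def partition_key(deptid, empid):
    # the composite partition key (deptid, empid) appears in the CLI
    # joined as "deptid:empid"
    return f"{deptid}:{empid}"

print(partition_key(1, 10002))
```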




SET




cqlsh:testdb> ALTER TABLE users ADD phones set<text>;

cqlsh:testdb> UPDATE users SET phones = phones + { '010-1212-3232' } WHERE userid='gdhong';

cqlsh:testdb> UPDATE users SET phones = phones + { '02-3429-5211' } WHERE userid='gdhong';

Request did not complete within rpc_timeout.

cqlsh:testdb> UPDATE users SET phones = phones + { '02-3429-5211' } WHERE userid='gdhong';

cqlsh:testdb> 

cqlsh:testdb> 

cqlsh:testdb> SELECT userid, phones FROM users;


 userid | phones

--------+-----------------------------------

 gdhong | {'010-1212-3232', '02-3429-5211'}


(1 rows)
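Note that retrying the update after the rpc_timeout was harmless: a CQL set, like a Python set, simply ignores a duplicate element.

```python
phones = set()
phones |= {'010-1212-3232'}
phones |= {'02-3429-5211'}
phones |= {'02-3429-5211'}  # the retry after rpc_timeout; duplicates are ignored
print(sorted(phones))
```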





LIST


cqlsh:testdb> 

cqlsh:testdb> 

cqlsh:testdb> ALTER TABLE users ADD visit_places list<text>;

cqlsh:testdb> UPDATE users SET visit_places = ['스타벅스', '내사무실'] 

          ...    WHERE userid='gdhong';

cqlsh:testdb> SELECT userid, visit_places FROM users;


 userid | visit_places

--------+--------------------------

 gdhong | ['스타벅스', '내사무실']


(1 rows)


cqlsh:testdb> UPDATE users SET visit_places = visit_places + ['잠실야구장'] 

          ...    WHERE userid='gdhong';

cqlsh:testdb> UPDATE users SET visit_places = ['인사동 골목'] + visit_places 

          ...    WHERE userid='gdhong';

cqlsh:testdb> SELECT userid, visit_places FROM users;


 userid | visit_places

--------+-------------------------------------------------------

 gdhong | ['인사동 골목', '스타벅스', '내사무실', '잠실야구장']


(1 rows)
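The two UPDATE forms map directly onto list concatenation; a sketch with English placeholder values:

```python
visit_places = ['Starbucks', 'my office']            # initial UPDATE
visit_places = visit_places + ['Jamsil ballpark']    # append form
visit_places = ['Insadong alley'] + visit_places     # prepend form
print(visit_places)
```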




MAP

cqlsh:testdb> drop table users;

cqlsh:testdb> CREATE TABLE users (

          ...      userid varchar,

          ...      username varchar,

          ...      password varchar,

          ...      email varchar,

          ...      PRIMARY KEY (userid)

          ...    );

cqlsh:testdb> 

cqlsh:testdb> ALTER TABLE users ADD visit_places map<timestamp, text>;

cqlsh:testdb> INSERT INTO users (userid, username, password, email) VALUES ('gdhong', '홍길동', '1234', 'gdhong@opensg.net');

cqlsh:testdb> UPDATE users SET visit_places={ '2013-08-31 12:12:46':'스타벅스' } WHERE userid='gdhong';

cqlsh:testdb> UPDATE users SET visit_places['2013-09-02 14:15:29'] = '야구장'  WHERE userid='gdhong';

cqlsh:testdb> SELECT userid, visit_places FROM users;


 userid | visit_places

--------+--------------------------------------------------------------------------------

 gdhong | {'2013-08-31 12:12:46+0900': '스타벅스', '2013-09-02 14:15:29+0900': '야구장'}


(1 rows)
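The map column behaves like a dict keyed by timestamp, and both UPDATE forms are upserts; a sketch with placeholder values:

```python
visit_places = {'2013-08-31 12:12:46': 'Starbucks'}   # whole-map assignment
visit_places['2013-09-02 14:15:29'] = 'ballpark'      # single-key upsert
print(sorted(visit_places))
```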



---


[default@testdb] list users;

Using default limit of 100

Using default cell limit of 100

-------------------

RowKey: gdhong

=> (name=, value=, timestamp=1485160082176000)

=> (name=email, value=6764686f6e67406f70656e73672e6e6574, timestamp=1485160082176000)

=> (name=password, value=31323334, timestamp=1485160082176000)

=> (name=username, value=ed998deab8b8eb8f99, timestamp=1485160082176000)

index (1) must be less than size (1)

[default@testdb] 




GUI

C:\Users\student\Downloads\NOSQL데이터모델링\설치프로그램\클라이언트\cassandra

DevCenter-1.3.1-win-x86_64.zip








HBASE

Startup order: Hadoop > Zookeeper > HBase



[hadoop@s1 ~]$ ./start-hb.sh

namenode running as process 8679. Stop it first.

s2: datanode running as process 7597. Stop it first.

s4: ssh: connect to host s4 port 22: No route to host

s3: ssh: connect to host s3 port 22: No route to host

s2: secondarynamenode running as process 7704. Stop it first.

jobtracker running as process 8860. Stop it first.

s2: tasktracker running as process 7794. Stop it first.

s4: ssh: connect to host s4 port 22: No route to host

s3: ssh: connect to host s3 port 22: No route to host

JMX enabled by default

Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

JMX enabled by default

Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

ssh: connect to host s3 port 22: No route to host

ssh: connect to host s4 port 22: No route to host

starting master, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-master-s1.test.com.out

s2: starting regionserver, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-s2.test.com.out

s3: ssh: connect to host s3 port 22: No route to host

s4: ssh: connect to host s4 port 22: No route to host

[hadoop@s1 ~]$

[hadoop@s1 ~]$

[hadoop@s1 ~]$ jps

8860 JobTracker

9380 Jps

8679 NameNode

9287 HMaster

9175 QuorumPeerMain

[hadoop@s1 ~]$

[hadoop@s1 ~]$

[hadoop@s1 ~]$ ssh s2

[hadoop@s2 ~]$ jps

8056 HRegionServer

7997 QuorumPeerMain

8145 Jps

7704 SecondaryNameNode
7794 TaskTracker
7597 DataNode
[hadoop@s2 ~]$ exit
logout
Connection to s2 closed.
[hadoop@s1 ~]$ ssh s3
ssh: connect to host s3 port 22: No route to host









- Partial scans are possible

  : scan the rowkeys that start with abc

    (like 'abc%' in SQL terms)

  : not possible in Cassandra
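Because HBase keeps rowkeys sorted, a prefix scan is just a slice of the sorted key space; a sketch using bisect, where the stop row is the prefix with its last byte bumped:

```python
import bisect

rowkeys = sorted(['abb9', 'abc1', 'abc2', 'abd1'])

start = 'abc'
stop = start[:-1] + chr(ord(start[-1]) + 1)  # 'abd', the first key past the prefix

lo = bisect.bisect_left(rowkeys, start)
hi = bisect.bisect_left(rowkeys, stop)
print(rowkeys[lo:hi])  # the rows matching like 'abc%'
```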


Filtering features are explained here:

https://www.cloudera.com/documentation/enterprise/5-5-x/topics/admin_hbase_filtering.html


Query conditions must always go through the rowkey (so design the rowkey carefully)

- even if that means a composite key

create 'orders', 'client', 'product'

put 'orders', 'joe_2013-01-13', 'client:name', 'Joe'



GUI : h-rider

Only supports older HBase versions

(works with version 0.94)

https://phoenix.apache.org/

http://apache.tt.co.kr/phoenix/




To set up a secondary index, the following must be added:

the WAL (journal log) must be editable by the index codec

vi conf/hbase-site.xml

<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>


scp conf/hbase-site.xml s2:~/hbase/conf

scp conf/hbase-site.xml s3:~/hbase/conf

scp conf/hbase-site.xml s4:~/hbase/conf






[hadoop@s1 다운로드]$ tar -xvf phoenix-3.3.1-bin.tar.gz 

phoenix-3.3.1-bin/

phoenix-3.3.1-bin/hadoop1/

phoenix-3.3.1-bin/hadoop1/phoenix-core-3.3.1-tests-hadoop1.jar

phoenix-3.3.1-bin/hadoop1/phoenix-flume-3.3.1-tests-hadoop1.jar

phoenix-3.3.1-bin/hadoop1/bin/

phoenix-3.3.1-bin/hadoop1/bin/log4j.properties

phoenix-3.3.1-bin/hadoop1/bin/performance.py

phoenix-3.3.1-bin/hadoop1/bin/psql.py

phoenix-3.3.1-bin/hadoop1/bin/phoenix_sandbox.py

phoenix-3.3.1-bin/hadoop1/bin/sqlline.py

phoenix-3.3.1-bin/hadoop1/bin/end2endTest.py

phoenix-3.3.1-bin/hadoop1/bin/readme.txt

phoenix-3.3.1-bin/hadoop1/bin/sandbox-log4j.properties

phoenix-3.3.1-bin/hadoop1/bin/hbase-site.xml

phoenix-3.3.1-bin/hadoop1/bin/phoenix_utils.py

phoenix-3.3.1-bin/hadoop1/phoenix-pig-3.3.1-tests-hadoop1.jar

phoenix-3.3.1-bin/hadoop1/phoenix-3.3.1-client-hadoop1.jar

phoenix-3.3.1-bin/hadoop1/phoenix-flume-3.3.1-hadoop1.jar

phoenix-3.3.1-bin/hadoop1/phoenix-pig-3.3.1-hadoop1.jar

phoenix-3.3.1-bin/CHANGES

phoenix-3.3.1-bin/common/

phoenix-3.3.1-bin/common/phoenix-3.3.1-client-minimal.jar

phoenix-3.3.1-bin/common/phoenix-core-3.3.1.jar

phoenix-3.3.1-bin/common/phoenix-3.3.1-client-without-hbase.jar

phoenix-3.3.1-bin/hadoop2/

phoenix-3.3.1-bin/hadoop2/phoenix-pig-3.3.1-hadoop2.jar

phoenix-3.3.1-bin/hadoop2/phoenix-pig-3.3.1-tests-hadoop2.jar

phoenix-3.3.1-bin/hadoop2/bin/

phoenix-3.3.1-bin/hadoop2/bin/log4j.properties

phoenix-3.3.1-bin/hadoop2/bin/performance.py

phoenix-3.3.1-bin/hadoop2/bin/psql.py

phoenix-3.3.1-bin/hadoop2/bin/phoenix_sandbox.py

phoenix-3.3.1-bin/hadoop2/bin/sqlline.py

phoenix-3.3.1-bin/hadoop2/bin/end2endTest.py

phoenix-3.3.1-bin/hadoop2/bin/readme.txt

phoenix-3.3.1-bin/hadoop2/bin/sandbox-log4j.properties

phoenix-3.3.1-bin/hadoop2/bin/hbase-site.xml

phoenix-3.3.1-bin/hadoop2/bin/phoenix_utils.py

phoenix-3.3.1-bin/hadoop2/phoenix-core-3.3.1-tests-hadoop2.jar

phoenix-3.3.1-bin/hadoop2/phoenix-flume-3.3.1-hadoop2.jar

phoenix-3.3.1-bin/hadoop2/phoenix-flume-3.3.1-tests-hadoop2.jar

phoenix-3.3.1-bin/hadoop2/phoenix-3.3.1-client-hadoop2.jar

phoenix-3.3.1-bin/README

phoenix-3.3.1-bin/LICENSE

phoenix-3.3.1-bin/NOTICE

phoenix-3.3.1-bin/examples/

phoenix-3.3.1-bin/examples/pig/

phoenix-3.3.1-bin/examples/pig/test.pig

phoenix-3.3.1-bin/examples/pig/testdata

phoenix-3.3.1-bin/examples/WEB_STAT.csv

phoenix-3.3.1-bin/examples/STOCK_SYMBOL.sql

phoenix-3.3.1-bin/examples/WEB_STAT_QUERIES.sql

phoenix-3.3.1-bin/examples/STOCK_SYMBOL.csv

phoenix-3.3.1-bin/examples/WEB_STAT.sql

[hadoop@s1 다운로드]$ ll

합계 168236

-rw-rw-r--. 1 hadoop hadoop     1576 2017-01-23 17:53 aa

-rw-rw-r--. 1 hadoop hadoop 38096663 2015-04-19 20:30 hadoop-1.2.1-bin.tar.gz

-rw-rw-r--. 1 hadoop hadoop 59364077 2015-04-19 20:30 hbase-0.94.27.tar.gz

drwxr-xr-x. 6 hadoop hadoop     4096 2015-04-04 06:59 phoenix-3.3.1-bin

-rw-rw-r--. 1 hadoop hadoop 57087019 2015-04-30 23:22 phoenix-3.3.1-bin.tar.gz

drwxrwxr-x. 2 hadoop hadoop     4096 2017-01-24 10:07 temp

-rw-rw-r--. 1 hadoop hadoop 17699306 2015-04-19 20:34 zookeeper-3.4.6.tar.gz

[hadoop@s1 다운로드]$ mv phoenix-3.3.1-bin ~/phoenix

[hadoop@s1 다운로드]$ cd

[hadoop@s1 ~]$ ll

합계 88

lrwxrwxrwx.  1 hadoop hadoop    13 2015-04-19 20:35 hadoop -> hadoop-1.2.1/

drwxr-xr-x. 16 hadoop hadoop  4096 2015-04-19 21:19 hadoop-1.2.1

lrwxrwxrwx.  1 hadoop hadoop    13 2015-04-19 21:06 hbase -> hbase-0.94.27

drwxr-xr-x. 11 hadoop hadoop  4096 2015-04-19 21:28 hbase-0.94.27

drwxr-xr-x.  6 hadoop hadoop  4096 2015-04-04 06:59 phoenix

-rwxr--r--.  1 hadoop hadoop   238 2015-04-19 22:30 start-hb.sh

-rwxr--r--.  1 hadoop hadoop   231 2015-04-19 22:29 stop-hb.sh

lrwxrwxrwx.  1 hadoop hadoop    16 2015-04-19 21:10 zookeeper -> zookeeper-3.4.6/

drwxr-xr-x. 12 hadoop hadoop  4096 2015-04-19 21:18 zookeeper-3.4.6

-rw-rw-r--.  1 hadoop hadoop 30623 2017-01-24 09:44 zookeeper.out

drwxr-xr-x.  2 hadoop hadoop  4096 2015-03-10 20:54 공개

drwxr-xr-x.  3 hadoop hadoop  4096 2017-01-24 10:25 다운로드

drwxr-xr-x.  2 hadoop hadoop  4096 2015-03-10 20:54 문서

drwxr-xr-x.  2 hadoop hadoop  4096 2015-03-10 20:55 바탕화면

drwxr-xr-x.  2 hadoop hadoop  4096 2015-03-10 20:54 비디오

drwxr-xr-x.  2 hadoop hadoop  4096 2015-03-10 20:54 사진

drwxr-xr-x.  2 hadoop hadoop  4096 2015-03-10 20:54 음악

drwxr-xr-x.  2 hadoop hadoop  4096 2015-03-10 20:54 템플릿

[hadoop@s1 ~]$ cd phoenix/

[hadoop@s1 phoenix]$ ll

합계 76

-rw-r--r--. 1 hadoop hadoop 35004 2015-04-04 06:58 CHANGES

-rw-r--r--. 1 hadoop hadoop 12316 2015-04-04 06:58 LICENSE

-rw-r--r--. 1 hadoop hadoop  2161 2015-04-04 06:58 NOTICE

-rw-r--r--. 1 hadoop hadoop   794 2015-04-04 06:58 README

drwxr-xr-x. 2 hadoop hadoop  4096 2015-04-04 06:58 common

drwxr-xr-x. 3 hadoop hadoop  4096 2015-04-04 06:58 examples

drwxr-xr-x. 3 hadoop hadoop  4096 2015-04-04 06:58 hadoop1

drwxr-xr-x. 3 hadoop hadoop  4096 2015-04-04 06:59 hadoop2

[hadoop@s1 phoenix]$ cd hadoop1

[hadoop@s1 hadoop1]$ cd ..

[hadoop@s1 phoenix]$ ll

합계 76

-rw-r--r--. 1 hadoop hadoop 35004 2015-04-04 06:58 CHANGES

-rw-r--r--. 1 hadoop hadoop 12316 2015-04-04 06:58 LICENSE

-rw-r--r--. 1 hadoop hadoop  2161 2015-04-04 06:58 NOTICE

-rw-r--r--. 1 hadoop hadoop   794 2015-04-04 06:58 README

drwxr-xr-x. 2 hadoop hadoop  4096 2015-04-04 06:58 common

drwxr-xr-x. 3 hadoop hadoop  4096 2015-04-04 06:58 examples

drwxr-xr-x. 3 hadoop hadoop  4096 2015-04-04 06:58 hadoop1

drwxr-xr-x. 3 hadoop hadoop  4096 2015-04-04 06:59 hadoop2

[hadoop@s1 phoenix]$ cp common/phoenix-core-3.3.1.jar ~/hbase/lib


[hadoop@s1 phoenix]$ scp ~/hbase/lib/* s2:~/hbase/lib/

[hadoop@s1 phoenix]$ scp ~/hbase/lib/* s3:~/hbase/lib/

[hadoop@s1 phoenix]$ scp ~/hbase/lib/* s4:~/hbase/lib/


phoenix-core-3.3.1.jar : the JDBC driver


[hadoop@s1 ~]$ ./stop-hb.sh 

[hadoop@s1 ~]$ ./start-hb.sh 



[hadoop@s1 bin]$ pwd

/home/hadoop/phoenix/hadoop1/bin



./psql.py -t WEB_STAT s1 ../../examples/WEB_STAT.sql

./psql.py -t WEB_STAT s1 ../../examples/WEB_STAT.csv


./psql.py -t STOCK_SYMBOL s1 ../../examples/STOCK_SYMBOL.sql

./psql.py -t STOCK_SYMBOL s1 ../../examples/STOCK_SYMBOL.csv



./sqlline.py s1



HBase itself has no secondary indexes, but Phoenix supports them:

http://phoenix.apache.org/secondary_indexing.html

CREATE INDEX idx_web_stat ON WEB_STAT (domain);







